Commit a8f8f1eb authored by Daniel Scheffler

Added usage instructions.


Signed-off-by: Daniel Scheffler <danschef@gfz-potsdam.de>
parent 9f58c392
AROSICS supports local and global co-registration.
* Local co-registration:
  A dense grid of tie points is automatically computed and the tie points are subsequently validated using a
multistage workflow. Only those tie points not marked as false-positives are used to compute the parameters
of an affine transformation. Warping of the target image is done using an appropriate resampling technique
(cubic by default).
* Global co-registration:
Only a global X/Y translation is computed within a small subset of the input images (window position is adjustable).
This allows very fast co-registration but only corrects for translational (global) X/Y shifts. The calculated
subpixel-shifts are (by default) applied to the geocoding information of the output image. No spatial resampling
is done automatically as long as both input images have the same projection. If you need the output image to be
aligned to the reference image coordinate grid (by using an appropriate resampling algorithm), use the '-align_grids'
option.
AROSICS is designed to robustly handle the typical difficulties of multi-sensoral/multi-temporal images. Clouds are automatically handled by the implemented outlier detection algorithms. The user may provide user-defined masks to exclude certain image areas from tie point creation. The image overlap area is automatically calculated. Thereby, no-data regions within the images are automatically respected. Providing the map coordinates of the actual data corners lets you save some calculation time, because in this case the automatic algorithm can be skipped. The no-data value of each image is automatically derived from the image corners. The verbose program mode gives additional output about the interim results, shows some figures and writes the used footprint and overlap polygons to disk. Note that the figures may need to be closed manually in order to continue the processing (depending on your Python configuration).
Installation
------------
AROSICS depends on some open source packages which are usually installed without problems by
the automatic install routine. However, for some projects, we strongly recommend resolving the
dependencies before the automatic installer is run. This approach avoids problems with
conflicting versions of the same software.
Using [conda](https://conda.io/docs/), the recommended approach is:
```bash
# (exact command elided in this diff view; assuming the package is
#  available on conda-forge, the installation would look like this)
conda install -c conda-forge arosics
```
AROSICS has been tested with Python 3.4+ and Python 2.7. It should be fully compatible with all
Python versions above 2.7.
# Modules
## CoReg
This module calculates spatial shifts and performs a global correction
(based on a single matching window).
### Python Interface
## CoReg_local
This module has been designed to detect and correct geometric shifts present locally in your
input image. The class COREG_LOCAL calculates a grid of spatial shifts with points spread over
the whole overlap area of the input images. Based on this grid a correction of local shifts
can be performed.
### Python interface
At the command line, arosics provides the **arosics_cli.py** command:
.. argparse::
    :filename: ./../bin/arosics_cli.py
    :func: get_arosics_argparser
    :prog: arosics_cli.py
.. note::

    The verbose program mode gives additional output about the interim results,
    shows some figures and writes the used footprint and overlap polygons to disk.
    The figures may need to be closed manually in order to continue the processing
    (depending on your Python configuration).
extensions = [
'sphinx.ext.viewcode',
'sphinx.ext.todo',
'sphinxarg.ext',
'sphinx_autodoc_typehints',
'sphinx.ext.intersphinx'
]
# Add any paths that contain templates here, relative to this directory.
autoclass_content = 'both'
todo_include_todos = True
# Apply custom sphinx styles (e.g., increase content width of generated docs)
def setup(app):
app.add_stylesheet('custom.css')
# Add mappings for the intersphinx extension (allows linking to the API reference of other Sphinx documentations)
intersphinx_mapping = {
'geoarray': ('http://danschef.gitext.gfz-potsdam.de/geoarray/doc/', None),
'python': ('http://docs.python.org/3', None),
}
# -- Options for HTML output -------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
This is the preferred method to install arosics, as it will always install the most recent stable release.
.. note::
    AROSICS has been tested with Python 3.4+ and Python 2.7. It should be fully compatible with all Python versions
    above 2.7. However, support for Python 2.7 will be dropped in the future.
If you don't have `pip`_ installed, this `Python installation guide`_ can guide
you through the process.
.. _usage:
##################
Usage instructions
##################
In this section you can find some advice on how to use AROSICS for the detection and correction
of misregistrations locally or globally present in your input data.
.. toctree::
    :maxdepth: 1
    :hidden:
    :caption: the command line interface

    usage/cli.rst

.. todo::

    This section is not yet complete but will be continuously updated in the future.
    If you miss topics, feel free to suggest new entries here!
.. toctree::
:maxdepth: 2
:name: the Python API interface
usage/api.rst
usage/input_data_requirements.rst
usage/global_coreg.rst
usage/local_coreg.rst
.. seealso::

    For details regarding the implemented algorithm, example use cases, quality assessment and benchmarks,
    refer to the (open-access) paper about AROSICS:
    `Scheffler et al. 2017 <http://www.mdpi.com/2072-4292/9/7/676>`__
Global image co-registration
****************************
Use the class :class:`arosics.COREG` to detect and correct global spatial shifts between a reference and a target image.
It computes a global X-/Y-shift based on a single matching window with customizable position.
Using the Python API
--------------------
calculate spatial shifts - with input data on disk
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: python
from arosics import COREG
im_reference = '/path/to/your/ref_image.bsq'
im_target = '/path/to/your/tgt_image.bsq'
CR = COREG(im_reference, im_target, wp=(354223, 5805559), ws=(256,256))
CR.calculate_spatial_shifts()
.. code-block:: python
Calculating actual data corner coordinates for reference image...
Corner coordinates of reference image:
[[319090.0, 5790510.0], [351800.0, 5899940.0], [409790.0, 5900040.0], [409790.0, 5790250.0], [319090.0, 5790250.0]]
Calculating actual data corner coordinates for image to be shifted...
Corner coordinates of image to be shifted:
[[319460.0, 5790510.0], [352270.0, 5900040.0], [409790.0, 5900040.0], [409790.0, 5790250.0], [319460.0, 5790250.0]]
Matching window position (X,Y): 354223/5805559
Detected integer shifts (X/Y): 0/-2
Detected subpixel shifts (X/Y): 0.357885632465/0.433837319984
Calculated total shifts in fft pixel units (X/Y): 0.357885632465/-1.56616268002
Calculated total shifts in reference pixel units (X/Y): 0.357885632465/-1.56616268002
Calculated total shifts in target pixel units (X/Y): 0.357885632465/-1.56616268002
Calculated map shifts (X,Y): 3.578856324660592 15.661626799963415
Original map info: ['UTM', 1, 1, 300000.0, 5900040.0, 10.0, 10.0, 33, 'North', 'WGS-84']
Updated map info: ['UTM', 1, 1, '300003.57885632466', '5900055.6616268', 10.0, 10.0, 33, 'North', 'WGS-84']
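As a sanity check, the logged map shifts can be reproduced from the detected total pixel shifts and the 10 m pixel size of the geotransform. The following plain-Python sketch only recomputes the numbers shown in the log above; it does not use the AROSICS API:

```python
# Reproduce the logged map shifts from the detected total pixel shifts.
# All numbers are taken from the log output above; pixel sizes (10 m,
# with a negative y resolution for a north-up image) come from the
# original map info / geotransform.
x_shift_px = 0.357885632465    # total shift in pixel units (X)
y_shift_px = -1.56616268002    # total shift in pixel units (Y)
x_gsd, y_gsd = 10.0, -10.0     # pixel sizes in map units

x_shift_map = x_shift_px * x_gsd   # ~3.5789 m  (logged: 3.578856324660592)
y_shift_map = y_shift_px * y_gsd   # ~15.6616 m (logged: 15.661626799963415)

# The 'Updated map info' adds these map shifts to the original
# upper-left corner coordinates (300000.0 / 5900040.0):
ulx_new = 300000.0 + x_shift_map   # ~300003.5789
uly_new = 5900040.0 + y_shift_map  # ~5900055.66

print(x_shift_map, y_shift_map, ulx_new, uly_new)
```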
calculate spatial shifts - without any disk access
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
First, create some example input images for AROSICS in-memory
(we use instances of :class:`GeoArray<geoarray.GeoArray>` for that):
.. code-block:: python
from geoarray import GeoArray
from arosics import COREG
im_reference = '/path/to/your/ref_image.bsq'
im_target = '/path/to/your/tgt_image.bsq'
# get a sample numpy array with corresponding geoinformation as reference image
geoArr = GeoArray(im_reference)
ref_ndarray = geoArr[:] # numpy.ndarray with shape (10980, 10980)
ref_gt = geoArr.geotransform # GDAL geotransform: (300000.0, 10.0, 0.0, 5900040.0, 0.0, -10.0)
ref_prj = geoArr.projection # projection as WKT string ('PROJCS["WGS 84 / UTM zone 33N....')
# get a sample numpy array with corresponding geoinformation as target image
geoArr = GeoArray(im_target)
tgt_ndarray = geoArr[:] # numpy.ndarray with shape (10980, 10980)
tgt_gt = geoArr.geotransform # GDAL geotransform: (300000.0, 10.0, 0.0, 5900040.0, 0.0, -10.0)
tgt_prj = geoArr.projection # projection as WKT string ('PROJCS["WGS 84 / UTM zone 33N....')
# create in-memory instances of GeoArray from the numpy array data, the GDAL geotransform tuple and the WKT
# projection string
geoArr_reference = GeoArray(ref_ndarray, ref_gt, ref_prj)
geoArr_target = GeoArray(tgt_ndarray, tgt_gt, tgt_prj)
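For orientation, the GDAL geotransform tuples shown in the comments above define an affine mapping from pixel (column/row) indices to map coordinates. A minimal sketch of that mapping in plain Python (independent of GeoArray; the helper name `pixel_to_map` is made up for this illustration):

```python
# Affine pixel-to-map mapping defined by a GDAL geotransform tuple:
# gt = (ulx, x_res, x_rot, uly, y_rot, y_res)
def pixel_to_map(gt, col, row):
    """Return the map coordinates of the upper-left corner of pixel (col, row)."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

gt = (300000.0, 10.0, 0.0, 5900040.0, 0.0, -10.0)  # geotransform from the example above

print(pixel_to_map(gt, 0, 0))          # upper-left image corner -> (300000.0, 5900040.0)
print(pixel_to_map(gt, 10980, 10980))  # lower-right corner of the 10980 x 10980 array -> (409800.0, 5790240.0)
```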
Now pass these in-memory :class:`GeoArray<geoarray.GeoArray>` instances to :class:`arosics.COREG`
and calculate spatial shifts:
.. code-block:: python
CR = COREG(geoArr_reference, geoArr_target, wp=(354223, 5805559), ws=(256,256))
CR.calculate_spatial_shifts()
.. code-block:: python
Calculating actual data corner coordinates for reference image...
Corner coordinates of reference image:
[[300000.0, 5848140.0], [409790.0, 5848140.0], [409790.0, 5790250.0], [300000.0, 5790250.0]]
Calculating actual data corner coordinates for image to be shifted...
Corner coordinates of image to be shifted:
[[300000.0, 5847770.0], [409790.0, 5847770.0], [409790.0, 5790250.0], [300000.0, 5790250.0]]
Matching window position (X,Y): 354223/5805559
Detected integer shifts (X/Y): 0/-2
Detected subpixel shifts (X/Y): 0.357885632465/0.433837319984
Calculated total shifts in fft pixel units (X/Y): 0.357885632465/-1.56616268002
Calculated total shifts in reference pixel units (X/Y): 0.357885632465/-1.56616268002
Calculated total shifts in target pixel units (X/Y): 0.357885632465/-1.56616268002
Calculated map shifts (X,Y): 3.578856324660592/15.661626799963415
Calculated absolute shift vector length in map units: 16.065328089207995
Calculated angle of shift vector in degrees from North: 192.8717191970359
Original map info: ['UTM', 1, 1, 300000.0, 5900040.0, 10.0, 10.0, 33, 'North', 'WGS-84']
Updated map info: ['UTM', 1, 1, '300003.57885632466', '5900055.6616268', 10.0, 10.0, 33, 'North', 'WGS-84']
'success'
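The logged shift vector length and angle can likewise be reproduced from the calculated map shifts. Note that the bearing convention below (degrees from North, via `atan2` on the negated shifts) is an assumption chosen because it reproduces the logged value; plain Python, no AROSICS API involved:

```python
import math

# Map shifts taken from the log output above (map units, i.e. meters)
x_shift_map = 3.578856324660592
y_shift_map = 15.661626799963415

# Absolute length of the shift vector (Pythagoras)
vec_length = math.hypot(x_shift_map, y_shift_map)  # ~16.0653

# Angle in degrees from North; this atan2-based bearing convention is an
# assumption that reproduces the logged ~192.87 degrees
vec_angle = math.degrees(math.atan2(-x_shift_map, -y_shift_map)) % 360

print(vec_length, vec_angle)
```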
correct shifts
~~~~~~~~~~~~~~
:meth:`CR.correct_shifts() <arosics.COREG.correct_shifts>` returns an
:class:`OrderedDict<collections.OrderedDict>` containing the co-registered
numpy array and its corresponding geoinformation.
.. code-block:: python
CR.correct_shifts()
.. code-block:: python
OrderedDict([('band', None),
('is shifted', True),
('is resampled', False),
('updated map info',
['UTM',
1,
1,
300003.57885632466,
5900025.6616268,
10.0,
10.0,
33,
'North',
'WGS-84']),
('updated geotransform',
[300000.0, 10.0, 0.0, 5900040.0, 0.0, -10.0]),
('updated projection',
'PROJCS["WGS 84 / UTM zone 33N",GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],AXIS["Latitude",NORTH],AXIS["Longitude",EAST],AUTHORITY["EPSG","4326"]],PROJECTION["Transverse_Mercator"],PARAMETER["latitude_of_origin",0],PARAMETER["central_meridian",15],PARAMETER["scale_factor",0.9996],PARAMETER["false_easting",500000],PARAMETER["false_northing",0],UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH],AUTHORITY["EPSG","32633"]]'),
('arr_shifted', array([[ 0, 0, 0, ..., 953, 972, 1044],
[ 0, 0, 0, ..., 1001, 973, 1019],
[ 0, 0, 0, ..., 953, 985, 1020],
...,
[ 0, 0, 0, ..., 755, 763, 773],
[ 0, 0, 0, ..., 760, 763, 749],
[9999, 9999, 9999, ..., 9999, 9999, 9999]], dtype=uint16)),
('GeoArray_shifted',
<geoarray.GeoArray at 0x7f6c5a1cabe0>)])
To write the co-registered image to disk, the :class:`arosics.COREG` class needs to be instantiated with a file path
given to the keyword 'path_out'. The output raster format can be any format supported by GDAL.
Find a list of supported formats here: http://www.gdal.org/formats_list.html
apply detected shifts to multiple images
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sometimes it can be useful to apply the same shifts to multiple images, e.g., to different mask images derived from
the same satellite dataset. For this purpose you can calculate spatial shifts using the :class:`arosics.COREG` class
(see above) and then apply the calculated shifts to multiple images using the :class:`arosics.DESHIFTER` class.
Take a look at the keyword arguments of the :class:`arosics.DESHIFTER` class when you need further adjustments
(e.g. output paths for the corrected images; aligned output grid, ...).
.. code-block:: python
from arosics import DESHIFTER
DESHIFTER(im_target1, CR.coreg_info).correct_shifts()
DESHIFTER(im_target2, CR.coreg_info).correct_shifts()
.. code-block:: python
OrderedDict([('band', None),
('is shifted', True),
('is resampled', False),
('updated map info',
['UTM',
1,
1,
300003.57885632466,
5900025.6616268,
10.0,
10.0,
33,
'North',
'WGS-84']),
('updated geotransform',
[300000.0, 10.0, 0.0, 5900040.0, 0.0, -10.0]),
('updated projection',
'PROJCS["WGS 84 / UTM zone 33N",GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],AXIS["Latitude",NORTH],AXIS["Longitude",EAST],AUTHORITY["EPSG","4326"]],PROJECTION["Transverse_Mercator"],PARAMETER["latitude_of_origin",0],PARAMETER["central_meridian",15],PARAMETER["scale_factor",0.9996],PARAMETER["false_easting",500000],PARAMETER["false_northing",0],UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH],AUTHORITY["EPSG","32633"]]'),
('arr_shifted', array([[ 0, 0, 0, ..., 953, 972, 1044],
[ 0, 0, 0, ..., 1001, 973, 1019],
[ 0, 0, 0, ..., 953, 985, 1020],
...,
[ 0, 0, 0, ..., 755, 763, 773],
[ 0, 0, 0, ..., 760, 763, 749],
[9999, 9999, 9999, ..., 9999, 9999, 9999]], dtype=uint16)),
('GeoArray_shifted',
<geoarray.GeoArray at 0x7f6c5a1caa58>)])
----
Using the Shell console
-----------------------
The help instructions of the console interface can be accessed like this:
.. code-block:: bash
python arosics_cli.py -h
Follow these instructions to run AROSICS from a shell console. For example, the simplest call for a global
co-registration would look like this:
.. code-block:: bash
python arosics_cli.py global /path/to/your/ref_image.bsq /path/to/your/tgt_image.bsq
Requirements for your input data
********************************
Compatible image formats
~~~~~~~~~~~~~~~~~~~~~~~~
The input images can have any GDAL compatible image format. You can find a list here:
http://www.gdal.org/formats_list.html
Geocoding
~~~~~~~~~
Your target image must be approximately geocoded to your reference image.
In case of ENVI files, this means they must have a 'map info' and a 'coordinate system string' as attributes of their
header file.
.. note::
    AROSICS also allows computing the misregistration between two input images without any geocoding. In this case,
    it is assumed that both images have the same spatial resolution and that their upper-left coordinates approximately
    represent the same map coordinate. The computed misregistration is then returned in image coordinate units.
Geographic overlap
~~~~~~~~~~~~~~~~~~
The input images must have a geographic overlap but clipping them to the same geographical extent is NOT necessary.
The image overlap area is automatically calculated. Thereby, no-data regions within the images are automatically
respected.
Spatial resolution
~~~~~~~~~~~~~~~~~~
The input images may have different spatial resolutions. Any needed resampling of the data is done automatically.
.. attention::
    Please try to avoid any spatial resampling of the input images before running AROSICS. It might affect
    the accuracy of the computed misregistration.
Orthorectified datasets
~~~~~~~~~~~~~~~~~~~~~~~
Please use ortho-rectified input data in order to minimize local shifts in the input images.
No-data values
~~~~~~~~~~~~~~
The no-data value of each image is automatically derived from the image corners. However, this may fail if the actual
no-data value is not present within a 3x3 matrix at the image corners. User-provided no-data values will speed up the
computation and avoid wrongly derived values.
Actual image corner coordinates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Providing the map coordinates of the actual image corners lets you save some computation time,
because in this case the implemented automatic algorithm can be skipped.
Image masks / areas to be excluded from tie-point creation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The user may provide user-defined masks to exclude certain image areas from tie point creation. This is useful for
example in case of cloud areas or moving objects. However, the outlier detection algorithms of AROSICS will filter out
tie points with large differences to the surrounding area.
Unequal sensors and image acquisition conditions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
AROSICS is designed to robustly handle the typical difficulties of multi-sensoral/multi-temporal images.