Commit 684850c0 authored by Daniel Scheffler

Further improved documentation.


Signed-off-by: Daniel Scheffler <danschef@gfz-potsdam.de>
parent 4afde6d5
Pipeline #15813 passed with stages in 10 minutes and 5 seconds
@@ -78,10 +78,10 @@ nosetests: clean-test ## Runs nosetests with coverage, xUnit and nose-html-output
 docs: ## generate Sphinx HTML documentation, including API docs
 	rm -f docs/gms_preprocessing.rst
 	rm -f docs/modules.rst
-	sphinx-apidoc -o docs/ gms_preprocessing
+	sphinx-apidoc -o docs/ gms_preprocessing --doc-project 'API Reference'
 	$(MAKE) -C docs clean
 	$(MAKE) -C docs html
-	#$(BROWSER) docs/_build/html/index.html
+	# $(BROWSER) docs/_build/html/index.html
 servedocs: docs ## compile the docs watching for changes
 	watchmedo shell-command -p '*.rst' -c '$(MAKE) -C docs html' -R -D .
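The ``--doc-project`` flag added above sets the project header of the generated API reference pages. As a hedged illustration (the Makefile recipe is authoritative; this is merely the equivalent call driven from Python via Sphinx's programmatic apidoc entry point, available in modern Sphinx versions):

.. code-block:: python

    # Equivalent of the Makefile recipe above, run from Python.
    # Sketch only: assumes the repository root as working directory.
    from sphinx.ext.apidoc import main

    # Mirrors: sphinx-apidoc -o docs/ gms_preprocessing --doc-project 'API Reference'
    main(['-o', 'docs/', 'gms_preprocessing', '--doc-project', 'API Reference'])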
...
-=====
+*****
 About
-=====
+*****
 The goal of the gms_preprocessing Python library is to provide a fully automatic
 pre-processing pipeline for spatial and spectral fusion (i.e., homogenization)
@@ -14,10 +14,10 @@ Landsat-5, Landsat-7, Landsat-8, Sentinel-2A and Sentinel-2B.
 Feature overview
-----------------
+================
 Level-1 processing:
-^^^^^^^^^^^^^^^^^^^
+-------------------
 * data import and metadata homogenization (compatibility: Landsat-5/7/8, Sentinel-2A/2B)
 * equalization of acquisition- and illumination geometry
@@ -25,7 +25,7 @@ Level-1 processing:
 * correction of geometric errors (using `AROSICS <https://gitext.gfz-potsdam.de/danschef/arosics>`_)
 Level-2 processing:
-^^^^^^^^^^^^^^^^^^^
+-------------------
 * spatial homogenization
 * spectral homogenization (using `SpecHomo <https://gitext.gfz-potsdam.de/geomultisens/spechomo>`_)
...
 .. _algorithm_description:

 Algorithm descriptions
 ======================

 * TODO
+##############################################
 Documentation of the gms_preprocessing package
-==============================================
+##############################################

 .. todo::
     This documentation is not yet complete but will be continuously updated in the future.
     If any topics are missing, feel free to suggest new entries here!
     Your contributions are always welcome!
-Contents:
 .. toctree::
-   :maxdepth: 3
+   :maxdepth: 4
+   :caption: Contents:
    about
    Source code repository <https://gitext.gfz-potsdam.de/geomultisens/gms_preprocessing>
    installation
    algorithm_descriptions
    usage
    modules
    contributing
    authors
    history
...
-============
+************
 Installation
-============
+************
 Using Anaconda or Miniconda (recommended)
------------------------------------------
+=========================================
 Using conda_ (latest version recommended), gms_preprocessing is installed as follows:
@@ -28,7 +28,7 @@ automatically resolves all the dependencies.
 Using pip (not recommended)
----------------------------
+===========================
 There is also a `pip`_ installer for gms_preprocessing. However, please note that gms_preprocessing depends on some
 open source packages that may cause problems when installed with pip. Therefore, we strongly recommend
...
 .. include:: ../README.rst
-==================
-Usage instructions
-==================
+*****
+Usage
+*****
 In this section you can find some advice on how to use gms_preprocessing
 with regard to the Python API and the command line interface.
 Python API
-**********
+==========

 .. toctree::
    :maxdepth: 4
@@ -19,14 +18,11 @@ Python API
 Command line interface
-**********************
-
-run_gms.py
-----------
+======================
 At the command line, gms_preprocessing provides the **run_gms.py** command:
-.. argparse::
-   :filename: ./../bin/run_gms.py
-   :func: get_gms_argparser
-   :prog: run_gms.py
+.. toctree::
+   :maxdepth: 4
+
+   usage/cli_run_gms.rst
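The argparse directive moved into usage/cli_run_gms.rst documents the parser returned by ``get_gms_argparser``. As a hedged sketch (only the filename and function name are given by the diff; the import path and working directory below are assumptions), the same parser can be inspected directly:

.. code-block:: python

    # Build and print the run_gms.py argument parser that sphinx-argparse
    # renders. Sketch only: assumes the repository root as working directory
    # and that importing run_gms has no side effects.
    import sys

    sys.path.insert(0, './bin')             # assumed location of run_gms.py
    from run_gms import get_gms_argparser   # function named in the directive above

    parser = get_gms_argparser()
    parser.print_help()                     # same usage text as the rendered docs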
 Add new data manually
-~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^
 You can also add datasets that you previously downloaded on your own (e.g., via EarthExplorer_ or the
 `Copernicus Open Access Hub`_) to the local GeoMultiSens data storage.
...
-_ref__add_new_data_to_the_database:
+.. _ref__add_new_data_to_the_database:

 Add new data to the database
-****************************
+----------------------------
 There are three ways to add new satellite data to the locally stored database: you can use the **WebUI**,
 you can run the **data downloader** from the command line, or you can **add the data manually**.
...
+run_gms.py
+----------
+
+.. argparse::
+   :filename: ./../bin/run_gms.py
+   :func: get_gms_argparser
+   :prog: run_gms.py
 .. _ref__create_new_jobs:

 Create new jobs
-***************
+---------------
 There are multiple ways to create new jobs, depending on what you have. The sections below give a brief overview.
-.. note:
+.. note::
     Only datasets that have previously been correctly added to the local GeoMultiSens data storage can be used to
     create a new GeoMultiSens preprocessing job (see :ref:`ref__add_new_data_to_the_database`).
 Create a job from a list of filenames
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 The list of filenames refers to the filenames of the previously downloaded provider archive data.
@@ -37,18 +37,22 @@ The list of filenames refers to the filenames of the previously downloaded provider archive data.
     OUT:
     New job created successfully. job-ID: 26193017
     The job contains:
     - 2 Landsat-7 ETM+_SLC_OFF scenes
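The code that produces this output is collapsed in the diff above. A hedged sketch of such a call (``GMS_JOB``, its constructor argument, and the ``from_filenames()``/``create()`` methods are assumptions about the GeoMultiSens database tools, which are not shown in this commit):

.. code-block:: python

    # Sketch only: all names here are assumptions; the real job creation
    # code is collapsed in this diff.
    from gms_preprocessing.misc.database_tools import GMS_JOB

    archives = ['LE07_L1TP_scene1_T1.tar.gz',  # hypothetical provider archives
                'LE07_L1TP_scene2_T1.tar.gz']

    db_connection = ...            # assumed: an open connection to the metadata database
    job = GMS_JOB(db_connection)
    job.from_filenames(archives)
    job.create()                   # expected to report: New job created successfully. job-ID: ...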
 Create a job from a list of entity IDs
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 - TODO
 Create a job from a list of scene IDs
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 - TODO
 Create a job from a dictionary
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 - TODO
 Execute jobs
-************
+------------
 Once a job is created (see :ref:`ref__create_new_jobs`), it can be executed as follows:
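The execution snippet itself is collapsed in this diff. A minimal sketch, assuming gms_preprocessing exposes a ``ProcessController`` class with a ``run_all_processors()`` method (these names are assumptions, not confirmed by this commit):

.. code-block:: python

    # Minimal sketch under the assumptions named above.
    from gms_preprocessing import ProcessController

    PC = ProcessController(job_ID=26193017)  # ID reported at job creation
    PC.run_all_processors()                  # run all configured processing levels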
...
 Using the data downloader
-~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^
 The GeoMultiSens data downloader downloads the requested data and makes sure that the new dataset is properly added to
 the local GeoMultiSens data storage directory as well as to the metadata database.
...
@@ -84,7 +84,7 @@ req_setup = ['setuptools-git']  # needed for package_data version controlled by
 req_test = ['coverage', 'nose', 'nose2', 'nose-htmloutput', 'rednose']
-req_doc = ['sphinx-argparse', 'sphinx_rtd_theme']
+req_doc = ['sphinx-autodoc-typehint', 'sphinx-argparse', 'sphinx_rtd_theme']
 req_lint = ['flake8', 'pycodestyle', 'pydocstyle']
...