
A hierarchical spatiotemporal adaptive fusion model using one image pair

Pages 639-655 | Received 19 May 2016, Accepted 08 Sep 2016, Published online: 01 Nov 2016

ABSTRACT

Image fusion techniques that blend multi-sensor characteristics to generate synthetic data at fine resolutions have attracted great interest within the remote sensing community. Although many advances have been made in spatiotemporal fusion models over the past decade, several shortcomings remain in existing methods. In this article, a hierarchical spatiotemporal adaptive fusion model (HSTAFM) is proposed to produce daily synthetic fine-resolution data. The proposed model uses only one prior or posterior image pair, with the particular aim of predicting arbitrary temporal changes. The model is implemented in two stages. First, the coarse-resolution image is enhanced through super-resolution based on sparse representation; second, a pre-selection of temporal change is performed. The model then adopts a two-level strategy to select similar pixels and adaptively blends multi-sensor features to generate the final synthetic data. Results of tests using both simulated and actual observed data show that the model can accurately capture both seasonal phenological change and land-cover-type change. Comparisons between HSTAFM and other developed models also demonstrate that the proposed model produces consistently lower biases.
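To make the two-stage workflow described in the abstract more concrete, the following Python sketch outlines a single-band, one-pair fusion pipeline in the spirit of HSTAFM. It is a minimal illustration, not the authors' implementation: the sparse-representation super-resolution stage is replaced here by simple bilinear upsampling, the two-level similar-pixel selection is reduced to a spectral-distance test within a window, and all function names, window sizes, and thresholds are assumptions made purely for illustration.

# Illustrative sketch of a one-pair spatiotemporal fusion workflow (not the
# authors' code). Stage 1 (sparse-representation super-resolution) is stood in
# for by bilinear upsampling; thresholds and window sizes are assumed values.
import numpy as np
from scipy.ndimage import zoom


def upsample_coarse(coarse, scale):
    """Stand-in for the sparse-representation super-resolution stage (Stage 1)."""
    return zoom(coarse, scale, order=1)


def preselect_change(coarse_t1_up, coarse_t2_up, threshold=0.1):
    """Stage 2 pre-selection: flag pixels whose coarse reflectance changed strongly."""
    return np.abs(coarse_t2_up - coarse_t1_up) > threshold


def similar_pixels(fine_t1, row, col, win=7, tol=0.05):
    """Simplified similar-pixel selection: spectrally close neighbours in a window."""
    half = win // 2
    r0, r1 = max(0, row - half), min(fine_t1.shape[0], row + half + 1)
    c0, c1 = max(0, col - half), min(fine_t1.shape[1], col + half + 1)
    window = fine_t1[r0:r1, c0:c1]
    mask = np.abs(window - fine_t1[row, col]) < tol
    rows, cols = np.nonzero(mask)
    return rows + r0, cols + c0


def fuse_pixel(fine_t1, delta, row, col, rows, cols):
    """Blend the coarse-resolution temporal change into the fine image using
    inverse-distance weights over the selected similar pixels."""
    dist = np.hypot(rows - row, cols - col) + 1.0
    w = (1.0 / dist) / np.sum(1.0 / dist)
    return fine_t1[row, col] + np.sum(w * delta[rows, cols])


def one_pair_fusion(fine_t1, coarse_t1, coarse_t2, scale=16):
    """Predict a fine-resolution image at t2 from one fine/coarse pair at t1
    plus the coarse observation at t2 (single band, single pair)."""
    c1_up = upsample_coarse(coarse_t1, scale)           # enhance coarse images
    c2_up = upsample_coarse(coarse_t2, scale)
    changed = preselect_change(c1_up, c2_up)            # temporal-change pre-selection
    delta = c2_up - c1_up                               # coarse-resolution temporal change
    pred = fine_t1.copy()
    for row, col in zip(*np.nonzero(changed)):
        rows, cols = similar_pixels(fine_t1, row, col)  # similar-pixel selection (simplified)
        pred[row, col] = fuse_pixel(fine_t1, delta, row, col, rows, cols)
    return pred


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fine_t1 = rng.random((160, 160))
    coarse_t1 = fine_t1.reshape(10, 16, 10, 16).mean(axis=(1, 3))
    coarse_t2 = coarse_t1 + 0.2                         # simulated uniform phenology change
    fine_t2_pred = one_pair_fusion(fine_t1, coarse_t1, coarse_t2)
    print(fine_t2_pred.shape)

In the full model, the similarity test and the weighting would follow the paper's hierarchical two-level strategy and adaptive blending rather than the fixed threshold and inverse-distance weights used above.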

Acknowledgements

We would like to thank the three anonymous reviewers and external editor for providing valuable suggestions and comments, which have greatly improved this manuscript.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This study was supported by the Ministry of Science and Technology, China, National Research Program [grant number 2012CB955501], [grant number 2013AA122003], [grant number 2012AA12A407]; the National Natural Science Foundation of China [grant number 41271099].
