Research Article

Reconstructing higher-resolution four-dimensional time-varying volumetric data

Article: 2289837 | Received 08 Aug 2023, Accepted 27 Nov 2023, Published online: 08 Dec 2023
 

Abstract

We have witnessed substantial growth in super-resolution research within the computer vision community. Unlike previous works that mainly focus on the super-resolution synthesis of images, videos, or single volumes, our research is dedicated to the super-resolution synthesis of time-varying volumetric data, which are generated from scientific simulations and are crucial for domain scientists to understand and analyse complex scientific phenomena. Compared to previous works, our research presents a greater challenge: time-varying volumetric data have higher dimensions, making it more difficult to synthesise super-resolution results that maintain good spatio-temporal consistency while achieving high visual quality. To tackle this challenge, we introduce a new GAN-based network called SSR-DoubleUNetGAN, which includes a novel network architecture and loss functions, allowing for accurate synthesis of spatial super-resolution for time-varying volumetric data with a relatively short training time. Our method can be applied in the context of in-situ visualisation to help domain scientists analyse greater amounts of time-varying volumetric data more efficiently. In addition, it can be used in a compression-decompression pipeline to recover super-resolution time-varying volumetric data from their low-resolution counterparts. To demonstrate its effectiveness, we applied it to various time-varying volumetric datasets from different scientific simulations. To demonstrate its advantages, we compared it qualitatively and quantitatively with five state-of-the-art super-resolution techniques, namely SSR-TVD, Tricubic, SRResNet, Cubic, and Linear. Furthermore, we conducted an ablation study to validate its key modules. The experimental results show that our method outperforms the compared state-of-the-art techniques.
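The abstract does not detail the SSR-DoubleUNetGAN architecture, but the general task it describes, upsampling low-resolution frames of a time-varying scalar field with a learned 3D-convolutional generator rather than plain interpolation, can be sketched as follows. This is a minimal illustration assuming PyTorch; the generator below is a hypothetical stand-in, not the authors' network, and it omits the GAN discriminator and the paper's loss functions.

```python
# Minimal sketch of volumetric super-resolution, assuming PyTorch.
# VolumeSRGenerator is a hypothetical toy model, NOT the paper's
# SSR-DoubleUNetGAN; it only illustrates 4x spatial upsampling of one
# volume frame (N, C, D, H, W), alongside the trilinear interpolation
# baseline of the kind the paper compares against (Linear/Cubic/Tricubic).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VolumeSRGenerator(nn.Module):
    """Toy 3D-convolutional generator for 4x spatial super-resolution."""
    def __init__(self, channels=1, features=32):
        super().__init__()
        self.head = nn.Conv3d(channels, features, kernel_size=3, padding=1)
        self.body = nn.Sequential(
            nn.Conv3d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(features, features, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.tail = nn.Conv3d(features, channels, kernel_size=3, padding=1)

    def forward(self, x):
        # Upsample the low-resolution volume, then refine it with 3D convolutions.
        x = F.interpolate(x, scale_factor=4, mode="trilinear", align_corners=False)
        h = F.relu(self.head(x))
        return self.tail(self.body(h) + h)  # residual refinement

# One low-resolution frame of a time-varying scalar field: 16^3 voxels.
lr_frame = torch.rand(1, 1, 16, 16, 16)

# Pure interpolation baseline for comparison.
baseline = F.interpolate(lr_frame, scale_factor=4, mode="trilinear",
                         align_corners=False)

sr_frame = VolumeSRGenerator()(lr_frame)
print(baseline.shape, sr_frame.shape)  # both torch.Size([1, 1, 64, 64, 64])
```

In the paper's setting the generator would additionally be trained adversarially and constrained across neighbouring time steps to preserve spatio-temporal consistency, which this per-frame sketch does not attempt.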

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data that support the findings of this study are openly available in the IEEE SciVis Contest repository at https://sciviscontest.ieeevis.org/.

Additional information

Funding

This work was funded by the Natural Science Foundation of Zhejiang Province of China [grant number LTGY23F020007], and by the Humanities and Social Sciences Foundation of Ministry of Education of China [grant number 23YJC760011].