Explicit Temporal Embedding in Deep Generative Latent Models for Longitudinal Medical Image Synthesis

Research output: Working paper › Preprint › Research

Documents

  • Fulltext: final published version, 6.5 MB, PDF document

Medical imaging plays a vital role in modern diagnostics and treatment. The temporal nature of disease and treatment progression often yields longitudinal data. Because acquisition is costly and can be harmful to patients, assembling the large medical datasets that deep learning requires is difficult. Medical image synthesis could help mitigate this problem. Until now, however, GANs capable of synthesizing longitudinal volumetric data have been scarce. To address this, we build on recent advances in latent space-based image editing and propose a novel joint learning scheme that explicitly embeds temporal dependencies in the latent space of a GAN. In contrast to previous methods, this allows us to synthesize continuous, smooth, and high-quality longitudinal volumetric data with limited supervision. We show the effectiveness of our approach on three datasets covering different longitudinal dependencies: a simple image transformation, breathing motion, and tumor regression, all while showing minimal disentanglement. The implementation is made available online at https://github.com/julschoen/Temp-GAN.
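The abstract's central idea, a temporal axis embedded directly in the generator's latent space, can be sketched in a few lines. The PyTorch snippet below is a minimal illustration and not the Temp-GAN implementation: the `TinyGenerator`, the latent dimension, and the single learnable direction vector are all assumptions made for this example; the actual architecture and objectives are in the linked repository.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in generator mapping a latent code to a flattened volume.
    (Hypothetical; the real model in the paper is a volumetric GAN.)"""
    def __init__(self, latent_dim=128, out_dim=32 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

latent_dim = 128
G = TinyGenerator(latent_dim)

# Learnable temporal direction: moving along it in latent space is
# meant to correspond to advancing time (e.g., a breathing phase).
direction = nn.Parameter(torch.randn(latent_dim) * 0.01)

z0 = torch.randn(4, latent_dim)   # latent codes at time t = 0
t = torch.rand(4, 1)              # sampled time offsets in [0, 1)
z_t = z0 + t * direction          # linear traversal along the temporal axis

x0, x_t = G(z0), G(z_t)           # synthesized volumes at both time points

# In a joint scheme like the one the abstract describes, G and the
# temporal embedding would be trained together, e.g. with an adversarial
# term plus a pairwise term tying G(z0 + t * direction) to an observed
# follow-up volume at offset t (a hypothetical loss form, not the paper's).
```

A linear traversal is the simplest choice for such a temporal embedding; it yields continuous, smooth interpolation between time points by construction, which matches the properties the abstract claims for the synthesized sequences.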
Original language: English
Publisher: arXiv.org
Publication status: Published - 13 Jan 2023

    Research areas

  • cs.CV, cs.LG


ID: 333626101