Fig. 3 | BMC Medical Imaging

From: MedFusionGAN: multimodal medical image fusion using an unsupervised deep generative adversarial network

The MedFusionGAN framework \(\mathcal{G}_\theta: \{\mathcal{X}, \mathcal{Y}\} \rightarrow \mathcal{F}\). The generator \(\mathcal{G}_\theta\) learns, in an unsupervised setting, to map the data distributions of the CT (\(\mathcal{X}\)) and MRI (\(\mathcal{Y}\)) images to the fused image (\(\mathcal{F}\)), while the discriminator \(\mathcal{D}_\vartheta\) quantifies the distance between the data distributions of the source images and the fused image (a). The fused image has a probability distribution sampled from the source images (b)
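A minimal sketch of the mapping the caption describes, assuming a PyTorch implementation: a generator \(\mathcal{G}_\theta\) that takes a CT slice and an MRI slice and outputs one fused image, and a discriminator \(\mathcal{D}_\vartheta\) that scores an image against the source-image distribution. The layer choices, sizes, and names below are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch only; layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """G_theta: {X, Y} -> F. Stacks the CT and MRI slices on the channel
    axis and outputs a single fused image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1), nn.Tanh(),
        )

    def forward(self, ct, mri):
        return self.net(torch.cat([ct, mri], dim=1))

class Discriminator(nn.Module):
    """D_vartheta: produces a scalar score used to measure how close an
    image's distribution is to that of the source images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.LazyLinear(1),
        )

    def forward(self, img):
        return self.net(img)

# Usage: fuse one 256x256 CT/MRI pair (random tensors stand in for data).
ct, mri = torch.randn(1, 1, 256, 256), torch.randn(1, 1, 256, 256)
fused = Generator()(ct, mri)    # F = G_theta(X, Y)
score = Discriminator()(fused)  # D_vartheta's score for the fused image
```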