This repository contains the code of the paper `S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images` accepted at IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium. This work has been done at the [Remote Sensing Image Analysis group](https://www.rsim.tu-berlin.de/menue/remote_sensing_image_analysis_group/) by [Jose Luis Holgado](), [Mahdyar Ravanbakhsh](https://www.rsim.tu-berlin.de/menue/team/dr_sayyed_mahdyar_ravanbakhsh/) and [Begüm Demir](https://begumdemir.com/).
## Abstract:
Deep Neural Networks have recently demonstrated promising performance in binary change detection (CD) problems in remote sensing (RS), but they require a large amount of labeled multitemporal training samples. Since collecting such data is time-consuming and costly, most existing methods rely on networks pre-trained on publicly available computer vision (CV) datasets. However, because of the differences in image characteristics between CV and RS, this approach limits the performance of existing CD methods. To address this problem, we propose a self-supervised conditional Generative Adversarial Network (S2-cGAN). The proposed S2-cGAN is trained to generate only the distribution of unchanged samples. To this end, the proposed method consists of two main steps: 1) generating a reconstructed version of the input image as an unchanged image; and 2) learning the distribution of unchanged samples through an adversarial game. Unlike existing GAN-based methods (which only use the discriminator during adversarial training to supervise the generator), S2-cGAN directly exploits the discriminator likelihood to solve the binary CD task. Experimental results show the effectiveness of the proposed S2-cGAN compared to state-of-the-art CD methods. Our code is available online: [https://gitlab.tubit.tu-berlin.de/rsim/S2-cGAN](https://gitlab.tubit.tu-berlin.de/rsim/S2-cGAN)
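As a rough illustration of how the discriminator likelihood could be used directly for the binary CD decision at inference time, here is a minimal, hypothetical PyTorch sketch. The function name, the patch-pair input format, and the thresholding are assumptions for illustration only and do not reflect the repository's actual API:

```python
# Hypothetical sketch (not the repository's API): turning the discriminator
# likelihood of a trained S2-cGAN into a binary change map.
import torch

@torch.no_grad()
def detect_changes(discriminator, x_t1: torch.Tensor, x_t2: torch.Tensor,
                   threshold: float = 0.5) -> torch.Tensor:
    """Binary change decision per patch pair.

    x_t1, x_t2: co-registered multispectral patches of shape (N, C, H, W).
    The discriminator is assumed to output one logit per patch pair and,
    having been trained only on the distribution of unchanged samples,
    to assign a high "unchanged" likelihood only to unchanged pairs.
    """
    logits = discriminator(torch.cat([x_t1, x_t2], dim=1))
    unchanged_likelihood = torch.sigmoid(logits)
    # A low likelihood of being "unchanged" is interpreted as change.
    return (unchanged_likelihood < threshold).long()
```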
If you use this code, please cite our paper given below: