
S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images

This repository contains the code of the paper "S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images", accepted at IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium. This work was done at the Remote Sensing Image Analysis group by Jose Luis Holgado Alvarez, Mahdyar Ravanbakhsh and Begüm Demir.

Abstract

Deep Neural Networks have recently demonstrated promising performance in binary change detection (CD) problems in remote sensing (RS), but require a large amount of labeled multitemporal training samples. Since collecting such data is time-consuming and costly, most of the existing methods rely on networks pre-trained on publicly available computer vision (CV) datasets. However, because of the differences in image characteristics between CV and RS, this approach limits the performance of the existing CD methods. To address this problem, we propose a self-supervised conditional Generative Adversarial Network (S2-cGAN). The proposed S2-cGAN is trained to generate only the distribution of unchanged samples. To this end, the proposed method consists of two main steps: 1) generating a reconstructed version of the input image as an unchanged image; 2) learning the distribution of unchanged samples through an adversarial game. Unlike the existing GAN based methods (which only use the discriminator during the adversarial training to supervise the generator), the S2-cGAN directly exploits the discriminator likelihood to solve the binary CD task. Experimental results show the effectiveness of the proposed S2-cGAN when compared to state-of-the-art CD methods.
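As a rough illustration of the detection idea (not the authors' implementation), the sketch below shows how a discriminator likelihood map could be turned into a binary change map: a model trained only on unchanged samples assigns low likelihood to regions it cannot explain, which flags change. The likelihood values and the threshold here are hypothetical stand-ins.

```python
import numpy as np

def change_map_from_likelihood(likelihood, threshold=0.5):
    """Binary change map: pixels whose discriminator likelihood of
    being 'unchanged' falls below the threshold are marked changed."""
    return (likelihood < threshold).astype(np.uint8)

# Stand-in likelihood map: the model is confident (0.9) everywhere
# except a small region it cannot reconstruct well (0.1).
likelihood = np.full((8, 8), 0.9)
likelihood[2:4, 2:4] = 0.1

change = change_map_from_likelihood(likelihood)
# change.sum() == 4: only the low-likelihood region is marked changed
```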

If you use this code, please cite our paper given below:

J.L. Holgado Alvarez, M. Ravanbakhsh and B. Demir, "S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images", IGARSS, 2020.

@inproceedings{IG203936,
  author={J.L. {Holgado Alvarez} and M. {Ravanbakhsh} and B. {Demir}},
  booktitle={IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium},
  title={S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images},
  year={2020}
}

Detection strategy introduced in S2-cGAN.

The following illustration shows the change detection strategy introduced in S2-cGAN:

Change Detection output

In this section you can see an example of the score maps associated with different pairs of patches, and their comparison to the ground truth.
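A score map can be compared to a binary ground-truth mask by thresholding it into a prediction and computing simple agreement statistics. The sketch below is a generic illustration, not the repository's evaluation code; the function name and the threshold are assumptions.

```python
import numpy as np

def compare_to_ground_truth(score_map, gt, threshold=0.5):
    """Threshold a score map into a binary change prediction and
    report precision/recall against a binary ground-truth mask."""
    pred = score_map >= threshold
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy example: the score map and the ground truth agree exactly.
score = np.zeros((4, 4))
score[1:3, 1:3] = 0.9
gt = np.zeros((4, 4))
gt[1:3, 1:3] = 1
p, r = compare_to_ground_truth(score, gt)
# p == 1.0 and r == 1.0 for this perfect-agreement example
```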

Prerequisites

  • The code is tested with Python 3.7, PyTorch 0.4.1 and Ubuntu 18.04.
  • Please check the requirements file for more information
  • For reproducibility, you can find our datasets on the following repository: Our repository
  • Password: Igarss_2020

Hardware setup

  • The software was trained and tested on a machine with the following specifications:
  • Nvidia Quadro P2000
  • 16 GB RAM
  • 12 CPU cores

Dataset Description

The dataset includes 2 pilot sites. Each site consists of a ground truth map (labeled as changed and unchanged at pixel level) and two-period WorldView-2 satellite images (WorldView-3 and WV3 were incorrectly written in our paper), located in Shenzhen, China, with a size of 1431×1431 pixels and a spatial resolution of 2 meters, acquired in 2010 and 2015, respectively. Original dataset: WorldView-2

Dataset repository

Our repository (password: Igarss_2020)

Data creation

The script data_creator.py expects the following command line arguments:

  • --band3 directory location of the RGB channels
  • --band4 directory location of the I channel
  • --output_dir output parent directory
  • --test_dir output directory for json testset
  • --train_dir output directory for json trainset
  • --eval_dir output directory for json evaluation set
  • --gt name and extension for ground-truth patches
  • --A name and extension for T1 patches
  • --B name and extension for T2 patches
  • --city identification code template for patches, by default wv2{}_{}; the first gap corresponds to the tile id and the second gap corresponds to the patch id within the tile
  • --custom_filter_active boolean flag that triggers a filter over the trainset; the filter removes patches from the trainset whose change/no-change ratio exceeds a certain threshold
  • --filter_value filter threshold value in (0, 1)
  • --kernel patch size, by default 128
  • --offset padding value, by default 0
  • --stride stride value, by default 10
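The sliding-window parameters above (--kernel, --stride, --offset) and the change-ratio filter (--custom_filter_active, --filter_value) can be sketched as follows. This is a simplified illustration of the idea, not the actual data_creator.py logic; the helper names are hypothetical.

```python
import numpy as np

def extract_patches(image, kernel=128, stride=10, offset=0):
    """Slide a kernel x kernel window over the image with the given
    stride, starting at `offset`, and collect the patches."""
    patches = []
    h, w = image.shape[:2]
    for y in range(offset, h - kernel + 1, stride):
        for x in range(offset, w - kernel + 1, stride):
            patches.append(image[y:y + kernel, x:x + kernel])
    return patches

def filter_by_change_ratio(patches, gt_patches, filter_value=0.2):
    """Drop trainset patches whose changed-pixel ratio exceeds the
    threshold, mimicking --custom_filter_active / --filter_value."""
    kept = []
    for patch, gt in zip(patches, gt_patches):
        if gt.mean() <= filter_value:  # gt is a binary change mask
            kept.append(patch)
    return kept

# With the documented defaults, a 1431x1431 tile yields a dense grid
# of overlapping 128x128 patches (window positions 0, 10, ..., 1300).
tile = np.zeros((1431, 1431))
patches = extract_patches(tile)
```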

Training

The script train.py expects the following command line arguments:

  • --dataset specify the path to the trainset json file

Test

The script test.py expects the following command line arguments:

  • --dataset specify the path to the testset json file
  • --results_dir specify the path to the output directory

Evaluation

The script evaluation.py expects the following command line arguments:

  • --dataset specify the path to the evaluation-set json file
  • --results_dir specify the path to the output directory created by test.py
  • --date select the date on which the test.py experiment was performed
  • --experiment_n specify the experiment id [int]

Base options

The general configuration found under base_options expects the following command line arguments:

  • --dataroot path to patches folder
  • --name network name

Authors

Jose Luis Holgado Alvarez

Acknowledgements

This software was developed based on the following work:

  • Title: Image-to-Image Translation with Conditional Adversarial Nets
  • Conference: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • Repository: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
  • Authors: Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros

Maintained by

Jose Luis Holgado Alvarez

License

The code in this repository, provided to facilitate the use of "S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images", is licensed under the MIT License:

MIT License

Copyright (c) 2020 The Authors of the Paper "S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images"

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.