
S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images

This repository contains the code of the paper S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images, accepted at the IEEE International Geoscience and Remote Sensing Symposium, 2020. This work has been done at the Remote Sensing Image Analysis group by Jose Luis Holgado Alvarez, Mahdyar Ravanbakhsh and Begüm Demir.

Abstract

Deep Neural Networks have recently demonstrated promising performance in binary change detection (CD) problems in remote sensing (RS), requiring a large amount of labeled multitemporal training samples. Since collecting such data is time-consuming and costly, most of the existing methods rely on networks pre-trained on publicly available computer vision (CV) datasets. However, because of the differences in image characteristics between CV and RS, this approach limits the performance of the existing CD methods. To address this problem, we propose a self-supervised conditional Generative Adversarial Network (S2-cGAN). The proposed S2-cGAN is trained to generate only the distribution of unchanged samples. To this end, the proposed method consists of two main steps: 1) generating a reconstructed version of the input image as an unchanged image, and 2) learning the distribution of unchanged samples through an adversarial game. Unlike the existing GAN based methods (which only use the discriminator during the adversarial training to supervise the generator), the S2-cGAN directly exploits the discriminator likelihood to solve the binary CD task. Experimental results show the effectiveness of the proposed S2-cGAN when compared to the state-of-the-art CD methods.
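The scoring idea in step 2 can be illustrated with a minimal sketch. Note that `discriminator_likelihood` below is a placeholder similarity function standing in for the trained discriminator, and the patch shapes and threshold are hypothetical, not taken from the paper:

```python
import numpy as np

def discriminator_likelihood(t1, t2):
    # Placeholder for the trained discriminator: a real network would output a
    # learned likelihood that the (t1, t2) pair belongs to the unchanged
    # distribution; here a simple per-pixel similarity stands in for it.
    return 1.0 / (1.0 + np.abs(t1 - t2).mean(axis=0))

t1 = np.random.rand(4, 128, 128)  # 4-band multispectral patch at time T1
t2 = np.random.rand(4, 128, 128)  # same location at time T2
score_map = discriminator_likelihood(t1, t2)  # high score = likely unchanged
change_mask = score_map < 0.6  # thresholding the likelihood yields a binary CD map
```

The key point is that the discriminator output itself is used at inference time: low "unchanged" likelihood marks a changed pixel.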

Citation of our paper can be written as:

J. L. Holgado Alvarez, M. Ravanbakhsh, B. Demir, "S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images", IEEE International Geoscience and Remote Sensing Symposium, Hawaii, USA, 2020.

@inproceedings{IG203936,
  author={J. L. {Holgado Alvarez} and M. {Ravanbakhsh} and B. {Demir}},
  booktitle={IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium},
  title={S^2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images},
  year={2020},
  doi={""}
}

S2-cGAN Change Detection Strategy

The following illustration represents the change detection strategy introduced in S2-cGAN.

For details, please check the original paper.

Examples of score maps associated with a pair of patches, and their comparison to the ground truth.

Prerequisites

  • The code is tested with Python 3.7, PyTorch 0.4.1 and Ubuntu 18.04.
  • Please check out the requirements file for more information.
  • For reproducibility, you can find our train, test, and validation sets in the following repository: Our repository
  • Password: Igarss_2020

Hardware setup

  • The software was trained and tested on a machine with the following specifications:
  • Nvidia Quadro P2000
  • 16 GB RAM
  • 12 CPU cores

Dataset Description

We used a dataset of Very High Spatial Resolution (VHR) multispectral images acquired by the WorldView-2 satellite.

Original dataset, Worldview 2

Our repository Password: Igarss_2020

Data creation

The script data_creator.py expects the following command line arguments:

  • --band3 directory location of the rgb channels
  • --band4 directory location of the I channel
  • --output_dir output parent directory
  • --test_dir output directory for json testset
  • --train_dir output directory for json trainset
  • --eval_dir output directory for json evaluation set
  • --gt name and extension for ground-truth patches
  • --A name and extension for T1 patches
  • --B name and extension for T2 patches
  • --city identification code template for patches, by default wv2{}_{}; the first placeholder corresponds to the tile id and the second to the patch id within the tile
  • --custom_filter_active boolean flag that triggers a filter operation over the trainset; the filter removes patches from the trainset whose change/no_change ratio exceeds a certain threshold
  • --filter_value filter threshold value in (0, 1)
  • --kernel patch size, by default 128
  • --offset padding value, by default 0
  • --stride stride value, by default 10
  • execution example: python data_creator.py
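Taken together, --kernel/--stride/--offset describe a sliding-window patch extraction, and --custom_filter_active/--filter_value drop training patches that contain too much change. A minimal sketch of that logic, with toy arrays and a hypothetical helper rather than the actual script:

```python
import numpy as np

def extract_patches(image, gt, kernel=128, stride=10, offset=0,
                    filter_active=True, filter_value=0.2):
    """Slide a kernel x kernel window over the tile; optionally drop patches
    whose changed-pixel ratio in the ground truth exceeds filter_value."""
    h, w = image.shape[:2]
    coords = []
    for y in range(offset, h - kernel + 1, stride):
        for x in range(offset, w - kernel + 1, stride):
            gt_patch = gt[y:y + kernel, x:x + kernel]
            change_ratio = (gt_patch > 0).mean()
            if filter_active and change_ratio > filter_value:
                continue  # too much change for the "unchanged" training set
            coords.append((y, x))  # patch coordinates, as stored in the JSON sets
    return coords

tile = np.zeros((256, 256, 4))  # toy 4-band tile
gt = np.zeros((256, 256))
gt[:64] = 1                     # top strip marked as changed
coords = extract_patches(tile, gt, stride=64)
```

With these toy inputs, the three windows overlapping the changed strip exceed the 0.2 threshold and are filtered out, leaving only the mostly-unchanged patches for training.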

Training

The script train.py expects the following command line arguments:

  • --dataset specify the path to the trainset json file
  • execution example: python train.py --dataroot /patches --dataset /train_set/patch_coords.json --name S2_NET_001

Test

The script test.py expects the following command line arguments:

  • --dataset specify the path to the testset json file
  • --results_dir specify the path to the output directory
  • execution example: python test.py --dataroot /patches --dataset /test_set/patch_coords.json --name S2_NET_001 --results_dir /results

Evaluation

The script evaluation.py expects the following command line arguments:

  • --dataset specify the path to the evaluation set json file
  • --results_dir specify the path to the output directory created by test.py
  • --date select the date when the experiment test.py was performed
  • execution example: python evaluation.py
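Evaluating binary CD score maps against the ground truth typically reduces to pixel-wise precision, recall and F1. A minimal sketch of such a comparison, not the exact metrics computed by evaluation.py:

```python
import numpy as np

def binary_cd_metrics(pred, gt):
    """Pixel-wise precision, recall and F1 for binary change maps."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # changed pixels correctly detected
    fp = np.logical_and(pred, ~gt).sum()  # false alarms
    fn = np.logical_and(~pred, gt).sum()  # missed changes
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

pred = np.array([[1, 1], [0, 0]])  # predicted change map
gt = np.array([[1, 0], [0, 0]])    # ground-truth change map
p, r, f = binary_cd_metrics(pred, gt)  # p=0.5, r=1.0, f≈0.667
```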

Base options

The general configuration, found under base_options, expects the following command line arguments:

  • --dataroot path to patches folder
  • --name network name

Authors

Jose Luis Holgado Alvarez

Acknowledgements

This software was developed based on the following work:

  • Title: Image-to-Image Translation with Conditional Adversarial Nets
  • Published in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • Repository: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
  • Authors: Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros

Maintained by

Jose Luis Holgado Alvarez

License

The code in this repository, provided to facilitate the use of S<sup>2</sup>-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images, is licensed under the MIT License:

MIT License

Copyright (c) 2020 The Authors of The Paper, "S<sup>2</sup>-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images"

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.