# S<sup>2</sup>-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images
This repository contains the code of the paper [S<sup>2</sup>-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images](https://arxiv.org/abs/2007.02565) accepted at the IEEE International Geoscience and Remote Sensing Symposium, 2020. This work has been done at the [Remote Sensing Image Analysis group](https://www.rsim.tu-berlin.de/menue/remote_sensing_image_analysis_group/) by [Jose Luis Holgado](), [Mahdyar Ravanbakhsh](https://www.rsim.tu-berlin.de/menue/team/dr_sayyed_mahdyar_ravanbakhsh/) and [Begüm Demir](https://begumdemir.com/).
## Abstract
Deep Neural Networks have recently demonstrated promising performance in binary change detection (CD) problems in remote sensing (RS), but they require a large amount of labeled multitemporal training samples. Since collecting such data is time-consuming and costly, most of the existing methods rely on networks pre-trained on publicly available computer vision (CV) datasets. However, because of the differences in image characteristics between CV and RS, this approach limits the performance of the existing CD methods. To address this problem, we propose a self-supervised conditional Generative Adversarial Network (S<sup>2</sup>-cGAN). The proposed S<sup>2</sup>-cGAN is trained to generate only the distribution of unchanged samples. To this end, the proposed method consists of two main steps: 1) generating a reconstructed version of the input image as an unchanged image; 2) learning the distribution of unchanged samples through an adversarial game. Unlike the existing GAN based methods (which only use the discriminator during the adversarial training to supervise the generator), the S<sup>2</sup>-cGAN directly exploits the discriminator likelihood to solve the binary CD task. Experimental results show the effectiveness of the proposed S<sup>2</sup>-cGAN when compared to the state-of-the-art CD methods.
If you use our code, please cite the associated paper as:
> J. L. Holgado Alvarez, M. Ravanbakhsh, B. Demir, "S<sup>2</sup>-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images", IEEE International Geoscience and Remote Sensing Symposium, Hawaii, USA, 2020.
```
@article{holgado2020s2cgan,
author={J.L. {Holgado Alvarez} and M. {Ravanbakhsh} and B. {Demir}},
journal={IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium},
title={S^2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images},
year={2020}
}
```
## S<sup>2</sup>-cGAN Change Detection Strategy
The following illustration represents the change detection strategy introduced in S<sup>2</sup>-cGAN.

<img src="test.png" width="450" height="200">

For the details, please check our [paper](https://arxiv.org/abs/2007.02565).

Examples of score maps associated with a pair of patches, and their comparison to the ground truth:

<img src="eval_example.png" width="400" height="400">

## Prerequisites
* The code is tested with Python 3.7, PyTorch 0.4.1 and Ubuntu 18.04.
* Please check the requirements file for more information.
* For reproducibility, you can find our train, test, and validation sets in [our repository](https://tubcloud.tu-berlin.de/s/BQDWwwcFEkotHJ9).
* Password: Igarss_2020

## Hardware setup
The model was trained and tested on a machine with the following features:
* Nvidia Quadro P2000
* 16 GB RAM
* 12 CPU cores
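The change detection step illustrated above (reconstruct the input as an unchanged image, then read out the discriminator likelihood) can be sketched minimally as follows. This is an illustrative NumPy sketch, not the repository's actual implementation; the score map, the function name, and the 0.5 threshold are all assumptions:

```python
import numpy as np

def binarize_change_map(unchanged_likelihood, threshold=0.5):
    """Turn a per-pixel discriminator likelihood map into a binary change mask.

    unchanged_likelihood: probability that a (T1, reconstructed T2) pixel pair
    looks like an *unchanged* sample, so low values indicate change.
    The 0.5 threshold is a placeholder, not a value from the paper.
    """
    return (unchanged_likelihood < threshold).astype(np.uint8)

# Toy 4x4 likelihood map: one low-likelihood (changed) pixel in the corner.
scores = np.full((4, 4), 0.9)
scores[0, 0] = 0.1
mask = binarize_change_map(scores)  # only mask[0, 0] is flagged as change
```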
## Dataset Description
We used the Worldview 2 satellite Very High Spatial Resolution (VHR) multispectral images dataset provided [here](https://github.com/MinZHANG-WHU/FDCNN).
## Data creation
The script `data_creator.py` expects the following command line arguments:
* `--band3` directory location of the RGB channels
* `--band4` directory location of the I channel
* `--output_dir` output parent directory
* `--test_dir` output directory for the testset json file
* `--train_dir` output directory for the trainset json file
* `--eval_dir` output directory for the evaluation json file
* `--gt` name and extension for ground-truth patches
* `--A` name and extension for T1 patches
* `--B` name and extension for T2 patches
* `--kernel` patch size, by default 128
* `--offset` padding value, by default 0
* `--stride` stride value, by default 10
* execution example: `python data_creator.py`
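To illustrate how `--kernel`, `--offset`, and `--stride` interact, the patch grid can be sketched as a simple sliding window. This is an illustrative reimplementation, not the actual `data_creator.py` logic (boundary handling may differ):

```python
def patch_coords(height, width, kernel=128, offset=0, stride=10):
    """Enumerate (row, col) top-left corners of kernel x kernel patches.

    Sliding-window sketch using the documented defaults: patch size 128,
    padding offset 0, stride 10.
    """
    return [(row, col)
            for row in range(offset, height - kernel + 1, stride)
            for col in range(offset, width - kernel + 1, stride)]

# A 148 x 148 image with the defaults yields a 3 x 3 grid of 9 patches.
coords = patch_coords(148, 148)
```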
## Training
The script `train.py` expects the following command line arguments:
* `--dataset` specify the path to the trainset json file
* execution example: `python train.py --dataroot /patches --dataset /train_set/patch_coords.json --name S2_NET_001`
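Since S<sup>2</sup>-cGAN builds on pix2pix (see Acknowledgements), the generator objective presumably combines an adversarial term with a reconstruction term that pulls the output towards the unchanged input. The sketch below assumes a pix2pix-style BCE + L1 loss with a placeholder weight `lambda_l1`; neither is confirmed against this repository:

```python
import numpy as np

def generator_loss(d_fake, fake, real, lambda_l1=100.0):
    """pix2pix-style generator loss sketch.

    d_fake: discriminator outputs on generated (reconstructed) patches.
    The adversarial term pushes D(fake) towards 1; the L1 term pushes the
    reconstruction towards the unchanged input. lambda_l1=100 is the
    pix2pix default, assumed here rather than taken from the paper.
    """
    eps = 1e-8
    adversarial = -np.mean(np.log(d_fake + eps))
    reconstruction = np.mean(np.abs(fake - real))
    return adversarial + lambda_l1 * reconstruction

# A perfect generator (discriminator fooled, exact reconstruction)
# scores close to zero; any imperfection increases the loss.
loss = generator_loss(np.array([1.0]), np.zeros(4), np.zeros(4))
```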
## Test
The script `test.py` expects the following command line arguments:
* `--dataset` specify the path to the testset json file
* `--results_dir` specify the path to the output directory
* execution example: `python test.py --dataroot /patches --dataset /test_set/patch_coords.json --name S2_NET_001 --results_dir /results`
## Evaluation
The script `evaluation.py` expects the following command line arguments:
* `--dataset` specify the path to the evaluation set json file
* `--results_dir` specify the path to the output directory created by test.py
* `--date` select the date on which the `test.py` experiment was run
* execution example: `python evaluation.py`
## Base options
The general configuration found under `base_options` expects the following command line arguments:
* `--dataroot` path to patches folder
* `--name` network name
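For reference, the two base options could be wired up with a standard `argparse` parser along these lines. This is a sketch only; the repository's own `base_options` module defines many more flags:

```python
import argparse

def build_base_parser():
    """Minimal stand-in for the shared base_options configuration."""
    parser = argparse.ArgumentParser(description="S2-cGAN base options (sketch)")
    parser.add_argument("--dataroot", required=True,
                        help="path to the patches folder")
    parser.add_argument("--name", default="S2_NET_001",
                        help="network name used to tag checkpoints and results")
    return parser

# Mirrors the train.py execution example above.
opts = build_base_parser().parse_args(
    ["--dataroot", "/patches", "--name", "S2_NET_001"])
```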
## Authors
**Jose Luis Holgado Alvarez**
## Acknowledgements
This software was developed based on:

P. Isola, J.-Y. Zhu, T. Zhou, A. A. Efros, [Image-to-Image Translation with Conditional Adversarial Nets](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix), 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

## Maintained by
**Jose Luis Holgado Alvarez**

E-mail:
* jose.l.holgadoalvarez@campus.tu-berlin.de
* jlhalv92@gmail.com
## License
The code in this repository, which facilitates the use of `S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images`, is licensed under the **MIT License**:
```
MIT License

Copyright (c) 2020 The Authors of The Paper "S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images"
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```