
Learning Across Decentralized Multi-Modal Remote Sensing Archives with Federated Learning

This repository contains the code of the paper abstract "Learning Across Decentralized Multi-Modal Remote Sensing Archives with Federated Learning" submitted to the IEEE International Geoscience and Remote Sensing Symposium (IGARSS) 2023. This work has been done at the Remote Sensing Image Analysis group by Baris Buyuktas, Gencer Sümbül, and Begüm Demir.

This repository contains (in parts) code that has been adapted from:

Introduction

Remote sensing (RS) images are usually stored in compressed format to reduce the storage size of the archives. Thus, existing content-based image retrieval (CBIR) systems in RS require decoding images before applying CBIR, which is computationally demanding for large-scale CBIR problems. To address this problem, in this paper we present a joint framework that simultaneously learns RS image compression and indexing, eliminating the need to decode RS images before applying CBIR. The proposed framework is made up of two modules. The first module effectively compresses RS images and is based on an auto-encoder architecture. The second module produces hash codes with a high discrimination capability and is based on a deep hashing method that exploits soft pairwise, bit-balancing, and classification loss functions. We also propose a two-stage learning strategy with gradient manipulation techniques to obtain image representations that are compatible with both RS image indexing and compression.
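To make the hashing objective concrete, below is a minimal, illustrative sketch of two of the sub-losses named above (soft pairwise and bit-balancing) on plain Python lists of real-valued codes. The function names and exact formulations are assumptions for illustration; the actual implementation lives in this repository and operates on PyTorch tensors.

```python
import math

def bit_balance_loss(codes):
    """Bit-balancing loss: pushes each hash bit to be +1 and -1 equally
    often across the batch, i.e. every bit column has mean near zero."""
    n_bits = len(codes[0])
    means = [sum(c[k] for c in codes) / len(codes) for k in range(n_bits)]
    return sum(m * m for m in means) / n_bits

def soft_pairwise_loss(h_i, h_j, similar):
    """Soft pairwise loss: the scaled inner product of two real-valued
    codes is treated as a logit for 'this pair is similar' and scored
    with binary cross-entropy against the pairwise label."""
    logit = 0.5 * sum(a * b for a, b in zip(h_i, h_j))
    p = 1.0 / (1.0 + math.exp(-logit))
    target = 1.0 if similar else 0.0
    eps = 1e-12
    return -(target * math.log(p + eps) + (1 - target) * math.log(1 - p + eps))
```

A perfectly balanced batch (each bit +1 as often as -1) yields zero bit-balancing loss, while a similar pair with anti-aligned codes is penalized more than one with aligned codes.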

Prerequisites

The code in this repository requires Python 3.7.6, PyTorch 1.7.0, and a range coder.

The code has been tested on Ubuntu 20.04.

An exemplary setup which contains everything needed:

(1) wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
(2) bash Miniconda3-latest-Linux-x86_64.sh (say yes to append the path to .bashrc)
(3) source .bashrc
(4) sudo add-apt-repository ppa:ubuntugis/ppa && sudo apt-get update && sudo apt-get install g++
(5) sudo apt-get install libgdal-dev gdal-bin
(6) ogrinfo --version
(7) conda activate base
(8) conda install matplotlib scipy scikit-learn scikit-image tqdm pillow pytorch
(9) pip install wandb glymur pybind11 xlrd faiss-gpu
(10) pip install --global-option=build_ext --global-option="-I/usr/include/gdal/" GDAL==<GDAL VERSION FROM OGRINFO>
(11) python ./hashing-and-compression/compression/cpp_exts/setup.py build
(12) python ./hashing-and-compression/compression/cpp_exts/setup.py install
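Step (10) needs the bare version number from step (6). As a small aid, the hypothetical helper below (not part of the repository) extracts it from the first line that ogrinfo --version typically prints:

```python
def gdal_version(ogrinfo_output: str) -> str:
    """Extract the bare version number (e.g. '3.4.1') from the first
    line printed by `ogrinfo --version`."""
    # Typical first line: "GDAL 3.4.1, released 2021/12/27"
    first = ogrinfo_output.splitlines()[0]
    return first.split()[1].rstrip(",")
```

The returned string is what goes into pip install GDAL==<GDAL VERSION FROM OGRINFO>.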

Datasets

Downloaded data should be placed in a folder named Dataset, keeping the original structure as follows:

Dataset
└───BigEarthNet
|   └───S2A_MSIL2A_20170613T101031_0_48
|   │   S2A_MSIL2A_20170613T101031_0_48_B0
|   │   ...
|   ...
|   └───...

Note: The train/val/test splits of the dataset and its subsets are placed in ./hashing-and-compression/datasets.

To load the data from memory-map files for fast processing, set the flag --use_npmem to create and load the binary files.
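The idea behind the memory-map files is that large binary archives can be read page-by-page on demand instead of being loaded fully into RAM. The repository most likely does this with numpy memory maps; the sketch below shows the same idea with only the standard library (file and variable names are illustrative):

```python
import array
import mmap
import os
import tempfile

# Write a flat float32 array to disk once (the "binary file" creation step).
path = os.path.join(tempfile.mkdtemp(), "bands.bin")
pixels = array.array("f", [0.1, 0.2, 0.3, 0.4])
with open(path, "wb") as f:
    pixels.tofile(f)

# Then map it read-only: the OS pages data in on demand, so training can
# read samples without holding the whole archive in memory.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    view = array.array("f")
    view.frombytes(mm[:])
    mm.close()
```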

To use the subset data, set the flag --flag_subset.

Logging results with W&B

  • Create an account here (free): https://wandb.ai
  • After the account is set, make sure to include your API key in parameters.py under --wandb_key.
  • Set the flag --log_online to use wandb logging. If the network is unavailable in the training environment, set the flag --wandb_dryrun to make wandb store the data locally; the data can be uploaded later with the command wandb sync <$path/wandb/offline..>
  • Set --project, --group and --savename during the training.

Training

The training of the joint model is divided into two stages: the first stage is run with train_compression.py and the second with train.py.

The training of the hashing baseline is done by train_hashing.py.

All the parameters are listed and explained in parameters.py.

A set of exemplary runs is provided in ./hashing-and-compression/sample_run.

If the training is stopped accidentally, it can be resumed by setting --load_from_checkpoint and loading the checkpoint from $save_path/every_epoch.pth.tar.
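The resume mechanism boils down to persisting the full training state every epoch and restarting from the epoch after the last saved one. A minimal sketch of that pattern, using pickle instead of the torch.save/.pth.tar format the repository actually uses:

```python
import os
import pickle
import tempfile

def save_checkpoint(state, save_path):
    """Persist the full training state every epoch (the repository's
    every_epoch.pth.tar plays this role, written with torch.save)."""
    with open(os.path.join(save_path, "every_epoch.ckpt"), "wb") as f:
        pickle.dump(state, f)

def resume(save_path):
    """Reload the last saved state and continue from the next epoch."""
    with open(os.path.join(save_path, "every_epoch.ckpt"), "rb") as f:
        state = pickle.load(f)
    return state["epoch"] + 1, state["model_weights"]

save_dir = tempfile.mkdtemp()
save_checkpoint({"epoch": 7, "model_weights": {"w": 0.5}}, save_dir)
start_epoch, weights = resume(save_dir)  # training restarts at epoch 8
```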

Common setups

The following are common setups for all the training scripts:

  • --source_path: Path for the dataset, e.g. $path/Dataset.
  • --dataset_name: Dataset name, choose BigEarthNet.
  • --save_path: Path to save everything.
  • --use_npmem: Flag. If set, create memory-map files and read data from memory-map files during training.
  • --flag_subset: Flag. If set, select subset dataset.
  • --batch_size: The size of each batch that will be processed.
  • --epochs: Total number of epochs for training.

Train joint model

Stage 1: train the compression part over a wide range of bit-rates

The script train_compression.py expects the following command line arguments:

  • --arch_c: Compression model architecture.
  • --noBpp_epoch: Number of epochs during which the Bpp (bits per pixel) loss is not backpropagated; only applicable to the CNN compression model.
  • --iter: Number of iterations for the RNN compression model.

Note:
The flag --noBpp_epoch sets the number of epochs during which only the distortion loss is trained in the CNN compression model.

While the current epoch is below --noBpp_epoch, the checkpoint is saved for the best PSNR on the validation set.

Once the current epoch exceeds --noBpp_epoch, the checkpoint is saved for the currently smallest bit-rate on the validation set.
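The checkpoint-selection rule just described can be sketched as a single predicate; the function name and signature are illustrative, not the repository's:

```python
def should_save(epoch, noBpp_epoch, psnr, best_psnr, bpp, best_bpp):
    """Stage-1 checkpoint rule for the CNN compression model: before
    --noBpp_epoch keep the checkpoint with the best validation PSNR;
    afterwards keep the one with the lowest validation bit-rate."""
    if epoch < noBpp_epoch:
        return psnr > best_psnr      # distortion-only phase: maximize PSNR
    return bpp < best_bpp            # rate-distortion phase: minimize Bpp
```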

 # CNN compression model
 python ./hashing-and-compression/train_compression.py --source_path ./Dataset --dataset_name BigEarthNet --save_path ./output --use_npmem --flag_subset  --batch_size 32 --arch_c AttentionResidualJointMixGaussian --epochs 1500 --noBpp_epoch 120 --log_online --project  compression_BigEarthNet --group  ssim_cnn  --savename  ssim_cnn     

Stage 2: train the framework jointly

The script train.py expects the following command line arguments:

  • --load_from_checkpoint: Path of the pretrained compression model in Stage 1.
  • --flag_start_new: Flag. If set, reset the initial epoch to 0 and log everything in a new folder.
  • --flag_PCGrad: Flag. If set, the hashing sublosses are optimized by PCGrad.
  • --arch_h: Hashing model architecture.
  • --hash_bits: Length of the hash code.
  • --lr: Learning rate of the compression part, default 1e-7.
  • --hash_lr: Learning rate of the hashing part, default 1e-4.
  • --flag_from_y_hat: Flag. If set, train hash codes from the quantized latents.

Note: Stage 2 loads the pretrained compression part and trains the compression and hashing parts jointly.

The checkpoint is saved for the best average precision on the validation set.

 # The CNN compression part is optimized by MGDA, the hashing part is optimized by PCGrad.
 python ./hashing-and-compression/train.py --source_path ./Dataset --dataset_name BigEarthNet --save_path ./output --use_npmem --flag_subset --batch_size 32 --epochs 40 --load_from_checkpoint ./output/ssim_cnn/AttentionResidualJointMixGaussian_bpp0.7.pth.tar --flag_start_new --flag_PCGrad --arch_h MiLAN_SGN_attention  --hash_bits 64 --lr 1e-7 --hash_lr 1e-4 --log_online --project hashing_and_compression --group ssim_cnn --savename SSIM_stage2_bpp0.7_hashbits64 
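For reference, the gradient-surgery step that --flag_PCGrad enables can be sketched on flat gradient vectors. This is a simplified, deterministic rendering of PCGrad (the published method shuffles the order of the other tasks), written with plain lists rather than the tensors the repository uses:

```python
def pcgrad(grads):
    """PCGrad on flat per-task gradient vectors: whenever two task
    gradients conflict (negative dot product), project one onto the
    normal plane of the other, then sum the surgered gradients."""
    projected = []
    for i, g in enumerate(grads):
        g = list(g)
        for other in grads[:i] + grads[i + 1:]:
            dot = sum(a * b for a, b in zip(g, other))
            if dot < 0:  # conflicting directions: remove the conflicting component
                norm_sq = sum(b * b for b in other)
                g = [a - (dot / norm_sq) * b for a, b in zip(g, other)]
        projected.append(g)
    # combined update direction
    return [sum(components) for components in zip(*projected)]
```

With two conflicting gradients such as [1, 0] and [-1, 1], each is projected so that it no longer opposes the other before summing; non-conflicting gradients pass through unchanged.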

Train hashing baseline

The script train_hashing.py expects the following command line arguments:

  • --arch_autoencoder: Backbone architecture.
  • --arch_h: Hashing model architecture.
  • --flag_PCGrad: Flag. If set, the hashing sublosses are optimized by PCGrad.
  • --flag_dwa: Flag. If set, the hashing sublosses are optimized by Dynamic Weight Average.
  • --hash_bits: Length of the hash code.
  • --lr: Learning rate of the backbone, default 1e-4.
  • --hash_lr: Learning rate of the hashing part, default 1e-4.
# CNN backbone
python  ./hashing-and-compression/train_hashing.py --source_path ./Dataset --dataset_name BigEarthNet --save_path ./output --use_npmem --flag_subset  --batch_size 32 --epochs 40 --arch_autoencoder AutoencoderAttentionResidual --arch_h  MiLAN_SGN  --flag_PCGrad --hash_bits 64 --log_online --project hashing  --group  ssim_cnn --savename  hashing_baseline_hashbit64
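The --flag_dwa option weighs the hashing sub-losses by Dynamic Weight Average, which assigns larger weights to losses that have been descending more slowly. A minimal sketch of the standard DWA weighting rule (the function name and exact interface are illustrative):

```python
import math

def dwa_weights(prev_losses, prev_prev_losses, T=2.0):
    """Dynamic Weight Average: weight each of the K sub-losses by its
    recent rate of descent L(t-1)/L(t-2), softmax-normalized with
    temperature T and rescaled so the weights sum to K."""
    K = len(prev_losses)
    ratios = [l1 / l2 for l1, l2 in zip(prev_losses, prev_prev_losses)]
    exps = [math.exp(r / T) for r in ratios]
    total = sum(exps)
    return [K * e / total for e in exps]
```

If all sub-losses decay at the same rate the weights stay uniform; a sub-loss that has stopped improving receives a larger weight than one that is dropping quickly.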

Evaluation

The evaluation results are written into the checkpoint folder.

The script eval.py expects the following command line arguments:

  • --dataset_path: Path for the dataset, e.g. $path/Dataset/BigEarthNet.
  • --use_npmem: Flag. If set, create memory-map files and read data from memory-map files during evaluation.
  • --flag_subset: Flag. If set, select subset dataset.
  • --batch_size: The size of each batch that will be processed.
  • --metrics: List of metrics for evaluating retrieval performance.
  • --K: Number of samples used to compute the metrics.
  • --load_from_checkpoint: Path of the pretrained models (compression or hashing or both).
  • --entropy_estimation: Flag. If set, use estimated entropy to evaluate the compression performance.
  • --cuda: Flag. If set, use CUDA during evaluation.
  • --flag_hash_from_bits: Flag. If set, the retrieval is evaluated from hashcodes generated from bitstream.
# Evaluate the compression and retrieval performance of joint training
 python ./hashing-and-compression/eval.py --dataset_path ./Dataset/BigEarthNet --use_npmem --flag_subset --load_from_checkpoint ./output/SSIM_stage2_bpp0.7_hashbits64/AttentionResidualJointMixGaussian_MiLAN_SGN_attention_bpp0.7.pth.tar
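As a toy illustration of how retrieval over the top --K results is typically scored, here is a precision@K sketch using Hamming distance on binary hash codes. It assumes a simplified single-label setup (BigEarthNet is multi-label, and the repository's metrics account for that), and all names are illustrative:

```python
def precision_at_k(query_code, query_label, archive, K):
    """Rank the archive by Hamming distance to the query's hash code and
    report the fraction of the K nearest items sharing the query's label."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    ranked = sorted(archive, key=lambda item: hamming(query_code, item[0]))
    return sum(1 for code, label in ranked[:K] if label == query_label) / K
```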

Authors

Baris Buyuktas https://rsim.berlin/team/members/baris-buyuktas

Gencer Sümbül https://rsim.berlin/team/members/gencer-sumbul

License

The code in this repository is licensed under the MIT License:

MIT License

Copyright (c) 2022 The Authors of The Paper, "Learning Across Decentralized Multi-Modal Remote Sensing Archives with Federated Learning"

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.