From 71f49049164afc6f704ac0c2271ca64b04d9820f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Baris=20B=C3=BCy=C3=BCktas?= <baris.bueyuektas@tu-berlin.de>
Date: Fri, 13 Jan 2023 14:28:06 +0000
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 1970203..06f198d 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@ This repository contains (in parts) code that has been adapted from:
 
 
 ## Introduction
-Remote sensing (RS) image archives are stored under different databases due to their growth in size and the data storage limitations of gathering all the data in a centralized server. In addition, RS image archives of some data providers (e.g., commercial providers) may not be accessible to the public due to commercial concerns, legal regulations, etc. However, most of the deep learning (DL) based approaches require full access to data while learning the model parameters of deep neural networks (DNNs). When there is no access to data on decentralized RS image archives, federated learning (FL) can be used, which aims to learn DNN models on distributed databases (i.e., clients) and to find the optimal model parameters in a global server (i.e., global model) without accessing data on clients. However, RS images on different clients can be associated with different data modalities. To address the above-mentioned issues, as the first time in RS, we propose a FL framework in the context of multi-modal scene classification in RS. The proposed framework aims to learn DNNs on a central server for multi-modal classification problems without accessing data on clients, which can hold RS images associated with multiple modalities. To this end, our framework includes three modules: 1) multi-modal fusion (MF); 2) feature whitening (FW); and 3) mutual information maximization (MIM). According to the results, our framework achieves the highest score compared to MSFedAvg, which is a way to perform multi-modal FL by separately performing FedAvg algorithm for multiple modalities and then averaging class probabilities during inference.
+Remote sensing (RS) image archives are stored in different databases due to their growing size and the storage limitations of gathering all the data on a centralized server. In addition, the RS image archives of some data providers (e.g., commercial providers) may not be publicly accessible due to commercial concerns, legal regulations, etc. However, most deep learning (DL) based approaches require full access to the data while learning the model parameters of deep neural networks (DNNs). When there is no access to data on decentralized RS image archives, federated learning (FL) can be used, which aims to learn DNN models on distributed databases (i.e., clients) and to find the optimal model parameters on a global server (i.e., global model) without accessing the data on the clients. However, RS images on different clients can be associated with different data modalities. To address this issue, for the first time in RS, we propose an FL framework for multi-modal scene classification in RS. The proposed framework aims to learn DNNs on a central server for multi-modal classification problems without accessing data on clients, which can hold RS images associated with multiple modalities. To this end, our framework includes three modules: 1) multi-modal fusion (MF); 2) feature whitening (FW); and 3) mutual information maximization (MIM). In our experiments, the proposed framework achieves higher scores than MSFedAvg, a multi-modal FL baseline that separately applies the FedAvg algorithm to each modality and then averages the class probabilities during inference.
 
 ## Prerequisites
 The code in this repository requires Python 3.10.4, pytorch 1.12.1. 
-- 
GitLab
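
The two aggregation steps named in the introduction above can be sketched as follows. This is an illustrative sketch only, not the repository's implementation: `fedavg` shows the standard FedAvg step of averaging client model parameters weighted by local dataset size, and `average_probs` shows the MSFedAvg inference step of averaging class probabilities across modality-specific models. All names (`client_weights`, `client_sizes`, `probs_per_modality`) are hypothetical.

```python
# Sketch of FedAvg parameter aggregation and MSFedAvg probability
# averaging, as described in the introduction. Illustrative only;
# the function and variable names are hypothetical, not the repo's API.

def fedavg(client_weights, client_sizes):
    """Average client model parameters, weighted by local dataset size.

    client_weights: list of dicts mapping parameter name -> value.
    client_sizes: number of local training samples per client.
    """
    total = sum(client_sizes)
    return {
        name: sum(w[name] * n for w, n in zip(client_weights, client_sizes)) / total
        for name in client_weights[0]
    }

def average_probs(probs_per_modality):
    """MSFedAvg inference: average class probabilities over the
    separately trained modality-specific models."""
    n = len(probs_per_modality)
    return [sum(p) / n for p in zip(*probs_per_modality)]

# Two clients holding toy one-parameter "models":
global_model = fedavg([{"w": 1.0}, {"w": 3.0}], [100, 300])  # {"w": 2.5}

# Two modality-specific models emitting class probabilities:
fused = average_probs([[0.8, 0.2], [0.6, 0.4]])  # [0.7, 0.3]
```

In contrast to this late fusion of class probabilities, the proposed framework fuses the modalities inside a single jointly learned model via its MF, FW, and MIM modules.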