Please note that these tutorials refer to a deprecated version of the platform. The current version of the platform, which is not publicly available, has a more advanced architecture and provides a wider range of functionalities. These tutorials are only for illustrative purposes and showcase a limited number of the platform’s capabilities.

Federated learning: basic concepts

In this notebook, we provide a simple example of how to run an experiment in a federated environment with a PyTorch model that uses custom layers, with the help of the Sherpa.ai Federated Learning framework. To set up the federated learning experiment, we will walk through the simple steps for loading the dataset and distributing it to a federated network of clients, and we will define the model that will be trained in the federated learning rounds.

The data

We are going to use a popular dataset: the framework provides functions to load the EMNIST digits dataset.

import matplotlib.pyplot as plt
import numpy as np
import shfl
from shfl.auxiliar_functions_for_notebooks.functionsFL import *
from sklearn.metrics import accuracy_score, f1_score
import torch
import torch.nn as nn
import torch.optim as optim

database = shfl.data_base.Emnist()
train_data, train_labels, test_data, test_labels = database.load_data()

Let's inspect some properties of the loaded data.

print(len(train_data))
print(len(test_data))
print(type(train_data[0]))
train_data[0].shape
240000
40000
<class 'numpy.ndarray'>
(28, 28)

So, as we have seen, our dataset is composed of 28×28 matrices, each representing a grayscale image of a handwritten digit. Before starting with the federated scenario, we can take a look at a sample of the training data.

plt.imshow(train_data[0])
<matplotlib.image.AxesImage at 0x7fb13f2057f0>

[Figure: a sample training image showing a handwritten digit]

We are going to simulate a federated learning scenario with a set of client nodes containing private data, and a central server that is responsible for coordinating the different clients. First of all, however, we have to simulate the data held by every client, using the previously loaded dataset. The assumption in this example is that the data is distributed as a set of independent and identically distributed (IID) random variables, with every node holding approximately the same amount of data.

iid_distribution = shfl.data_distribution.IidDataDistribution(database)
nodes_federation, test_data, test_labels = iid_distribution.get_nodes_federation(num_nodes=20, percent=10)

That's it! We have created federated data from the EMNIST dataset using 20 nodes and 10 percent of the available data. The data is now distributed across a set of data nodes in the form of private data.
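
For intuition, an IID split of this kind can be pictured as shuffling the sample indices and dealing a fraction of them out evenly across the nodes. The following minimal numpy sketch illustrates the idea (our own illustration, not the framework's internal code):

# Minimal sketch of an IID split (illustration only, not shfl internals):
# shuffle the indices, keep 10 percent of the samples, and deal them evenly
# across 20 nodes so every node sees the same label distribution on average.
num_samples = len(train_data)
kept = np.random.permutation(num_samples)[:num_samples // 10]
node_indices = np.array_split(kept, 20)
node_data = [train_data[idx] for idx in node_indices]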

The model

A federated learning algorithm is defined by a machine learning model that is deployed locally in each node and learns from that node's private data. An aggregation mechanism then combines the model parameters uploaded by the client nodes and redistributes the aggregated parameters to the nodes, and this loop repeats for a number of training rounds.
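
Schematically, one round of this loop can be sketched as follows (a toy illustration with scalar "parameters" and a made-up local_train, not the framework's API):

# Toy sketch of the federated training loop (illustration only, not the shfl API).
def local_train(node_data, global_params):
    # Stand-in for local learning: nudge the parameters toward the node's data mean.
    return global_params + 0.1 * (node_data.mean() - global_params)

def federated_round(global_params, nodes_data):
    client_params = [local_train(data, global_params) for data in nodes_data]
    return np.mean(client_params, axis=0)  # aggregate and redistribute to the nodes

nodes_data = [np.random.randn(100) + i for i in range(3)]  # three toy clients
params = np.zeros(1)
for _ in range(5):
    params = federated_round(params, nodes_data)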

In this example, we will build a deep learning model with PyTorch. The framework provides classes for using custom PyTorch models in a federated learning scenario; your only job is to create a function that acts as a model builder. Moreover, the framework allows you to introduce user-defined layers into the model, adding more customization possibilities. In this example, we define a Flatten layer and then use it in our model.

def accuracy(y_pred, y_true):
    """
    # Arguments:
        y_pred: Predictions with shape BxC (B: batch length; C: number of classes). Each row sums to 1.
        y_true: One-hot encoded labels.
    """
    return accuracy_score(np.argmax(y_pred, -1), np.argmax(y_true, -1))

def f1(y_pred, y_true):
    """
    # Arguments:
        y_pred: Predictions with shape BxC (B: batch length; C: number of classes). Each row sums to 1.
        y_true: One-hot encoded labels.
    """
    return f1_score(np.argmax(y_pred, -1), np.argmax(y_true, -1), average='macro')
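
# As a quick sanity check of these metrics (our addition, not part of the
# original tutorial): two dummy one-hot samples that are both classified
# correctly should yield accuracy and macro F1 of 1.0.
_y_true = np.array([[1, 0, 0], [0, 1, 0]])
_y_pred = np.array([[0.9, 0.05, 0.05], [0.2, 0.7, 0.1]])
assert accuracy(_y_pred, _y_true) == 1.0
assert f1(_y_pred, _y_true) == 1.0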


class Flatten(nn.Module):
    """User-defined layer that flattens each sample to a 1-D vector."""
    def forward(self, input):
        return input.view(input.size(0), -1)
    

def model_builder():
    # Two convolutional blocks followed by a small fully connected classifier.
    model = nn.Sequential(
        nn.Conv2d(1, 32, kernel_size=(3, 3), stride=1, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
        nn.Dropout(.4),
        nn.Conv2d(32, 32, kernel_size=(3, 3), stride=1, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
        nn.Dropout(.3),
        Flatten(),
        nn.Linear(1568, 128),  # 32 channels * 7 * 7 spatial positions = 1568
        nn.ReLU(),
        nn.Dropout(.1),
        nn.Linear(128, 64),
        nn.ReLU(),
        nn.Linear(64, 10),
        nn.Softmax(dim=1)
    )
    loss = nn.CrossEntropyLoss()
    optimizer = optim.RMSprop(model.parameters(), lr=0.001, eps=1e-07)

    # Train on GPU when available, otherwise fall back to CPU.
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    return shfl.model.DeepLearningModelPyTorch(model=model, loss=loss, optimizer=optimizer,
                                               device=device, metrics={'accuracy': accuracy, 'f1': f1})
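
As a quick sanity check of the architecture (our addition, not part of the original tutorial), we can trace the spatial dimensions: each 3×3 convolution with padding 1 preserves the 28×28 size, and each 2×2 max-pooling halves it, so the tensors entering Flatten have 32 channels of size 7×7, i.e., 32 × 7 × 7 = 1568 features, matching nn.Linear(1568, 128). A throwaway probe of the convolutional stack confirms this:

# Throwaway probe (our addition): a random batch of two 1x28x28 images should
# come out of the convolutional stack with 1568 features per sample.
probe = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=(3, 3), stride=1, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Conv2d(32, 32, kernel_size=(3, 3), stride=1, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
    Flatten(),
)
print(probe(torch.randn(2, 1, 28, 28)).shape)  # torch.Size([2, 1568])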

Now, the only missing piece is the aggregation operator. Fortunately, the framework provides several aggregation operators we can use. In the following piece of code, we choose the federated aggregation mechanism and define the federated government based on the PyTorch model, the federated data, and the aggregation mechanism.

aggregator = shfl.federated_aggregator.FedAvgAggregator()
federated_government = shfl.federated_government.FederatedGovernment(model_builder(), nodes_federation, aggregator)
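
FedAvgAggregator implements federated averaging: each aggregated parameter is the mean of the corresponding parameters sent by the clients. The following minimal numpy sketch illustrates the idea (our own illustration, not the framework's implementation, which may also weight clients by their amount of data):

# Minimal sketch of federated averaging (illustration only, not shfl internals).
def fed_avg_sketch(clients_params):
    # clients_params: one list of per-layer numpy arrays per client.
    return [np.mean(layers, axis=0) for layers in zip(*clients_params)]

client_a = [np.ones((2, 2)), np.zeros(3)]
client_b = [3 * np.ones((2, 2)), np.ones(3)]
fed_avg_sketch([client_a, client_b])  # [2.0 everywhere in the 2x2, 0.5 everywhere in the vector]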

Before running the algorithm, we want to apply a transformation to the data. A good practice is to define a federated operation, which ensures the transformation is applied to the federated data in all the client nodes. Since we want to reshape the data, we define the following federated transformation.

nodes_federation.apply_data_transformation(reshape_data_pt);
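
We do not show the helper's source here, but judging by the transformation applied to the global test set below, a plausible sketch of what reshape_data_pt does is to add the channel axis that PyTorch convolutions expect, turning each node's (N, 28, 28) array into (N, 1, 28, 28). The reshape_for_torch below is our own illustrative stand-in:

# Hypothetical sketch of such a reshape (the real reshape_data_pt ships with
# the framework's notebook helpers): add the channel axis nn.Conv2d expects.
def reshape_for_torch(images):
    return np.reshape(images, (images.shape[0], 1, images.shape[1], images.shape[2]))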

In addition, we want to normalize the data, which is good practice to help the model converge faster. We define a federated transformation parameterized by a mean and a standard deviation (std). In this example, we use the mean and std estimated from the global training set; ideally, these parameters would be an aggregation of each client's local statistics, but the global values serve as a simple approximation for this experiment.

# Global statistics estimated from the training set.
mean = np.mean(train_data)
std = np.std(train_data)
nodes_federation.apply_data_transformation(normalize_data, mean=mean, std=std);
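
For reference, the normalization itself amounts to standardization with the global statistics; a minimal sketch of what normalize_data might apply on each node (normalize_sketch is our own illustrative stand-in):

# Minimal sketch of the per-node normalization (illustration only; the real
# normalize_data ships with the framework's notebook helpers).
def normalize_sketch(images, mean, std):
    return (images - mean) / std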

Run the federated learning experiment

We are now ready to execute our federated learning algorithm. Note that such global test data would not be available in a real federated setting; we use it here purely for experimental evaluation. In any case, it needs the same data transformations as the federated data in order to be useful.

# Apply the same transformations as the federated data: add the channel axis
# and standardize with the global mean and std.
test_data = np.reshape(test_data, (test_data.shape[0], 1, test_data.shape[1], test_data.shape[2]))
test_data = (test_data - mean) / std

federated_government.run_rounds(3, test_data, test_labels)
Evaluation in round 0:

Collaborative model test -> Loss: 2.229773759841919, Accuracy: 0.680375
========================


Evaluation in round 1:

Collaborative model test -> Loss: 1.6345568895339966, Accuracy: 0.842725
========================


Evaluation in round 2:

Collaborative model test -> Loss: 1.5518444776535034, Accuracy: 0.918725
========================