Please note that these tutorials refer to a deprecated version of the platform. The current version of the platform, which is not publicly available, has a more advanced architecture and provides a wider range of functionalities. These tutorials are only for illustrative purposes and showcase a limited number of the platform’s capabilities.

Federated learning: pretrained model

In this notebook, we provide a simple example of how to perform an experiment in a federated environment with the help of the Sherpa.ai Federated Learning framework. We are going to use a pretrained model to show how new or already trained models can be included in the training.

This is important because companies often already have local models trained and in production, so federated training is used to boost performance starting from an already good point. If a robust base model is used (with its parameters already tuned for the problem at hand), the time to converge is significantly reduced.

The data

The framework provides some functions for loading the Emnist digits dataset for experimental purposes.

import copy
import matplotlib.pyplot as plt
import numpy as np
import shfl
from shfl.auxiliar_functions_for_notebooks.functionsFL import *
import tensorflow as tf

database = shfl.data_base.Emnist()
train_data, train_labels, test_data, test_labels = database.load_data()

Let's inspect some properties of the loaded data.

print(len(train_data))
print(len(test_data))
print(type(train_data[0]))
train_data[0].shape
240000
40000
<class 'numpy.ndarray'>

(28, 28)

So, as we have seen, our dataset is composed of a set of 28×28 matrices. Before starting with the federated scenario, we can take a look at a sample from the training data.

plt.imshow(train_data[0])
<matplotlib.image.AxesImage at 0x7f31650be7c0>

png

We are going to simulate a federated learning scenario with a set of client nodes containing private data, and a central server that is responsible for coordinating the different clients. There are several ways to distribute the data, and the data distribution is one of the factors with the greatest impact on a federated algorithm.

In order to do that, we are going to use the previously loaded dataset. The assumption in this example is that the data is distributed as a set of independent and identically distributed random variables, with every node having approximately the same amount of data.
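For intuition, an IID partition can be emulated outside the framework simply by shuffling the samples and splitting them into equally sized shards. The following snippet is only an illustrative sketch of that idea with plain NumPy; it is not the framework's own implementation, which we use further below through IidDataDistribution.

# Illustrative only: emulate an IID partition by shuffling and splitting evenly.
# The framework's IidDataDistribution (used below) handles this internally.
rng = np.random.default_rng(seed=0)
permutation = rng.permutation(len(train_data))

shuffled_data = train_data[permutation]
shuffled_labels = train_labels[permutation]

# Each of the simulated clients receives a disjoint, equally sized shard.
client_shards = list(zip(np.array_split(shuffled_data, 5),
                         np.array_split(shuffled_labels, 5)))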

Let's assume that the current model has been trained only on the central company's samples and has never seen any other data. Assigning one third of the training data to the central headquarters and the remaining two thirds to the other entities gives us a reasonable scenario for testing the model's behaviour.

First of all, let's split the original training set into three parts: one for the central headquarters and two for the remaining entities. We will keep the global test dataset at its original size, as this held-out data is used only for experimental purposes and does not correspond to a real scenario.

# Split the training set into three equal parts: one for the central
# headquarters, two for the rest of the entities.
divided_data = np.array_split(train_data, 3)
divided_labels = np.array_split(train_labels, 3)

central_data = divided_data[0]
central_labels = divided_labels[0]

rest_data = np.concatenate((divided_data[1], divided_data[2]), axis=0)
rest_labels = np.concatenate((divided_labels[1], divided_labels[2]), axis=0)

# Add the channel dimension expected by the convolutional model.
central_data = central_data.reshape(-1, 28, 28, 1)

# The remaining two thirds become the training data of the federated clients.
database._train_data = rest_data
database._train_labels = rest_labels

iid_distribution = shfl.data_distribution.IidDataDistribution(database)
nodes_federation, test_data, test_label = iid_distribution.get_nodes_federation(num_nodes=5, percent=100)

That's it! We have created federated data from the Emnist dataset using 5 nodes and 100 percent of the remaining training data. This data is distributed to a set of data nodes in the form of private data.

The model

A federated learning algorithm is defined by a machine learning model that is deployed locally in each node and learns from that node's private data. An aggregation mechanism then combines the model parameters uploaded by the client nodes and redistributes the result to the nodes, which continue training, and this loop is repeated for a number of rounds.
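As a mental model of that loop, here is a framework-agnostic sketch of a single federated round. It is not the Sherpa.ai implementation: build_local_model and aggregate are hypothetical placeholders, and the weight handling assumes Keras-style get_weights/set_weights.

# Conceptual sketch of one federated round; build_local_model and aggregate
# are hypothetical placeholders, not part of the shfl API.
def federated_round(global_weights, client_datasets, aggregate):
    client_weights = []
    for data, labels in client_datasets:
        local = build_local_model()                   # each client builds the shared architecture
        local.set_weights(global_weights)             # ...starts from the current global parameters
        local.fit(data, labels, epochs=1, verbose=0)  # ...trains only on its private data
        client_weights.append(local.get_weights())    # ...and uploads just the resulting parameters
    # The server aggregates the uploaded parameters and redistributes the result.
    return aggregate(client_weights)

The run_rounds call further below repeats this cycle for the requested number of rounds and evaluates the collaborative model after each one.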

In this example, we will build a deep learning model with Keras. The framework provides classes for using TensorFlow and Keras models in a federated learning scenario; your only job is to create a function that acts as a model builder. Moreover, the framework provides classes that allow the use of pretrained TensorFlow and Keras models. In this example, we will use a pretrained Keras model.

Let's use the central data to train the main headquarters model:

# If you want to execute on a GPU, uncomment these two lines.
# physical_devices = tf.config.experimental.list_physical_devices('GPU')
# tf.config.experimental.set_memory_growth(physical_devices[0], True)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(16, kernel_size=(3, 3), padding='same', activation='relu', strides=1, input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2, strides=2, padding='valid'))
model.add(tf.keras.layers.Dropout(0.4))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

model.fit(x=central_data, y=central_labels, batch_size=128, epochs=1, validation_split=0.3, 
                verbose=1, shuffle=False)


438/438 [==============================] - 6s 13ms/step - loss: 1.5809 - accuracy: 0.7521 - val_loss: 0.2730 - val_accuracy: 0.9362

<keras.callbacks.History at 0x7f3160eb7a00>

Now that the training is done, we compute the model's predictions on the test data so that we can later compare them with those obtained through federated learning.

predictions_cent = model.predict(test_data)

For the federated training, we encapsulate the model so that it can work with the Sherpa.ai Federated Learning framework.

def model_builder():
    pretrained_model = model
    
    loss = tf.keras.losses.CategoricalCrossentropy()
    optimizer = tf.keras.optimizers.RMSprop()
    metrics = [tf.keras.metrics.categorical_accuracy]
    
    return shfl.model.DeepLearningModel(model=pretrained_model, loss=loss, optimizer=optimizer, metrics=metrics)

Now, the only piece missing is the aggregation operator. Fortunately, the framework provides several aggregation operators that we can use. In the following piece of code, we define the federated aggregation mechanism. Moreover, we define the federated government based on the Keras model, the federated data, and the aggregation mechanism.

aggregator = shfl.federated_aggregator.FedAvgAggregator()
federated_government = shfl.federated_government.FederatedGovernment(model_builder(), nodes_federation, aggregator)
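FedAvgAggregator implements federated averaging: the clients' parameters are combined layer by layer, and in the original FedAvg formulation the average is weighted by each client's data size. The following is only a minimal NumPy sketch of that aggregation step, not the framework's internal code.

# Minimal sketch of federated averaging; client_params is a list where each
# element is one client's list of layer weight arrays.
def federated_average(client_params, client_sizes=None):
    if client_sizes is None:
        # Plain mean, appropriate when all clients hold similar amounts of data.
        return [np.mean(np.stack(layers), axis=0) for layers in zip(*client_params)]
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    # Weighted mean over clients, layer by layer.
    return [np.tensordot(weights, np.stack(layers), axes=1)
            for layers in zip(*client_params)]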

Before running the algorithm, we want to apply some transformations to the data. A good practice is to define a federated operation that ensures the transformation is applied to the federated data in all the client nodes. We want to reshape the data and cast it to float, so we apply the following federated transformations from our library directly to the nodes' data:

nodes_federation.apply_data_transformation(reshape_data_tf);
nodes_federation.apply_data_transformation(cast_to_float);
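reshape_data_tf and cast_to_float are helpers imported from the notebook's auxiliary module. Assuming they behave as their names suggest, the array-level transformation they apply is roughly the following (an assumption for illustration, not the library source), mirroring what we do to the global test data in the next section.

# Assumed array-level equivalent of the imported helpers (illustration only):
# add a trailing channel axis, (N, 28, 28) -> (N, 28, 28, 1), and cast to float32.
def reshape_and_cast(images):
    return np.reshape(images, (-1, 28, 28, 1)).astype(np.float32)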

Run the federated learning experiment

We are now ready to execute our federated learning algorithm. The global test data we are using would not exist in a real case; it is purely experimental. However, it needs the same data transformations as the federated data in order to be useful.

test_data = np.reshape(test_data, (test_data.shape[0], test_data.shape[1], test_data.shape[2],1))
test_data = test_data.astype(np.float32)

federated_government.run_rounds(5, test_data, test_label)
Evaluation in round 0:

Collaborative model test -> Loss: 0.1214061975479126, Accuracy: 0.9671750068664551
========================


Evaluation in round 1:

Collaborative model test -> Loss: 0.09997041523456573, Accuracy: 0.9735749959945679
========================


Evaluation in round 2:

Collaborative model test -> Loss: 0.08663196116685867, Accuracy: 0.9773749709129333
========================


Evaluation in round 3:

Collaborative model test -> Loss: 0.07743880152702332, Accuracy: 0.9807000160217285
========================


Evaluation in round 4:

Collaborative model test -> Loss: 0.07763899862766266, Accuracy: 0.9810500144958496
========================

ROC curve

Now that we have finished the training, let's compare the two models and see how each of them behaved.

federated_model = federated_government._server._model
predictions_fed = federated_model.predict(test_data)

After the training process, we have already seen the accuracy of the model; now we calculate the ROC curves and AUC scores for the two models.

values = [predictions_cent, predictions_fed]
titles = ['Headquarters model', 'Federated model']
colors = ['red', 'green']
linestyle = ['-', '-.']

plot_all_roc_curves(test_labels, values, titles, colors, linestyle)

png
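plot_all_roc_curves is a plotting helper from the notebook's auxiliary module. If you only need the numbers, the macro-averaged one-vs-rest ROC AUC can be computed directly with scikit-learn (assuming, as above, that test_labels is one-hot encoded):

from sklearn.metrics import roc_auc_score

# Macro-averaged ROC AUC over the 10 classes for both models.
auc_cent = roc_auc_score(test_labels, predictions_cent, average='macro')
auc_fed = roc_auc_score(test_labels, predictions_fed, average='macro')
print(f"Headquarters model AUC: {auc_cent:.4f}")
print(f"Federated model AUC:    {auc_fed:.4f}")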

F1 score

Another metric is the F1 score, the harmonic mean of precision and recall, which balances the two. This metric allows us to estimate performance in another way and to compare the results more thoroughly.

n_classes = 10

# Convert the probability outputs into one-hot predicted classes before computing F1.
values_f1_fed = predictions_fed.argmax(axis=-1)
values_f1_fed = np.eye(n_classes)[values_f1_fed]

values_f1_cent = predictions_cent.argmax(axis=-1)
values_f1_cent = np.eye(n_classes)[values_f1_cent]

score_fed_f1 = f1_score(test_labels, values_f1_fed, average='macro')
score_cent_f1 = f1_score(test_labels, values_f1_cent, average='macro')

values = [round(score_cent_f1, 3), round(score_fed_f1, 3)]
titles = ['Centralized', 'Federated']
colors = ['red', 'green']
plot_all_f1(values, titles, colors)

png
