Privacy Artificial Intelligence

Pretrained Model

In this notebook, we provide a simple example of how to perform an experiment in a federated environment with the help of this framework. We use a popular dataset to start experimenting in a federated environment. The framework provides some functions for loading the EMNIST Digits dataset.

import shfl
import matplotlib.pyplot as plt

database = shfl.data_base.Emnist()
train_data, train_labels, test_data, test_labels = database.load_data()

Let's inspect some properties of the loaded data.

print(len(train_data))
print(len(test_data))
print(type(train_data[0]))
train_data[0].shape
240000
40000
<class 'numpy.ndarray'>

(28, 28)

As we can see, our dataset is composed of 28 by 28 matrices. Before starting with the federated scenario, we can take a look at a sample from the training data.

plt.imshow(train_data[0])

[Image: matplotlib rendering of the first training sample]

We are going to simulate a federated learning scenario with a set of client nodes containing private data, and a central server that will be responsible for coordinating the different clients. But, first of all, we have to simulate the data contained in every client. In order to do that, we are going to use the previously loaded dataset. The assumption in this example is that the data is distributed as a set of independent and identically distributed random variables, with every node having approximately the same amount of data. There are several other possibilities for distributing the data, and the distribution of the data is one of the factors that can have the most impact on a federated algorithm. Therefore, the framework implements some of the most common distributions, which allows you to easily experiment with different situations. In Sampling Methods, you can dig into the options that the framework currently provides.

iid_distribution = shfl.data_distribution.IidDataDistribution(database)
federated_data, test_data, test_label = iid_distribution.get_federated_data(num_nodes=20, percent=10)
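
The IID split is only one of the available options. To simulate statistical heterogeneity across the clients, the framework also includes a non-IID splitter; the following sketch assumes it is exposed as NonIidDataDistribution with the same get_federated_data interface (check the Sampling Methods notebook for the exact API).

# Hedged sketch (not used in the rest of this notebook): a non-IID split of the
# same database. The class name NonIidDataDistribution is assumed here.
non_iid_distribution = shfl.data_distribution.NonIidDataDistribution(database)
federated_data_non_iid, test_data_non_iid, test_label_non_iid = \
    non_iid_distribution.get_federated_data(num_nodes=20, percent=10)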

That's it! We have created federated data from the EMNIST dataset using 20 nodes and 10 percent of the available data. This data is distributed to a set of data nodes in the form of private data. Let's learn a little more about the federated data.

print(type(federated_data))
print(federated_data.num_nodes())
federated_data[0].private_data
<class 'shfl.private.federated_operation.FederatedData'>
20
Node private data, you can see the data for debug purposes but the data remains in the node
<class 'dict'>
{'5619496464': <shfl.private.data.LabeledData object at 0x14ef93d90>}

As we can see, the private data in a node is not directly accessible, but the framework provides mechanisms to use this data in a machine learning model. A federated learning algorithm is defined by two elements: a machine learning model, locally deployed in each node, that learns from the respective node's private data, and an aggregation mechanism that combines the model parameters uploaded by the client nodes into a central node. In this example, we will use a deep learning model built with Keras. The framework provides classes for using TensorFlow (see TensorFlow Model) and Keras (see A Simple Experiment) models in a federated learning scenario; your only job is to create a function acting as a model builder. Moreover, the framework provides classes for using pretrained TensorFlow and Keras models. In this example, we will use a pretrained Keras model.

import tensorflow as tf
# If you want to execute on a GPU, you must uncomment these two lines.
# physical_devices = tf.config.experimental.list_physical_devices('GPU')
# tf.config.experimental.set_memory_growth(physical_devices[0], True)

train_data = train_data.reshape(-1,28,28,1)

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', strides=1, input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2, strides=2, padding='valid'))
model.add(tf.keras.layers.Dropout(0.4))
model.add(tf.keras.layers.Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', strides=1))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2, strides=2, padding='valid'))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.1))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))

model.compile(optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"])

model.fit(x=train_data, y=train_labels, batch_size=128, epochs=3, validation_split=0.2,
                verbose=1, shuffle=False)
Epoch 1/3
1500/1500 [==============================] - 306s 204ms/step - loss: 0.3516 - accuracy: 0.9317 - val_loss: 0.0537 - val_accuracy: 0.9857
Epoch 2/3
1500/1500 [==============================] - 232s 154ms/step - loss: 0.0742 - accuracy: 0.9798 - val_loss: 0.0413 - val_accuracy: 0.9895
Epoch 3/3
1500/1500 [==============================] - 221s 147ms/step - loss: 0.0678 - accuracy: 0.9819 - val_loss: 0.0433 - val_accuracy: 0.9891


<tensorflow.python.keras.callbacks.History at 0x150515bd0>

We wrap the pretrained Keras model in a model builder function, which is what the federated government expects:

def model_builder():
    return shfl.model.DeepLearningModel(model=model)
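
For contrast, a model builder that does not start from pretrained weights would construct and compile a fresh Keras model on every call; a minimal sketch with an illustrative architecture (not the one used in this notebook):

# Minimal sketch of a from-scratch model builder (illustrative architecture).
def scratch_model_builder():
    new_model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    new_model.compile(optimizer="rmsprop", loss="categorical_crossentropy",
                      metrics=["accuracy"])
    return shfl.model.DeepLearningModel(model=new_model)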

Now, the only piece missing is the aggregation operator. Fortunately, the framework provides several aggregation operators that we can use. In the following piece of code, we define the federated aggregation mechanism and the federated government, based on the Keras learning model, the federated data, and the aggregation mechanism.

aggregator = shfl.federated_aggregator.FedAvgAggregator()
federated_government = shfl.federated_government.FederatedGovernment(model_builder, federated_data, aggregator)
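
Conceptually, federated averaging (FedAvg) builds the new global model by averaging the clients' parameters layer by layer. The standalone NumPy sketch below only illustrates that idea; it is not the framework's implementation, which may, for instance, weight each client by the amount of data it holds.

import numpy as np

# Illustrative sketch of federated averaging (not the framework's code).
# clients_params: one list of layer arrays per client.
def federated_average(clients_params):
    num_layers = len(clients_params[0])
    return [np.mean([client[layer] for client in clients_params], axis=0)
            for layer in range(num_layers)]

# Example: three clients, each holding two layers of parameters.
clients = [[np.ones((2, 2)) * i, np.ones(3) * i] for i in range(3)]
global_params = federated_average(clients)  # each layer averaged across clients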

If you want to see all the aggregation operators, you can check out the Aggregation Operators notebook. Before running the algorithm, we want to apply a transformation to the data. A good practice is to define a federated operation, which ensures that the transformation is applied to the federated data in all the client nodes. Here we want to reshape the data and cast it to float, so we define the following FederatedTransformations.

import numpy as np

class Reshape(shfl.private.FederatedTransformation):

    def apply(self, labeled_data):
        labeled_data.data = np.reshape(labeled_data.data, (labeled_data.data.shape[0], labeled_data.data.shape[1], labeled_data.data.shape[2],1))

class CastFloat(shfl.private.FederatedTransformation):

    def apply(self, labeled_data):
        labeled_data.data = labeled_data.data.astype(np.float32)

shfl.private.federated_operation.apply_federated_transformation(federated_data, Reshape())
shfl.private.federated_operation.apply_federated_transformation(federated_data, CastFloat())

We are now ready to execute our federated learning algorithm.

test_data = np.reshape(test_data, (test_data.shape[0], test_data.shape[1], test_data.shape[2],1))
test_data = test_data.astype(np.float32)
federated_government.run_rounds(2, test_data, test_label)
Accuracy round 0
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef93dd0>: [0.059097617864608765, 0.984250009059906]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef93910>: [0.05121442303061485, 0.9866750240325928]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef93390>: [0.04485281929373741, 0.9889249801635742]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4a050>: [0.04413288086652756, 0.9881500005722046]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4a650>: [0.040550678968429565, 0.9884750247001648]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4a9d0>: [0.07566175609827042, 0.9838250279426575]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4af10>: [0.038839858025312424, 0.9895750284194946]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4a150>: [0.05437803268432617, 0.9887999892234802]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4a750>: [0.03736472502350807, 0.9901000261306763]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4ad10>: [0.0360310897231102, 0.9903249740600586]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4a710>: [0.04765767231583595, 0.9865999817848206]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14b5c76d0>: [0.10265535861253738, 0.9778500199317932]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14b5c7c10>: [0.06003579869866371, 0.9826499819755554]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14b5c7650>: [0.03950931131839752, 0.9896000027656555]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef83e10>: [0.05210889130830765, 0.985450029373169]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef833d0>: [0.03694518283009529, 0.9903749823570251]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef83d50>: [0.05010194703936577, 0.9865750074386597]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef830d0>: [0.043626025319099426, 0.9879999756813049]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef83dd0>: [0.06857322156429291, 0.9818999767303467]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef83d90>: [0.057210374623537064, 0.9850000143051147]
Global model test performance : [0.029051978141069412, 0.9924499988555908]



Accuracy round 1
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef93dd0>: [0.04505598917603493, 0.98785001039505]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef93910>: [0.031614579260349274, 0.9909499883651733]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef93390>: [0.04146265238523483, 0.9889249801635742]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4a050>: [0.03844350948929787, 0.9900500178337097]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4a650>: [0.045038823038339615, 0.9879000186920166]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4a9d0>: [0.04560331255197525, 0.9894999861717224]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4af10>: [0.035667870193719864, 0.9908999800682068]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4a150>: [0.043869953602552414, 0.9886749982833862]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4a750>: [0.04920487478375435, 0.9887999892234802]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4ad10>: [0.03859545290470123, 0.9897750020027161]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef4a710>: [0.044665370136499405, 0.9881749749183655]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14b5c76d0>: [0.07344941049814224, 0.9815000295639038]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14b5c7c10>: [0.0483868308365345, 0.9872999787330627]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14b5c7650>: [0.04226008802652359, 0.989175021648407]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef83e10>: [0.04259002208709717, 0.9884999990463257]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef833d0>: [0.04175693914294243, 0.9884999990463257]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef83d50>: [0.03984719514846802, 0.989549994468689]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef830d0>: [0.037409715354442596, 0.9908249974250793]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef83dd0>: [0.04247940704226494, 0.9897750020027161]
Test performance client <shfl.private.federated_operation.FederatedDataNode object at 0x14ef83d90>: [0.04220372065901756, 0.9893249869346619]
Global model test performance : [0.029832247644662857, 0.9924250245094299]