GitHub Repository: tensorflow/docs-l10n
Path: blob/master/site/en-snapshot/federated/openmined2020/openmined_conference_2020.ipynb
Kernel: Python 3

Before we start

To edit the Colab notebook, please go to "File" -> "Save a copy in Drive" and make any edits on your copy.

Before we start, please run the following to make sure that your environment is correctly setup. If you don't see a greeting, please refer to the Installation guide for instructions.

#@title Upgrade tensorflow_federated and load TensorBoard
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio

import nest_asyncio
nest_asyncio.apply()

%load_ext tensorboard

import sys
if not sys.warnoptions:
  import warnings
  warnings.simplefilter("ignore")
#@title
import collections

from matplotlib import pyplot as plt
from IPython.display import display, HTML, IFrame
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff

tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)

np.random.seed(0)

def greetings():
  display(HTML('<b><font size="6" color="#ff00f4">Greetings, virtual tutorial participants!</font></b>'))
  return True

l = tff.federated_computation(greetings)()

TensorFlow Federated for Image Classification

Let's experiment with federated learning in simulation. In this tutorial, we use the classic MNIST training example to introduce the Federated Learning (FL) API layer of TFF, tff.learning - a set of higher-level interfaces that can be used to perform common types of federated learning tasks, such as federated training, against user-supplied models implemented in TensorFlow.

Tutorial Outline

We'll be training a model to perform image classification using the classic MNIST dataset, with the neural net learning to classify digits from images. In this case, we'll be simulating federated learning, with the training data distributed across different devices.

Sections

  1. Load TFF Libraries.

  2. Explore/preprocess federated EMNIST dataset.

  3. Create a model.

  4. Set up federated averaging process for training.

  5. Analyze training metrics.

  6. Set up federated evaluation computation.

  7. Analyze evaluation metrics.

Preparing the input data

Let's start with the data. Federated learning requires a federated data set, i.e., a collection of data from multiple users. Federated data is typically non-i.i.d., which poses a unique set of challenges. Users typically have different distributions of data depending on usage patterns.

In order to facilitate experimentation, we seeded the TFF repository with a few datasets.

Here's how we can load our sample dataset.

# Code for loading federated data from TFF repository
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()

The data sets returned by load_data() are instances of tff.simulation.datasets.ClientData, an interface that allows you to enumerate the set of users, to construct a tf.data.Dataset that represents the data of a particular user, and to query the structure of individual elements.

Let's explore the dataset.

len(emnist_train.client_ids)
# Let's look at the shape of our data
example_dataset = emnist_train.create_tf_dataset_for_client(
    emnist_train.client_ids[0])

example_dataset.element_spec
# Let's select an example dataset from one of our simulated clients
example_dataset = emnist_train.create_tf_dataset_for_client(
    emnist_train.client_ids[0])

# Your code to get an example element from one client:
example_element = next(iter(example_dataset))

example_element['label'].numpy()
plt.imshow(example_element['pixels'].numpy(), cmap='gray', aspect='equal')
plt.grid(False)
_ = plt.show()

Exploring non-iid data

## Example MNIST digits for one client
f = plt.figure(figsize=(20, 4))
j = 0
for e in example_dataset.take(40):
  plt.subplot(4, 10, j+1)
  plt.imshow(e['pixels'].numpy(), cmap='gray', aspect='equal')
  plt.axis('off')
  j += 1
# Number of examples per label for a sample of clients
f = plt.figure(figsize=(12, 7))
f.suptitle("Label Counts for a Sample of Clients")
for i in range(6):
  ds = emnist_train.create_tf_dataset_for_client(emnist_train.client_ids[i])
  k = collections.defaultdict(list)
  for e in ds:
    k[e['label'].numpy()].append(e['label'].numpy())
  plt.subplot(2, 3, i+1)
  plt.title("Client {}".format(i))
  for j in range(10):
    plt.hist(k[j], density=False, bins=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
# Let's play around with the emnist_train dataset.
# Let's explore the non-iid characteristic of the example data.
for i in range(5):
  ds = emnist_train.create_tf_dataset_for_client(emnist_train.client_ids[i])
  k = collections.defaultdict(list)
  for e in ds:
    k[e['label'].numpy()].append(e['pixels'].numpy())
  f = plt.figure(i, figsize=(12, 5))
  f.suptitle("Client #{}'s Mean Image Per Label".format(i))
  for j in range(10):
    mn_img = np.mean(k[j], 0)
    plt.subplot(2, 5, j+1)
    plt.imshow(mn_img.reshape((28, 28)))  # , cmap='gray'
    plt.axis('off')

# Each client has different mean images -- each client will be nudging the model
# in their own directions.

Preprocessing the data

Since the data is already a tf.data.Dataset, preprocessing can be accomplished using Dataset transformations. See here for more detail on these transformations.

NUM_CLIENTS = 10
NUM_EPOCHS = 5
BATCH_SIZE = 20
SHUFFLE_BUFFER = 100
PREFETCH_BUFFER = 10

def preprocess(dataset):

  def batch_format_fn(element):
    """Flatten a batch of `pixels` and return the features as an `OrderedDict`."""
    return collections.OrderedDict(
        x=tf.reshape(element['pixels'], [-1, 784]),
        y=tf.reshape(element['label'], [-1, 1]))

  return dataset.repeat(NUM_EPOCHS).shuffle(SHUFFLE_BUFFER).batch(
      BATCH_SIZE).map(batch_format_fn).prefetch(PREFETCH_BUFFER)

Let's verify this worked.

preprocessed_example_dataset = preprocess(example_dataset)

sample_batch = tf.nest.map_structure(lambda x: x.numpy(),
                                     next(iter(preprocessed_example_dataset)))

sample_batch

Here's a simple helper function that will construct a list of datasets from the given set of users as an input to a round of training or evaluation.

def make_federated_data(client_data, client_ids):
  return [
      preprocess(client_data.create_tf_dataset_for_client(x))
      for x in client_ids
  ]

Now, how do we choose clients?

sample_clients = emnist_train.client_ids[0:NUM_CLIENTS]

# Your code to get the federated dataset here for the sampled clients:
federated_train_data = make_federated_data(emnist_train, sample_clients)

print('Number of client datasets: {l}'.format(l=len(federated_train_data)))
print('First dataset: {d}'.format(d=federated_train_data[0]))

Creating a model with Keras

If you are using Keras, you likely already have code that constructs a Keras model. Here's an example of a simple model that will suffice for our needs.

def create_keras_model():
  return tf.keras.models.Sequential([
      tf.keras.layers.InputLayer(input_shape=(784,)),
      tf.keras.layers.Dense(10, kernel_initializer='zeros'),
      tf.keras.layers.Softmax(),
  ])

Centralized training with Keras

## Centralized training with Keras ---------------------------------------------
# This is separate from the TFF tutorial, and demonstrates how to train a
# Keras model in a centralized fashion (contrasting training in a federated env)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Preprocess the data (these are NumPy arrays)
x_train = x_train.reshape(60000, 784).astype("float32") / 255
y_train = y_train.astype("float32")

mod = create_keras_model()
mod.compile(
    optimizer=tf.keras.optimizers.RMSprop(),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)
h = mod.fit(
    x_train,
    y_train,
    batch_size=64,
    epochs=2,
)
# ------------------------------------------------------------------------------

Federated training using a Keras model

In order to use any model with TFF, it needs to be wrapped in an instance of the tff.learning.Model interface.

More Keras metrics that you can add can be found here.

def model_fn():
  # We _must_ create a new model here, and _not_ capture it from an external
  # scope. TFF will call this within different graph contexts.
  keras_model = create_keras_model()
  return tff.learning.from_keras_model(
      keras_model,
      input_spec=preprocessed_example_dataset.element_spec,
      loss=tf.keras.losses.SparseCategoricalCrossentropy(),
      metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

Training the model on federated data

Now that we have a model wrapped as tff.learning.Model for use with TFF, we can let TFF construct a Federated Averaging algorithm by invoking the helper function tff.learning.build_federated_averaging_process, as follows.

iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
    # Add server optimizer here!
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))

What just happened? TFF has constructed a pair of federated computations and packaged them into a tff.templates.IterativeProcess in which these computations are available as a pair of properties initialize and next.

An iterative process will usually be driven by a control loop like:

def initialize():
  ...

def next(state):
  ...

iterative_process = IterativeProcess(initialize, next)

state = iterative_process.initialize()

for round in range(num_rounds):
  state = iterative_process.next(state)

Let's invoke the initialize computation to construct the server state.

state = iterative_process.initialize()

The second of the pair of federated computations, next, represents a single round of Federated Averaging, which consists of pushing the server state (including the model parameters) to the clients, on-device training on their local data, collecting and averaging model updates, and producing a new updated model at the server.

Let's run a single round of training and visualize the results. We can use the federated data we've already generated above for a sample of users.

# Run one single round of training.
state, metrics = iterative_process.next(state, federated_train_data)
print('round 1, metrics={}'.format(metrics['train']))

Let's run a few more rounds. As noted earlier, typically at this point you would pick a subset of your simulation data from a new randomly selected sample of users for each round in order to simulate a realistic deployment in which users continuously come and go, but in this interactive notebook, for the sake of demonstration we'll just reuse the same users, so that the system converges quickly.

NUM_ROUNDS = 11
for round_num in range(2, NUM_ROUNDS):
  state, metrics = iterative_process.next(state, federated_train_data)
  print('round {:2d}, metrics={}'.format(round_num, metrics['train']))

Training loss is decreasing after each round of federated training, indicating the model is converging. There are some important caveats with these training metrics, however; see the section on Evaluation later in this tutorial.
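
For reference, here is a hedged sketch (not executed in this notebook) of how sampling a new random set of clients each round might look, reusing make_federated_data, NUM_CLIENTS, and NUM_ROUNDS from above:

import random

for round_num in range(2, NUM_ROUNDS):
  # Sample a fresh set of clients for this round, instead of reusing the same ones.
  sampled_ids = random.sample(emnist_train.client_ids, NUM_CLIENTS)
  round_data = make_federated_data(emnist_train, sampled_ids)
  state, metrics = iterative_process.next(state, round_data)
  print('round {:2d}, metrics={}'.format(round_num, metrics['train']))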

Displaying model metrics in TensorBoard

Next, let's visualize the metrics from these federated computations using TensorBoard.

Let's start by creating the directory and the corresponding summary writer to write the metrics to.

#@test {"skip": true} import os import shutil logdir = "/tmp/logs/scalars/training/" if os.path.exists(logdir): shutil.rmtree(logdir) # Your code to create a summary writer: summary_writer = tf.summary.create_file_writer(logdir) state = iterative_process.initialize()

Plot the relevant scalar metrics with the same summary writer.

#@test {"skip": true} with summary_writer.as_default(): for round_num in range(1, NUM_ROUNDS): state, metrics = iterative_process.next(state, federated_train_data) for name, value in metrics['train'].items(): tf.summary.scalar(name, value, step=round_num)

Start TensorBoard with the root log directory specified above. It can take a few seconds for the data to load.

#@test {"skip": true} %tensorboard --logdir /tmp/logs/scalars/ --port=0

In order to view evaluation metrics the same way, you can create a separate eval folder, like "logs/scalars/eval", to write to TensorBoard.
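
As a hedged sketch (not part of the original notebook), this could look like the following, assuming the evaluation computation and federated_test_data constructed in the Evaluation section below:

eval_logdir = "/tmp/logs/scalars/eval/"
eval_summary_writer = tf.summary.create_file_writer(eval_logdir)

with eval_summary_writer.as_default():
  for round_num in range(1, NUM_ROUNDS):
    state, metrics = iterative_process.next(state, federated_train_data)
    eval_metrics = evaluation(state.model, federated_test_data)
    # The exact structure of eval_metrics can vary between TFF versions; here
    # we assume a flat mapping from metric names to scalar values.
    for name, value in eval_metrics.items():
      tf.summary.scalar(name, value, step=round_num)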

Evaluation

To perform evaluation on federated data, you can construct another federated computation designed for just this purpose, using the tff.learning.build_federated_evaluation function, and passing in your model constructor as an argument.

# Construct federated evaluation computation here:
evaluation = tff.learning.build_federated_evaluation(model_fn)

Now, let's compile a test sample of federated data and rerun evaluation on the test data. The data will come from a different sample of users, drawn from a distinct held-out dataset.

import random

shuffled_ids = emnist_test.client_ids.copy()
random.shuffle(shuffled_ids)
sample_clients = shuffled_ids[0:NUM_CLIENTS]

federated_test_data = make_federated_data(emnist_test, sample_clients)

len(federated_test_data), federated_test_data[0]
# Run evaluation on the test data here, using the federated model produced from
# training:
test_metrics = evaluation(state.model, federated_test_data)
str(test_metrics)

This concludes the tutorial. We encourage you to play with the parameters (e.g., batch sizes, number of users, epochs, learning rates, etc.), to modify the code above to simulate training on random samples of users in each round, and to explore the other tutorials we've developed.

Build your own FL algorithms

In the previous tutorials, we learned how to set up model and data pipelines, and use these to perform federated training using the tff.learning API.

Of course, this is only the tip of the iceberg when it comes to FL research. In this tutorial, we are going to discuss how to implement federated learning algorithms without deferring to the tff.learning API. We aim to accomplish the following:

Goals:

  • Understand the general structure of federated learning algorithms.

  • Explore the Federated Core of TFF.

  • Use the Federated Core to implement Federated Averaging directly.

Preparing the input data

We first load and preprocess the EMNIST dataset included in TFF. We essentially use the same code as in the first tutorial.

emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()
NUM_CLIENTS = 10
BATCH_SIZE = 20

def preprocess(dataset):

  def batch_format_fn(element):
    """Flatten a batch of EMNIST data and return a (features, label) tuple."""
    return (tf.reshape(element['pixels'], [-1, 784]),
            tf.reshape(element['label'], [-1, 1]))

  return dataset.batch(BATCH_SIZE).map(batch_format_fn)
client_ids = np.random.choice(emnist_train.client_ids, size=NUM_CLIENTS,
                              replace=False)

federated_train_data = [
    preprocess(emnist_train.create_tf_dataset_for_client(x))
    for x in client_ids
]

Preparing the model

We use the same model as the first tutorial, which has a single hidden layer, followed by a softmax layer.

def create_keras_model():
  return tf.keras.models.Sequential([
      tf.keras.layers.InputLayer(input_shape=(784,)),
      tf.keras.layers.Dense(10, kernel_initializer='zeros'),
      tf.keras.layers.Softmax(),
  ])

We wrap this Keras model as a tff.learning.Model.

def model_fn():
  keras_model = create_keras_model()
  return tff.learning.from_keras_model(
      keras_model,
      input_spec=federated_train_data[0].element_spec,
      loss=tf.keras.losses.SparseCategoricalCrossentropy(),
      metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

Customizing the FL Algorithm

While the tff.learning API encompasses many variants of Federated Averaging, there are many other algorithms that do not fit neatly into this framework. For example, you may want to add regularization, clipping, or more complicated algorithms such as federated GAN training. You may instead be interested in federated analytics.

For these more advanced algorithms, we'll have to write our own custom FL algorithm.

In general, FL algorithms have 4 main components:

  1. A server-to-client broadcast step.

  2. A local client update step.

  3. A client-to-server upload step.

  4. A server update step.

In TFF, we generally represent federated algorithms as an IterativeProcess. This is simply a class that contains an initialize_fn and a next_fn. The initialize_fn will be used to initialize the server, and the next_fn will perform one communication round of federated averaging. Let's write a skeleton of what our iterative process for FedAvg should look like.

First, we have an initialize function that simply creates a tff.learning.Model, and returns its trainable weights.

def initialize_fn():
  model = model_fn()
  return model.weights.trainable

This function looks good, but as we will see later, we will need to make a small modification to make it a TFF computation.

We also want to sketch the next_fn.

def next_fn(server_weights, federated_dataset):
  # Broadcast the server weights to the clients.
  server_weights_at_client = broadcast(server_weights)

  # Each client computes their updated weights.
  client_weights = client_update(federated_dataset, server_weights_at_client)

  # The server averages these updates.
  mean_client_weights = mean(client_weights)

  # The server updates its model.
  server_weights = server_update(mean_client_weights)

  return server_weights

We'll focus on implementing these four components separately. We'll first focus on the parts that can be implemented in pure TensorFlow, namely the client and server update steps.

TensorFlow Blocks

Client update

We will use our tff.learning.Model to do client training in essentially the same way you would train a TF model. In particular, we will use tf.GradientTape to compute the gradient on batches of data, then apply these gradients using a client_optimizer.

Note that each tff.learning.Model instance has a weights attribute with two sub-attributes:

  • trainable: A list of the tensors corresponding to trainable layers.

  • non_trainable: A list of the tensors corresponding to non-trainable layers.

For our purposes, we will only use the trainable weights (as our model only has those!).

@tf.function
def client_update(model, dataset, server_weights, client_optimizer):
  """Performs training (using the server model weights) on the client's dataset."""
  # Initialize the client model with the current server weights.
  client_weights = model.weights.trainable
  # Assign the server weights to the client model.
  tf.nest.map_structure(lambda x, y: x.assign(y),
                        client_weights, server_weights)

  # Use the client_optimizer to update the local model.
  for batch in dataset:
    with tf.GradientTape() as tape:
      # Compute a forward pass on the batch of data
      outputs = model.forward_pass(batch)

    # Compute the corresponding gradient
    grads = tape.gradient(outputs.loss, client_weights)
    grads_and_vars = zip(grads, client_weights)

    # Apply the gradient using a client optimizer.
    client_optimizer.apply_gradients(grads_and_vars)

  return client_weights

Server Update

The server update will require even less effort. We will implement vanilla federated averaging, in which we simply replace the server model weights by the average of the client model weights. Again, we will only focus on the trainable weights.

@tf.function
def server_update(model, mean_client_weights):
  """Updates the server model weights as the average of the client model weights."""
  model_weights = model.weights.trainable
  # Assign the mean client weights to the server model.
  tf.nest.map_structure(lambda x, y: x.assign(y),
                        model_weights, mean_client_weights)
  return model_weights

Note that the code snippet above is clearly overkill, as we could simply return mean_client_weights. However, more advanced implementations of Federated Averaging could use mean_client_weights with more sophisticated techniques, such as momentum or adaptivity.
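
As an illustration, here is a hedged sketch (not this tutorial's algorithm) of a server update with momentum: it treats the difference between the current server weights and the mean client weights as a pseudo-gradient and applies it with a Keras optimizer.

# A hedged sketch of a momentum-based server update (an illustrative variant).
# `server_optimizer` is assumed to be created outside, e.g.
# tf.keras.optimizers.SGD(learning_rate=1.0, momentum=0.9).
@tf.function
def server_update_with_momentum(model, mean_client_weights, server_optimizer):
  model_weights = model.weights.trainable
  # Pseudo-gradient: current server weights minus the averaged client weights.
  pseudo_gradient = tf.nest.map_structure(
      lambda w, avg: w - avg, model_weights, mean_client_weights)
  # With learning rate 1.0 and no momentum this recovers vanilla averaging;
  # momentum adds server-side smoothing across rounds.
  server_optimizer.apply_gradients(zip(pseudo_gradient, model_weights))
  return model_weights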

So far, we've only written pure TensorFlow code. This is by design, as TFF allows you to use much of the TensorFlow code you're already familiar with. However, now we have to specify the orchestration logic, that is, the logic that dictates what the server broadcasts to the client, and what the client uploads to the server.

This will require the "Federated Core" of TFF.

Introduction to the Federated Core

The Federated Core (FC) is a set of lower-level interfaces that serve as the foundation for the tff.learning API. However, these interfaces are not limited to learning. In fact, they can be used for analytics and many other computations over distributed data.

At a high level, the Federated Core is a development environment that makes it possible to compactly express program logic combining TensorFlow code with distributed communication operators (such as distributed sums and broadcasts). The goal is to give researchers and practitioners explicit control over the distributed communication in their systems, without requiring them to specify system implementation details (such as point-to-point network message exchanges).

One key point is that TFF is designed for privacy-preservation. Therefore, it allows explicit control over where data resides, to prevent unwanted accumulation of data at the centralized server location.

Federated data

Similar to "Tensor" concept in TensorFlow, which is one of the fundamental concepts, a key concept in TFF is "federated data", which refers to a collection of data items hosted across a group of devices in a distributed system (eg. client datasets, or the server model weights). We model the entire collection of data items across all devices as a single federated value.

For example, suppose we have client devices that each have a float representing the temperature of a sensor. We could represent it as a federated float by

federated_float_on_clients = tff.type_at_clients(tf.float32)

Federated types are specified by a type T of their member constituents (e.g., tf.float32) and a group G of devices. We will focus on the cases where G is either tff.CLIENTS or tff.SERVER. Such a federated type is represented as {T}@G, as shown below.

str(federated_float_on_clients)

Why do we care so much about placements? A key goal of TFF is to enable writing code that could be deployed on a real distributed system. This means that it is vital to reason about which subsets of devices execute which code, and where different pieces of data reside.

TFF focuses on three things: data, where the data is placed, and how the data is being transformed. The first two are encapsulated in federated types, while the last is encapsulated in federated computations.

Federated computations

TFF is a strongly-typed functional programming environment whose basic units are federated computations. These are pieces of logic that accept federated values as input, and return federated values as output.

For example, suppose we wanted to average the temperatures on our client sensors. We could define the following (using our federated float):

@tff.federated_computation(tff.type_at_clients(tf.float32))
def get_average_temperature(client_temperatures):
  return tff.federated_mean(client_temperatures)

You might ask, how is this different from the tf.function decorator in TensorFlow? The key difference is that the code generated by tff.federated_computation is neither TensorFlow nor Python code; it is a specification of a distributed system in an internal platform-independent glue language.

While this may sound complicated, you can think of TFF computations as functions with well-defined type signatures. These type signatures can be directly queried.

str(get_average_temperature.type_signature)

This tff.federated_computation accepts arguments of federated type {float32}@CLIENTS, and returns values of federated type {float32}@SERVER. Federated computations may also go from server to client, from client to client, or from server to server. Federated computations can also be composed like normal functions, as long as their type signatures match up.

To support development, TFF allows you to invoke a tff.federated_computation as a Python function. For example, we can call

get_average_temperature([68.5, 70.3, 69.8])

Non-eager computations and TensorFlow

There are two key restrictions to be aware of. First, when the Python interpreter encounters a tff.federated_computation decorator, the function is traced once and serialized for future use. Therefore, TFF computations are fundamentally non-eager. This behavior is somewhat analogous to that of the tf.function decorator in TensorFlow.
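
To make the tracing point concrete, here is a small hedged example (not from the original notebook): the Python body runs once, when the decorator serializes the computation, and not on later invocations.

@tff.federated_computation(tff.type_at_clients(tf.float32))
def get_total_temperature(client_temperatures):
  # This print executes once, at tracing/serialization time.
  print('Tracing get_total_temperature')
  return tff.federated_sum(client_temperatures)

get_total_temperature([68.5, 70.3, 69.8])  # No print output on invocation.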

Second, a federated computation can only consist of federated operators (such as tff.federated_mean); it cannot contain TensorFlow operations. TensorFlow code must be confined to blocks decorated with tff.tf_computation. Most ordinary TensorFlow code can be directly decorated, such as the following function that takes a number and adds 0.5 to it.

@tff.tf_computation(tf.float32)
def add_half(x):
  return tf.add(x, 0.5)

These also have type signatures, but without placements. For example, we can call

str(add_half.type_signature)

Here we see an important difference between tff.federated_computation and tff.tf_computation. The former has explicit placements, while the latter does not.

We can use tff.tf_computation blocks in federated computations by specifying placements. Let's create a function that adds half, but only to federated floats at the clients. We can do this by using tff.federated_map, which applies a given tff.tf_computation, while preserving the placement.

@tff.federated_computation(tff.type_at_clients(tf.float32))
def add_half_on_clients(x):
  return tff.federated_map(add_half, x)

This function is almost identical to add_half, except that it only accepts values with placement at tff.CLIENTS, and returns values with the same placement. We can see this in its type signature:

str(add_half_on_clients.type_signature)
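
As noted earlier, federated computations compose whenever their type signatures line up. Here is a small hedged example (not from the original notebook) that chains the two computations defined above:

@tff.federated_computation(tff.type_at_clients(tf.float32))
def get_average_temperature_plus_half(client_temperatures):
  # add_half_on_clients returns {float32}@CLIENTS, which is exactly what
  # get_average_temperature accepts, so the two computations compose.
  return get_average_temperature(add_half_on_clients(client_temperatures))

get_average_temperature_plus_half([68.5, 70.3, 69.8])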

In summary:

  • TFF operates on federated values.

  • Each federated value has a federated type, with a type (e.g., tf.float32) and a placement (e.g., tff.CLIENTS).

  • Federated values can be transformed using federated computations, which must be decorated with tff.federated_computation and a federated type signature.

  • TensorFlow code must be contained in blocks with tff.tf_computation decorators.

  • These blocks can then be incorporated into federated computations.

Building your own FL Algorithm (Part 2)

Now that we've peeked at the Federated Core, we can build our own federated learning algorithm. Remember that above, we defined an initialize_fn and next_fn for our algorithm. The next_fn will make use of the client_update and server_update we defined using pure TensorFlow code.

However, in order to make our algorithm a federated computation, we will need both the next_fn and initialize_fn to be tff.federated_computations.

TensorFlow Federated blocks

Creating the initialization computation

The initialize function will be quite simple: We will create a model using model_fn. However, remember that we must separate out our TensorFlow code using tff.tf_computation.

@tff.tf_computation
def server_init():
  model = model_fn()
  return model.weights.trainable

We can then pass this directly into a federated computation using tff.federated_value.

@tff.federated_computation
def initialize_fn():
  return tff.federated_value(server_init(), tff.SERVER)

Creating the next_fn

We now use our client and server update code to write the actual algorithm. We will first turn our client_update into a tff.tf_computation that accepts a client dataset and server weights, and outputs an updated client weights tensor.

We will need the corresponding types to properly decorate our function. Luckily, the type of the server weights can be extracted directly from our model.

whimsy_model = model_fn()
tf_dataset_type = tff.SequenceType(whimsy_model.input_spec)

Let's look at the dataset type signature. Remember that we took 28 by 28 images (with integer labels) and flattened them.

str(tf_dataset_type)

We can also extract the model weights type by using our server_init function above.

model_weights_type = server_init.type_signature.result

Examining the type signature, we'll be able to see the architecture of our model!

str(model_weights_type)

We can now create our tff.tf_computation for the client update.

@tff.tf_computation(tf_dataset_type, model_weights_type)
def client_update_fn(tf_dataset, server_weights):
  model = model_fn()
  client_optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
  return client_update(model, tf_dataset, server_weights, client_optimizer)

The tff.tf_computation version of the server update can be defined in a similar way, using types we've already extracted.

@tff.tf_computation(model_weights_type)
def server_update_fn(mean_client_weights):
  model = model_fn()
  return server_update(model, mean_client_weights)

Last, but not least, we need to create the tff.federated_computation that brings this all together. This function will accept two federated values, one corresponding to the server weights (with placement tff.SERVER), and the other corresponding to the client datasets (with placement tff.CLIENTS).

Note that both these types were defined above! We simply need to give them the proper placement using tff.type_at_server and tff.type_at_clients.

federated_server_type = tff.type_at_server(model_weights_type)
federated_dataset_type = tff.type_at_clients(tf_dataset_type)

Remember the 4 elements of an FL algorithm?

  1. A server-to-client broadcast step.

  2. A local client update step.

  3. A client-to-server upload step.

  4. A server update step.

Now that we've built up the above, each part can be compactly represented as a single line of TFF code. This simplicity is why we had to take extra care to specify things such as federated types!

@tff.federated_computation(federated_server_type, federated_dataset_type)
def next_fn(server_weights, federated_dataset):
  # Broadcast the server weights to the clients.
  server_weights_at_client = tff.federated_broadcast(server_weights)

  # Each client computes their updated weights.
  client_weights = tff.federated_map(
      client_update_fn, (federated_dataset, server_weights_at_client))

  # The server averages these updates.
  mean_client_weights = tff.federated_mean(client_weights)

  # The server updates its model.
  server_weights = tff.federated_map(server_update_fn, mean_client_weights)

  return server_weights

We now have a tff.federated_computation for both the algorithm initialization, and for running one step of the algorithm. To finish our algorithm, we pass these into tff.templates.IterativeProcess.

federated_algorithm = tff.templates.IterativeProcess(
    initialize_fn=initialize_fn,
    next_fn=next_fn
)

Let's look at the type signature of the initialize and next functions of our iterative process.

str(federated_algorithm.initialize.type_signature)

This reflects the fact that federated_algorithm.initialize is a no-arg function that returns a single-layer model (with a 784-by-10 weight matrix, and 10 bias units).

str(federated_algorithm.next.type_signature)

Here, we see that federated_algorithm.next accepts a server model and client data, and returns an updated server model.

Evaluating the algorithm

Let's run a few rounds, and see how the loss changes. First, we will define an evaluation function using the centralized approach discussed in the second tutorial.

We first create a centralized evaluation dataset, and then apply the same preprocessing we used for the training data.

Note that we only take the first 1000 elements for reasons of computational efficiency, but typically we'd use the entire test dataset.

central_emnist_test = emnist_test.create_tf_dataset_from_all_clients().take(1000)
central_emnist_test = preprocess(central_emnist_test)

Next, we write a function that accepts a server state, and uses Keras to evaluate on the test dataset. If you're familiar with tf.Keras, this will all look familiar, though note the use of set_weights!

def evaluate(server_state):
  keras_model = create_keras_model()
  keras_model.compile(
      loss=tf.keras.losses.SparseCategoricalCrossentropy(),
      metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]
  )
  keras_model.set_weights(server_state)
  keras_model.evaluate(central_emnist_test)

Now, let's initialize our algorithm and evaluate on the test set.

server_state = federated_algorithm.initialize()
evaluate(server_state)

Let's train for a few rounds and see if anything changes.

for round in range(15):
  server_state = federated_algorithm.next(server_state, federated_train_data)
evaluate(server_state)

We see a slight decrease in the loss function. While the improvement is small, note that we've only performed 15 training rounds, and on a small subset of clients. To see better results, we may have to do hundreds if not thousands of rounds.

Modifying our algorithm

At this point, let's stop and think about what we've accomplished. We've implemented Federated Averaging directly by combining pure TensorFlow code (for the client and server updates) with federated computations from the Federated Core of TFF.

To perform more sophisticated learning, we can simply alter what we have above. In particular, by editing the pure TF code above, we can change how the client performs training, or how the server updates its model.

Challenge: Add gradient clipping to the client_update function.
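
If you want a starting point, here is one hedged sketch (one possible approach among many) that clips the gradients by their global norm before applying them in the client update:

# A hedged hint for the challenge, not the only solution: clip gradients by
# global norm inside the client update loop.
@tf.function
def client_update_with_clipping(model, dataset, server_weights, client_optimizer,
                                clip_norm=1.0):
  client_weights = model.weights.trainable
  # Assign the server weights to the client model.
  tf.nest.map_structure(lambda x, y: x.assign(y),
                        client_weights, server_weights)

  for batch in dataset:
    with tf.GradientTape() as tape:
      outputs = model.forward_pass(batch)
    grads = tape.gradient(outputs.loss, client_weights)
    # Clip the gradients to a maximum global norm before applying them.
    grads, _ = tf.clip_by_global_norm(grads, clip_norm)
    client_optimizer.apply_gradients(zip(grads, client_weights))

  return client_weights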