Reviewing the privacy-utility trade-off in federated learning
In the previous section, we examined the effectiveness of federated learning and looked at model performance over multiple communication rounds. However, to quantify that effectiveness, we need to compare it against two benchmarks:
- A global model trained on the entire dataset, with no federation involved
- A local model trained only on its own data
The differences in accuracy across these three cases (federated, global-only, and local-only) indicate the trade-offs we make and the gains we achieve. In the previous section, we looked at the accuracy obtained via federated learning. To understand the utility-privacy trade-off, let us discuss the two extreme cases – a fully global and a fully local model.
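The three-way comparison above can be sketched in a few lines of NumPy. The following is a minimal, self-contained illustration – not a production federated learning setup – that trains a plain logistic regression in each of the three settings on synthetic two-client data; the helper names (`make_client`, `train`, and so on) and the FedAvg-style unweighted averaging are assumptions for the sake of the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_client(n, shift):
    # Synthetic binary classification data; each client's features are shifted,
    # mimicking non-identical data distributions across parties.
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(float)
    return X, y

def train(X, y, w=None, epochs=100, lr=0.1):
    # Plain logistic regression fitted by full-batch gradient descent.
    if w is None:
        w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w = w - lr * Xb.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(((Xb @ w > 0).astype(float) == y).mean())

clients = [make_client(200, 0.0), make_client(200, 1.0)]
X_all = np.vstack([X for X, _ in clients])
y_all = np.concatenate([y for _, y in clients])

# (1) Global model: all data pooled centrally, no privacy.
w_global = train(X_all, y_all)

# (2) Local model: client 0 trains only on its own data.
w_local = train(*clients[0])

# (3) Federated model: a few local epochs per round, then average
#     the clients' weights (a simple, unweighted FedAvg-style update).
w_fed = np.zeros(3)
for _ in range(20):
    updates = [train(X, y, w=w_fed.copy(), epochs=5) for X, y in clients]
    w_fed = np.mean(updates, axis=0)

print("global:", accuracy(w_global, X_all, y_all))
print("local :", accuracy(w_local, X_all, y_all))
print("fed   :", accuracy(w_fed, X_all, y_all))
```

Evaluating all three models on the pooled test distribution makes the trade-off visible: the global model sets the utility ceiling, the local model typically degrades on data it never saw, and the federated model usually lands in between.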
Global model (no privacy)
When we train a global model directly, we use all the data to train a single model. Thus, all parties involved would be publicly sharing their data with each other. The...