1) Describe in a sentence the key difference between model poisoning and data poisoning in the federated learning setting.

2) Adversarial examples are inputs to neural networks that look similar to "normal" inputs but are misclassified by the network. The most efficient methods of generating adversarial examples require access to the weights of the model, and often require expending large amounts of computation. Explain in a sentence how federated model poisoning can be used to make constructing adversarial examples easier.
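
For context on question 2, here is a minimal sketch of one weight-dependent (white-box) attack, the Fast Gradient Sign Method, applied to a binary logistic-regression model. The model, parameters, and epsilon value are illustrative assumptions and not part of the question; the point is that computing the input gradient requires access to the weights `w`.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method for binary logistic regression.

    With cross-entropy loss L = -[y log p + (1 - y) log(1 - p)]
    and p = sigmoid(w . x + b), the input gradient is
    dL/dx = (p - y) * w — so the attack needs white-box access to w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w              # input gradient, built from the weights
    return x + eps * np.sign(grad_x)  # small perturbation that raises the loss

# Illustrative toy model and input (assumed, not from the question).
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
y = 1.0  # true label

x_adv = fgsm(x, y, w, b, eps=0.05)
# Each coordinate moves by at most eps, so x_adv stays close to x
# while the loss on the true label increases.
assert np.all(np.abs(x_adv - x) <= 0.05 + 1e-12)
```

An attacker who can only query the model (black box) would instead need many queries or a surrogate model, which is what makes the white-box setting of question 2 comparatively cheap.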