[0. Summary of both papers and how they compare at a high level.]
1. Could a prediction API attack be used to glean information about the data a model was trained on? Explain.
2. Could changing the training data have an outcome similar to the robustness attack on neural networks (i.e., change a prediction from x to x')? Explain.
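
As context for question 1, below is a minimal, hypothetical sketch of how an attacker might probe a black-box prediction API that returns confidence scores: records a model memorized during training often receive unusually confident, correct predictions, which is the intuition behind membership-style inference. The function names, the `query_api` callback, and the 0.95 threshold are illustrative assumptions only and do not come from either paper.

```python
# Hypothetical sketch: probing a black-box prediction API for membership signals.
# Nothing here is taken from either paper's code; names and the threshold are illustrative.

def likely_in_training_set(record, true_label, query_api, threshold=0.95):
    """Return True if the API's prediction for `record` is suspiciously confident and correct.

    `query_api` is any caller-supplied function that sends one record to the
    prediction API and returns a list of per-class confidence scores. The
    intuition is that models tend to be more confident on data they memorized
    during training, so a high-confidence correct prediction hints at membership.
    """
    confidences = query_api(record)
    predicted = max(range(len(confidences)), key=lambda i: confidences[i])
    return predicted == true_label and confidences[predicted] >= threshold


if __name__ == "__main__":
    # Toy stand-in for a real API client: a fixed lookup table of confidence vectors.
    fake_responses = {
        "alice": [0.99, 0.01],  # very confident -> flagged as a likely training record
        "bob":   [0.55, 0.45],  # uncertain -> not flagged
    }
    fake_api = lambda record: fake_responses[record]

    for name, label in [("alice", 0), ("bob", 0)]:
        print(name, likely_in_training_set(name, label, fake_api))
```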