1) Data poisoning changes only the training data that the malicious user contributes, whereas model poisoning directly alters the model's weights. 2) The attacker could poison the training process so that a particular image is misclassified, a targeted attack.
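
The targeted attack in 2) can be sketched with a toy label-flipping example; the function name, dataset, and parameters below are illustrative assumptions, not taken from any specific attack implementation:

```python
import numpy as np

def poison_labels(y, target_class, new_label, fraction, rng):
    """Flip `fraction` of the labels of `target_class` to `new_label`.

    Illustrative helper: a malicious contributor relabels some of its
    training examples so the trained model learns the wrong mapping.
    """
    y = y.copy()
    idx = np.flatnonzero(y == target_class)        # candidate examples
    n_poison = int(len(idx) * fraction)            # how many to corrupt
    chosen = rng.choice(idx, size=n_poison, replace=False)
    y[chosen] = new_label                          # mislabel them
    return y

rng = np.random.default_rng(0)
y_clean = np.array([0] * 50 + [1] * 50)            # toy binary labels
y_poisoned = poison_labels(y_clean, target_class=1, new_label=0,
                           fraction=0.2, rng=rng)
print((y_poisoned != y_clean).sum())               # → 10 flipped labels
```

Note the contrast with model poisoning: here only the contributed labels change, while the training algorithm and the model weights themselves are untouched.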