Machine Learning Course Lab 9 (Solved)
Problem 1 (Adversarial training for linear models):
It is often very insightful to analyze what a method corresponds to in the simple setting of linear models.
Assume we have input points x_i ∈ R^d and binary labels y_i ∈ {−1, 1}. Let ℓ be a monotonically decreasing margin-based loss function, for example the hinge loss ℓ(z) = max{0, 1 − z} or the logistic loss ℓ(z) = log(1 + exp(−z)) that you have seen before.
Consider the adversarial training objective for a linear model f(x) = w⊤x with respect to ℓ2 adversarial perturbations:

min_w (1/n) Σ_{i=1}^n max_{‖δ_i‖_2 ≤ ε} ℓ(y_i w⊤(x_i + δ_i))

• Find a closed-form solution of the inner maximization problem and the minimizer.
• In the case of the hinge loss, ℓ(z) = max{0, 1 − z}, what is the connection between ℓ2 adversarial training and the primal formulation of the soft-margin SVM?
• If instead of ℓ2 adversarial training we performed ℓ∞ adversarial training, how would the solution of the inner maximization problem change? Does the maximizer for ℓ∞ perturbations resemble the Fast Gradient Sign Method (FGSM)?
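For a linear model and a decreasing loss, the inner maximization can be checked numerically. The sketch below states the standard closed forms as assumptions (deriving them is the exercise) and verifies with NumPy that the analytic perturbation attains the analytic worst-case hinge loss, and that no random feasible ℓ2 perturbation exceeds it; all variable names here are illustrative.

```python
import numpy as np

# Assumed closed forms to verify (the derivation is the exercise):
#   l2:    delta* = -eps * y * w / ||w||_2, worst-case loss l(y w^T x - eps ||w||_2)
#   l_inf: delta* = -eps * y * sign(w),     worst-case loss l(y w^T x - eps ||w||_1)

def hinge(z):
    return np.maximum(0.0, 1.0 - z)

rng = np.random.default_rng(0)
d, eps = 5, 0.3
w, x = rng.normal(size=d), rng.normal(size=d)
y = 1.0

# l2 case: the analytic maximizer attains the analytic worst-case loss ...
delta_l2 = -eps * y * w / np.linalg.norm(w)
worst_l2 = hinge(y * (w @ (x + delta_l2)))
assert np.isclose(worst_l2, hinge(y * (w @ x) - eps * np.linalg.norm(w)))

# ... and no random feasible perturbation does better.
for _ in range(1000):
    u = rng.normal(size=d)
    delta = eps * rng.uniform() * u / np.linalg.norm(u)
    assert hinge(y * (w @ (x + delta))) <= worst_l2 + 1e-12

# l_inf case: the maximizer is a signed step, the same direction FGSM
# takes for a linear model.
delta_linf = -eps * y * np.sign(w)
worst_linf = hinge(y * (w @ (x + delta_linf)))
assert np.isclose(worst_linf, hinge(y * (w @ x) - eps * np.abs(w).sum()))
print("closed-form solutions verified")
```

Note that since ‖w‖_1 ≥ ‖w‖_2, the ℓ∞ adversary (with the same ε) reduces the margin at least as much as the ℓ2 adversary.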
Problem 2 (Adversarial training on MNIST):
In this problem you will:
1. Learn how to make small modifications to handwritten-digit images that cause dramatic errors in ML models, even though humans can still recognize these adversarial examples.
2. Implement a simple defense against this attack.
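The notebook implements these two pieces with a PyTorch network on MNIST. As a self-contained illustration of the same idea, here is a minimal NumPy sketch on synthetic binary data with a logistic-regression model: an FGSM attack (a signed step along the input gradient) and the simple defense of training on the attacked inputs. The toy data and all function names are illustrative assumptions, not the notebook's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_w(w, x, y):
    # Gradient of the logistic loss log(1 + exp(-y w^T x)) w.r.t. the weights.
    return -y * sigmoid(-y * (w @ x)) * x

def fgsm(w, x, y, eps):
    # FGSM: perturb x by eps times the sign of the input gradient (l_inf attack).
    grad_x = -y * sigmoid(-y * (w @ x)) * w
    return x + eps * np.sign(grad_x)

# Toy two-class data standing in for MNIST (illustrative only).
n, d = 400, 20
y = rng.choice([-1.0, 1.0], size=n)
X = 0.6 * y[:, None] + rng.normal(size=(n, d))

def train(adversarial, eps=0.3, lr=0.2, epochs=50):
    w = np.zeros(d)
    for _ in range(epochs):
        g = np.zeros(d)
        for xi, yi in zip(X, y):
            # Simple defense: replace each input by its FGSM perturbation.
            x_used = fgsm(w, xi, yi, eps) if adversarial else xi
            g += grad_w(w, x_used, yi)
        w -= lr * g / n
    return w

def accuracy(w, attack_eps=0.0):
    # Accuracy on FGSM-perturbed inputs (attack_eps=0 gives clean accuracy).
    X_eval = np.array([fgsm(w, xi, yi, attack_eps) for xi, yi in zip(X, y)])
    return float(np.mean(np.sign(X_eval @ w) == y))

w_std = train(adversarial=False)
w_adv = train(adversarial=True)
print("clean:", accuracy(w_std),
      "| std under FGSM:", accuracy(w_std, 0.3),
      "| adv-trained under FGSM:", accuracy(w_adv, 0.3))
```

In the MNIST notebook the same structure appears with autograd: the attack takes the sign of the loss gradient with respect to the image tensor, and the defense minimizes the loss on the perturbed batch. (On this linear toy the gap between the two trained models can be small; with deep networks the effect is much more pronounced.)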
Setup. It is easiest to run this notebook in Google Colab, where you can use a free GPU to train the models faster. If you want to run the notebook locally, you can also use template/ex09.ipynb, but expect much longer running times if you don't have a GPU.
1. Open the Colab link for lab 09:
https://colab.research.google.com/github/epfml/ML_course/blob/master/labs/ex09/template/ex09.ipynb
2. To save your progress, click on “File > Save a Copy in Drive” to get your own copy of the Notebook.
3. Click "Connect" at the top right to make the notebook executable (or use "Open in playground").
4. Start solving the missing parts.