Adversarial attacks against CIFAR-10 and MNIST. The notebooks use IBM's Adversarial Robustness Toolbox (ART) to generate adversarial examples that attack PyTorch models. More attack methods and datasets may be added in the future.
antoninodimaggio/PyTorch-Adversarial-Examples