This repository contains the core adversarial perturbation algorithms used in Hope:RE to protect digital art from unauthorized AI training.
The algorithms implement CLIP-based adversarial attacks that inject imperceptible noise into images, specifically targeting the training pipelines of diffusion models and other generative AI architectures.
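As a rough illustration of the idea (a toy sketch, not the actual Hope:RE implementation), a single FGSM-style step nudges every pixel by at most eps in the direction that moves the image's embedding toward a decoy, which is what keeps the noise imperceptible. A random linear map stands in for the CLIP image encoder here:

```python
import numpy as np

def fgsm_step(image, grad, eps=4 / 255):
    """One FGSM-style step: shift each pixel by +/-eps along the loss
    gradient, then clip back to the valid [0, 1] pixel range."""
    perturbed = image - eps * np.sign(grad)  # descend: pull embedding toward the target
    return np.clip(perturbed, 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.uniform(0.1, 0.9, size=(8, 8, 3))   # toy "image" in [0, 1]
encoder = rng.normal(size=(image.size, 16))     # stand-in for CLIP's image encoder
target = rng.normal(size=16)                    # embedding of a decoy style/concept

def grad_of_embedding_loss(x):
    # Gradient of 0.5 * ||encode(x) - target||^2 w.r.t. the image: it tells us
    # which pixel changes move the embedding most strongly toward the decoy.
    residual = x.reshape(-1) @ encoder - target
    return (encoder @ residual).reshape(x.shape)

adv = fgsm_step(image, grad_of_embedding_loss(image))
print(np.abs(adv - image).max())  # bounded by eps
```

In the real pipeline the gradient comes from backpropagating through CLIP (via JAX, per notebook 1) rather than from a linear toy encoder, but the eps-bounded update is the same basic mechanism.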
The notebooks follow a sequential pipeline for development, testing, and ONNX export:
- 0_setup_colab.ipynb: Environment setup and dependency installation for Google Colab.
- 1_clip_to_jax.ipynb: Integration of OpenAI CLIP models with the JAX framework for high-performance gradient computation.
- 2_noise_algorithm.ipynb: Implementation of the base adversarial perturbation engine.
- 3_glaze_algorithm.ipynb: Specialized algorithm for style protection (inspired by Glaze).
- 4_nightshade_algorithm.ipynb: Specialized algorithm for concept "poisoning" (inspired by Nightshade).
- 5_export_onnx.ipynb: Conversion of JAX/Python models to ONNX format for deployment in the Hope:RE desktop application.
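The perturbation engines in notebooks 2 through 4 can be sketched (again with toy stand-ins, not the production code) as projected gradient descent: repeat small signed-gradient steps, and after each one project the accumulated noise back into an L-infinity ball of radius eps so the image stays visually unchanged:

```python
import numpy as np

def pgd_attack(image, grad_fn, eps=8 / 255, step=2 / 255, iters=10):
    """Iterative PGD-style perturbation: take small signed-gradient steps,
    projecting the total noise back into an L-infinity ball of radius eps."""
    adv = image.copy()
    for _ in range(iters):
        adv = adv - step * np.sign(grad_fn(adv))       # descend toward the decoy embedding
        adv = image + np.clip(adv - image, -eps, eps)  # project onto the eps-ball
        adv = np.clip(adv, 0.0, 1.0)                   # remain a valid image
    return adv

rng = np.random.default_rng(1)
image = rng.uniform(0.2, 0.8, size=(8, 8, 3))
encoder = rng.normal(size=(image.size, 16))  # stand-in for the CLIP image encoder
target = rng.normal(size=16)                 # embedding of a decoy style/concept

def toward_target(x):
    # Gradient of 0.5 * ||encode(x) - target||^2: pulls the embedding
    # toward the decoy (a Nightshade-like objective; a Glaze-like variant
    # would instead push the embedding away from the original style).
    return (encoder @ (x.reshape(-1) @ encoder - target)).reshape(x.shape)

adv = pgd_attack(image, toward_target)
```

Whether the loop pulls the embedding toward a decoy concept or pushes it away from the artist's style is the main thing that distinguishes the Nightshade-inspired and Glaze-inspired notebooks; the projection step is common to both.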
Google Colab with a GPU runtime (T4, A100, or V100) is recommended for running these notebooks.
Primary dependencies include:
- JAX & Flax
- PyTorch (for CLIP weight loading)
- OpenAI CLIP
- ONNX & ONNX Runtime
The full list of requirements can be found in requirements.txt.
This project is licensed under the MIT License.