Unified AI-generated content detection for images, videos, and deepfake audio.
A powerful, modular Python toolkit that combines image/video AI detection (CLIP & ViT) and audio deepfake detection (spectral analysis & RandomForest) into one unified CLI and web interface.
> [!WARNING]
> Detection results may not always be accurate. AI‑generated content detection is an evolving field — always verify with additional methods when necessary.
| Module | Capabilities |
|---|---|
| 🖼️ Image Detection | CLIP & ViT model classification, noise analysis, texture analysis, Fourier Transform pattern detection, metadata inspection, invisible watermark detection |
| 🎬 Video Detection | Frame‑by‑frame analysis using the same image detection pipeline |
| 🎵 Audio Deepfake Detection | Spectral feature extraction (MFCCs, spectral centroid, chroma, ZCR) with a RandomForest classifier |
| 🌐 Gradio Web UI | Tabbed interface for image and audio detection — upload & analyze in your browser |
| ⌨️ CLI | Scriptable command‑line interface for quick or batch processing |
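To illustrate the Fourier Transform pattern detection listed above: many generators leave periodic upsampling artifacts that appear as excess energy at high spatial frequencies. The sketch below is a hypothetical, simplified illustration of the idea — `high_freq_energy_ratio` is not the toolkit's actual implementation.

```python
import numpy as np

def high_freq_energy_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a low-frequency disc.

    A simplified stand-in for Fourier-based artifact analysis:
    unusually high values can hint at periodic generator artifacts.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized distance of each bin from the spectrum's center
    dist = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    total = spectrum.sum()
    high = spectrum[dist > cutoff].sum()
    return float(high / total) if total > 0 else 0.0

# Example on random noise (roughly flat spectrum apart from the DC term)
rng = np.random.default_rng(0)
ratio = high_freq_energy_ratio(rng.random((64, 64)))
print(f"high-frequency energy ratio: {ratio:.3f}")
```

A real detector would compare this statistic against distributions measured on known-real and known-generated images rather than using a fixed threshold.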
| Requirement | Details |
|---|---|
| Python | >= 3.11 |
| PyTorch | >= 2.5 (CUDA‑enabled GPU recommended for faster inference) |
| OS | Windows / Linux / macOS |
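Before installing, you can confirm the interpreter meets the Python requirement with a generic check (this snippet is not part of the toolkit):

```python
import sys

# The project requires Python >= 3.11
meets_requirement = sys.version_info >= (3, 11)
print(f"Python {sys.version.split()[0]} - meets >=3.11 requirement: {meets_requirement}")
```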
```bash
git clone https://github.com/iamsrishanth/AI-Detector.git
cd AI-Detector
```

Create and activate a virtual environment:

```bash
python -m venv venv

# Windows
venv\Scripts\activate

# Linux / macOS
source venv/bin/activate
```

Using pip:

```bash
pip install -e .
```

Using uv (fast Python package manager):

```bash
uv sync
```

Using requirements.txt:

```bash
pip install -r requirements.txt
```

> [!TIP]
> If you have an NVIDIA GPU, ensure you install the CUDA‑enabled version of PyTorch for significantly faster inference. Visit pytorch.org/get-started for platform-specific instructions.
```bash
# Analyze an image
ai-detect --image path/to/image.jpg

# Analyze a video
ai-detect --video path/to/video.mp4

# Analyze an audio file
ai-detect --audio path/to/audio.wav

# Launch the Gradio web interface
ai-detect --gui
```

```python
from ai_detector.image import ImageDetector
from ai_detector.audio import AudioProcessor, DeepfakeDetector

# ── Image / Video Detection ──────────────────────────────
detector = ImageDetector()
detector.load_models()
result = detector.process_image("photo.jpg")  # single image
result = detector.process_video("clip.mp4")   # video

# ── Audio Deepfake Detection ─────────────────────────────
processor = AudioProcessor()
det = DeepfakeDetector()
det.load_model()
features = processor.extract_features("audio.wav")
result = det.predict(features)
print(f"Deepfake probability: {result['deepfake_probability'] * 100:.1f}%")
```

Launch the web interface and open it in your browser:

```bash
ai-detect --gui
```

The UI provides separate tabs for Image Detection and Audio Detection — simply upload a file and click Analyze.
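To make the audio pipeline above more concrete, here is a minimal sketch of one of the spectral features (zero-crossing rate) in plain NumPy. The real `AudioProcessor` uses Librosa; `zero_crossing_rate` here is an illustrative helper, not the toolkit's API:

```python
import numpy as np

def zero_crossing_rate(signal: np.ndarray, frame_length: int = 1024) -> np.ndarray:
    """Per-frame fraction of consecutive samples that change sign."""
    n_frames = len(signal) // frame_length
    frames = signal[: n_frames * frame_length].reshape(n_frames, frame_length)
    # signbit avoids double-counting samples that are exactly zero
    signs = np.signbit(frames).astype(np.int8)
    changes = np.abs(np.diff(signs, axis=1))
    return changes.mean(axis=1)

# A 440 Hz tone sampled at 16 kHz crosses zero ~880 times per second,
# so the per-sample rate is roughly 880 / 16000 ≈ 0.055
sr = 16_000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
zcr = zero_crossing_rate(tone)
print(f"mean ZCR: {zcr.mean():.4f}")
```

Synthetic speech often shows ZCR and spectral-centroid statistics that deviate from natural recordings, which is why such features are useful inputs to the classifier.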
```
AI-Detector/
├── ai_detector/
│   ├── __init__.py              # Package init
│   ├── app.py                   # Gradio web interface
│   ├── cli.py                   # CLI entry point
│   ├── image/
│   │   ├── __init__.py
│   │   └── detector.py          # Image & video detection (CLIP + ViT)
│   └── audio/
│       ├── __init__.py
│       ├── config.py            # Audio detection settings
│       ├── models.py            # Pydantic response models
│       ├── processor.py         # Audio feature extraction
│       └── detector.py          # Deepfake detection (RandomForest)
├── tests/
│   ├── test_image_detector.py
│   ├── test_audio_detector.py
│   └── test_audio_processor.py
├── pyproject.toml               # Project metadata & dependencies
├── requirements.txt             # Pip requirements
├── LICENSE                      # Apache License 2.0
└── README.md
```
| Category | Libraries |
|---|---|
| Deep Learning | PyTorch · Transformers (HuggingFace) · CLIP · ViT |
| Audio Analysis | Librosa · SoundFile · SciPy |
| ML | scikit‑learn (RandomForest) |
| Computer Vision | OpenCV · Pillow |
| Web UI | Gradio |
| Data Validation | Pydantic |
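The scikit-learn RandomForest step can be sketched as follows. The feature dimensions and training data here are synthetic stand-ins invented for illustration — the real detector trains on spectral features extracted from labeled audio:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic vectors standing in for MFCC / centroid / chroma / ZCR statistics
n_features = 40
real_feats = rng.normal(0.0, 1.0, size=(100, n_features))
fake_feats = rng.normal(0.8, 1.0, size=(100, n_features))  # shifted distribution
X = np.vstack([real_feats, fake_feats])
y = np.array([0] * 100 + [1] * 100)  # 0 = real, 1 = deepfake

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# predict_proba returns [P(real), P(deepfake)] for each sample
sample = rng.normal(0.8, 1.0, size=(1, n_features))
deepfake_probability = clf.predict_proba(sample)[0, 1]
print(f"Deepfake probability: {deepfake_probability * 100:.1f}%")
```

Reporting `predict_proba` rather than a hard class label is what lets the toolkit surface a deepfake *probability*, as in the CLI output above.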
```bash
# Using pytest
pytest

# Verbose output
pytest -v
```

Contributions are welcome! Feel free to open an issue or submit a pull request.

- Fork the repository
- Create a feature branch — `git checkout -b feature/amazing-feature`
- Commit your changes — `git commit -m "Add amazing feature"`
- Push to your branch — `git push origin feature/amazing-feature`
- Open a Pull Request
This project is licensed under the Apache License 2.0 — see the LICENSE file for details.