ANI-GN (Adaptive Neuro-Immune Graph Network) is a research prototype designed to demonstrate an autonomous, self-healing cyber-resilience architecture inspired by biological immune systems and neuroplasticity.
This repository provides a complete, CPU-only, end-to-end prototype that showcases how graph learning, evolutionary optimization, and adaptive structural mechanisms can be integrated into a single system.
The primary goal of this prototype is to demonstrate system behavior, not to achieve state-of-the-art detection performance.
Specifically, ANI-GN aims to show:
- How edge-centric GNNs can model network flow behavior
- How evolutionary optimization (CYO++) can optimize model parameters without gradient-based training
- How neuroplasticity (Ψ_Dropin / Ψ_Pruning) can dynamically adapt system structure
- How immunological memory can retain stable system states and prevent degradation
- How self-healing cycles can autonomously regulate system health using adaptive thresholds
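To make the gradient-free angle concrete, here is a minimal evolutionary loop in the spirit of CYO++ (chaotic logistic-map initialization, an adaptive mutation scale, and an archive of best individuals). All names, parameter values, and the logistic-map reading of "chaotic initialization" are assumptions for illustration, not the prototype's actual implementation:

```python
import numpy as np

def chaotic_init(pop_size, dim, seed=0.7):
    """Initialize a population from a logistic-map chaotic sequence
    (one plausible reading of 'chaotic initialization'; details assumed)."""
    x, vals = seed, []
    for _ in range(pop_size * dim):
        x = 4.0 * x * (1.0 - x)            # logistic map in its chaotic regime
        vals.append(x)
    # map the [0, 1] chaos values onto the search range [-1, 1]
    return np.array(vals).reshape(pop_size, dim) * 2.0 - 1.0

def evolve(fitness_fn, dim, pop_size=20, generations=50):
    """Minimal elitist loop with adaptive mutation and an archive memory."""
    rng = np.random.default_rng(42)
    pop = chaotic_init(pop_size, dim)
    sigma = 0.3                            # adaptive mutation scale
    archive = []                           # memory of per-generation bests
    for _ in range(generations):
        fit = np.array([fitness_fn(p) for p in pop])
        order = np.argsort(fit)            # lower fitness = better
        elites = pop[order[: pop_size // 4]]
        archive.append((fit[order[0]], pop[order[0]].copy()))
        # adapt mutation: shrink while improving, grow when stagnating
        improving = len(archive) > 1 and archive[-1][0] < archive[-2][0]
        sigma = max(0.01, sigma * (0.9 if improving else 1.1))
        # offspring: mutated copies of the elites
        children = np.repeat(elites, pop_size // elites.shape[0], axis=0)
        pop = (children + rng.normal(0.0, sigma, children.shape))[:pop_size]
    best_fit, best = min(archive, key=lambda t: t[0])
    return best, best_fit
```

For example, minimizing the sphere function `sum(p**2)` in five dimensions converges toward the origin without any gradient information.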
ANI-GN integrates multiple biologically inspired components into a single pipeline:
- **Synthetic NetFlow Generator**: generates realistic network flow behavior with controlled attack patterns, eliminating the dependency on external datasets.
- **Edge-Centric Graph Neural Network**: processes edge (flow) features directly; optimized for CPU execution.
- **CYO++ Evolutionary Optimizer**: a custom evolutionary algorithm with chaotic initialization, adaptive parameters, and archive-based memory.
- **Neuroplasticity Engine**: dynamic structural adaptation via
  - Ψ_Dropin: neurogenesis (adds capacity)
  - Ψ_Pruning: neuroapoptosis (removes ineffective structure)
- **Immunological Memory**: stores high-performing parameter states together with their context for later recovery.
- **Self-Healing Controller**: monitors anomaly levels with adaptive thresholds and triggers structural or parametric healing when needed.
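To illustrate how the last two components can interact, the sketch below combines an adaptive anomaly threshold (here an EWMA of the score mean and variance, a common choice but an assumption about the prototype) with a small memory of high-health parameter snapshots that is consulted when healing triggers. Class and method names are hypothetical:

```python
import math

class SelfHealingController:
    """Adaptive-threshold healing loop with a tiny 'immunological memory'.
    When the anomaly score exceeds mean + k * std of recent scores, the
    best remembered parameter state is restored."""

    def __init__(self, k=3.0, alpha=0.1):
        self.k, self.alpha = k, alpha
        self.mean, self.var = 0.0, 1.0   # running score statistics
        self.memory = []                 # (health, params) snapshots
        self.heal_events = 0

    def remember(self, health, params):
        """Store a parameter snapshot, keeping only the top-5 by health."""
        self.memory.append((health, dict(params)))
        self.memory.sort(key=lambda t: -t[0])
        del self.memory[5:]

    def step(self, anomaly_score, params):
        """Return params, possibly replaced by a remembered healthy state."""
        threshold = self.mean + self.k * math.sqrt(self.var)
        # update running stats *after* thresholding so a spike cannot mask itself
        self.mean = (1 - self.alpha) * self.mean + self.alpha * anomaly_score
        self.var = (1 - self.alpha) * self.var \
            + self.alpha * (anomaly_score - self.mean) ** 2
        if anomaly_score > threshold and self.memory:
            self.heal_events += 1
            return self.memory[0][1]     # restore the best remembered state
        return params
```

After a warm-up of low scores the threshold tightens, so a sudden spike triggers exactly one restore from memory.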
Repository structure:

```
ANI-GN/
├── anign_prototype.py    # Complete single-file prototype implementation
├── requirements.txt      # Python dependencies
├── LICENSE
└── README.md             # This file
```
This prototype exclusively uses synthetically generated NetFlow data.
- No real-world network traffic or external datasets are used
- Attack patterns are algorithmically injected into normal traffic
- Detection metrics (e.g., AUC-ROC) are illustrative only and expected to be near random (~0.5)
The synthetic data validates system integration, adaptive behavior, and stability, not real-world detection performance.
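A minimal sketch of what "algorithmically injected attacks" can look like: normal flows drawn from one distribution, a small fraction of shorter, bursty flows to high ports mixed in and labeled. The feature set and attack shape here are illustrative assumptions, not the prototype's actual generator:

```python
import numpy as np

def make_flows(n_flows=10_000, attack_ratio=0.05, seed=0):
    """Generate a toy NetFlow-like table [duration, bytes, packets, port]
    with a labeled fraction of injected 'attack' flows."""
    rng = np.random.default_rng(seed)
    n_attack = int(n_flows * attack_ratio)
    n_normal = n_flows - n_attack

    # normal traffic: log-normal sizes, common service ports
    normal = np.column_stack([
        rng.lognormal(0.0, 1.0, n_normal),           # duration (s)
        rng.lognormal(7.0, 1.5, n_normal),           # bytes
        rng.poisson(10, n_normal).astype(float),     # packets
        rng.choice([80, 443, 53, 22], n_normal).astype(float),
    ])
    # injected attacks: short, bursty flows to high ports (scan-like)
    attack = np.column_stack([
        rng.lognormal(-2.0, 0.5, n_attack),
        rng.lognormal(4.0, 0.5, n_attack),
        rng.poisson(2, n_attack).astype(float),
        rng.integers(1024, 65535, n_attack).astype(float),
    ])
    X = np.vstack([normal, attack])
    y = np.concatenate([np.zeros(n_normal), np.ones(n_attack)])
    perm = rng.permutation(n_flows)                  # shuffle flows and labels
    return X[perm], y[perm]
```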
- Python 3.9 or higher
- CPU-only environment (no GPU required)
```bash
# Install PyTorch CPU version first
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu

# Install PyTorch Geometric and other dependencies
pip install torch_geometric
pip install numpy scipy scikit-learn matplotlib networkx levy

# Or use requirements.txt (may need manual torch installation first)
pip install -r requirements.txt
```

To run the prototype:

```bash
python anign_prototype.py
```

The script runs the complete ANI-GN pipeline:
- Generates synthetic NetFlow dataset (~10k flows)
- Builds edge-centric graph representation
- Trains the GNN using CYO++ evolutionary optimization
- Performs initial anomaly detection evaluation
- Executes self-healing demonstration cycles (forced for visibility)
- Applies neuroplasticity and immunological memory mechanisms
- Generates comprehensive diagnostic visualizations
- Saves results to `ani_gn_complete_results.png`
All stages are logged in detail to the console.
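The "edge-centric graph representation" step can be pictured as follows: hosts become nodes, each flow becomes a directed edge carrying its feature vector. This sketch mirrors torch_geometric's `edge_index` / `edge_attr` layout in plain NumPy; the prototype's exact format and the degree-based node features are assumptions:

```python
import numpy as np

def flows_to_edge_graph(src_ips, dst_ips, flow_feats):
    """Build a PyG-style edge-centric graph from flow records."""
    hosts = sorted(set(src_ips) | set(dst_ips))
    idx = {h: i for i, h in enumerate(hosts)}
    edge_index = np.array([[idx[s] for s in src_ips],
                           [idx[d] for d in dst_ips]])   # shape (2, n_flows)
    edge_attr = np.asarray(flow_feats, dtype=float)      # shape (n_flows, n_feats)
    # simple node features: out-degree and in-degree per host
    x = np.zeros((len(hosts), 2))
    for s, d in zip(edge_index[0], edge_index[1]):
        x[s, 0] += 1
        x[d, 1] += 1
    return x, edge_index, edge_attr
```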
Successful execution produces:
- **Detailed console logs** showing:
  - Training progress and optimizer convergence
  - Healing events and structural changes
  - System health and memory status
- **Visualization file** (`ani_gn_complete_results.png`) illustrating:
  - CYO++ convergence and adaptive parameters
  - Anomaly score distributions
  - Structural adaptation history
  - Threshold evolution
  - Memory usage
Due to the synthetic dataset:
- AUC-ROC values near 0.5 (random guessing) are expected and normal
- Limited separation between normal and attack flows is intentional
- Few or no spontaneous healing events indicate correct conservative behavior
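To see why ~0.5 is the expected baseline, a quick check with a rank-based AUC (the Mann-Whitney U formulation, ignoring ties) on uninformative random scores lands near 0.5 regardless of class balance. This is a generic illustration, not code from the prototype:

```python
import numpy as np

def auc_roc(scores, labels):
    """Rank-based AUC-ROC, equivalent to the Mann-Whitney U statistic."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = lowest score
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
labels = (rng.random(10_000) < 0.05).astype(int)   # ~5% attack flows
scores = rng.random(10_000)                        # uninformative detector
auc = auc_roc(scores, labels)                      # close to 0.5
```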
Primary indicators of success:
- Stable, non-collapsing training loss
- Controlled CYO++ convergence
- Proper subsystem interactions
- Correct triggering of demonstration healing cycles
- Accurate logging of neuroplasticity and memory events
- Synthetic dataset only
- No evaluation on real-world traffic
- Detection performance not optimized
- CPU-only execution
- Prototype-level scalability
These limitations are intentional and aligned with the research demonstration goals.
All components are fully integrated in a single executable file for maximum transparency and ease of study. Feel free to experiment with, modify, and extend the prototype.