The mathematical proof of AI Model Collapse via Semantic Contraction.
Updated Jan 10, 2026 · Jupyter Notebook
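The repository's actual proof is not reproduced here, but the general contraction argument it alludes to can be sketched with the Banach fixed-point theorem: if the "generate synthetic data, then retrain" step acts as a contraction T on a semantic embedding space with metric d (my assumption about the shape of the argument, not this repo's derivation), iterated retraining is forced geometrically toward a single degenerate fixed point.

```latex
% Illustrative contraction-mapping sketch (an assumption about the general argument,
% not this repository's derivation). T is the generate-then-retrain map, d a semantic metric.
\[
  d\bigl(T(x),\,T(y)\bigr) \;\le\; k\, d(x, y), \qquad 0 \le k < 1,
\]
\[
  d\bigl(x_n,\, x^{*}\bigr) \;\le\; k^{\,n}\, d\bigl(x_0,\, x^{*}\bigr) \;\xrightarrow{\;n \to \infty\;}\; 0,
\]
% so every starting distribution x_0 collapses onto the unique fixed point x*.
```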
Project on Model Collapse in LLMs – Big Data Engineering (MSc), supervised by Prof. V. Moscato, G. M. Orlando (PhD), and D. Russo (PhD) (2025)
Governance + provenance framework to prevent AI model collapse via semantic fingerprinting, cryptographic lineage, and federated trust. DOI: https://osf.io/ufek5
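As a rough illustration of what cryptographic lineage can look like (a minimal sketch under my own assumptions, not this framework's actual API), each dataset or model generation can carry a record whose hash commits to both its own content and its parent's hash, so tampering anywhere upstream invalidates every downstream record:

```python
# Minimal hash-chained lineage sketch (illustrative only; not the repository's implementation).
import hashlib
import json
from typing import Optional

def lineage_record(content: str, parent_hash: Optional[str]) -> dict:
    """Build a record whose hash commits to both the content and the parent record's hash."""
    body = {
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        "parent": parent_hash,
    }
    record_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": record_hash}

# Hypothetical chain: human-written corpus -> model v1 outputs -> model v2 outputs.
root = lineage_record("original human-written corpus", parent_hash=None)
gen1 = lineage_record("outputs generated by model v1", parent_hash=root["hash"])
gen2 = lineage_record("outputs generated by model v2", parent_hash=gen1["hash"])
print(gen2["hash"])
```

Verification then amounts to recomputing hashes from the root forward and comparing them against the stored chain.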
🛡️ Defense framework against Cognitive Vandalism and data poisoning in LLMs. Quantitative analysis of historical revisionism, moral-drift metrics, and implementation of proofs of reality via temporal hashing (C2PA/Blockchain)
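A timestamped content hash is the simplest form of such a "proof of reality"; the sketch below is illustrative only (a real deployment would anchor the digest in a C2PA manifest or a public ledger, neither of which is shown here):

```python
# Illustrative timestamped content hash; not this repository's actual mechanism.
import hashlib
from datetime import datetime, timezone

def timestamped_proof(content: str) -> dict:
    """Bind a content hash to a UTC capture timestamp and hash the pair."""
    content_hash = hashlib.sha256(content.encode()).hexdigest()
    captured_at = datetime.now(timezone.utc).isoformat()
    proof = hashlib.sha256(f"{content_hash}|{captured_at}".encode()).hexdigest()
    return {"content_sha256": content_hash, "captured_at": captured_at, "proof": proof}

print(timestamped_proof("Archived source text exactly as it existed at capture time."))
```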
Experiments for my Bachelor's thesis on fine-tuning language models and analyzing model collapse on synthetic generational data.
The codebase for the project "watch me kill my language", where an LLM is iteratively fine-tuned on the prompts that users send, leading to a slow decline into insanity.
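The two entries above describe the same feedback loop: each generation is trained only on text produced by the previous one. A toy resampling simulation (no actual fine-tuning, purely illustrative) shows why diversity erodes under that loop: rare tokens drift out of the distribution and never return.

```python
# Toy simulation of generational collapse via resampling (illustrative only; no LLM involved).
import random

random.seed(0)
corpus = list(range(1000))  # generation 0: 1000 distinct "tokens"

for generation in range(1, 11):
    # Each generation is "trained" only on data resampled from its predecessor.
    corpus = random.choices(corpus, k=len(corpus))
    print(f"generation {generation:2d}: {len(set(corpus))} distinct tokens remain")
```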
A Python framework for measuring and enforcing semantic stability in LLMs through Geometric Information Theory. Implements Origin Node Invariance and Causal Language Syntax (CLS) to detect and prevent model collapse.
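Origin Node Invariance and CLS are this framework's own constructs and are not reproduced here; a generic stand-in for "measuring semantic stability" is to track how tightly a generation's embeddings cluster around their centroid and flag contraction when the spread shrinks (sketch with synthetic numpy data):

```python
# Generic semantic-contraction check (illustrative only; not this framework's method).
import numpy as np

def spread(embeddings: np.ndarray) -> float:
    """Mean distance of embeddings from their centroid: a crude semantic-diversity score."""
    centroid = embeddings.mean(axis=0)
    return float(np.linalg.norm(embeddings - centroid, axis=1).mean())

rng = np.random.default_rng(0)
gen0 = rng.normal(size=(500, 384))        # stand-in for generation-0 sentence embeddings
gen5 = 0.4 * rng.normal(size=(500, 384))  # a later, visibly contracted generation
print(f"gen0 spread: {spread(gen0):.3f}  gen5 spread: {spread(gen5):.3f}")
```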