Internal Safety Collapse: Turning an LLM or AI agent into a sensitive data generator.
Updated Apr 1, 2026 - Python
Autonomous AI researcher that probes where frontier models disagree — with TEE-verified independent responses on the OpenGradient Network
Human-as-API for frontier models — compile prompts, deliver via Telegram, inject replies back into Pi
Interactive visualization of METR AI agent time horizon benchmark with exponential projections at 3, 6, 12, 18, 24, and 36 months. Tracks p50/p80 task-completion horizons across 22 frontier models (2019-2026).
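The exponential projections described above can be sketched as follows. This is a minimal illustration of projecting a task-completion time horizon forward under an assumed fixed doubling time; the function name and the sample inputs (a 60-minute p50 horizon, a 210-day doubling time) are hypothetical and are not values taken from the METR benchmark itself.

```python
def project_horizon(current_minutes: float, doubling_days: float,
                    months_ahead: float) -> float:
    """Project a time horizon forward, assuming exponential growth
    with a fixed doubling time (in days)."""
    days_ahead = months_ahead * 30.44  # average Gregorian month length
    return current_minutes * 2 ** (days_ahead / doubling_days)

# Illustrative only: a 60-minute horizon with a 210-day doubling time,
# projected at the same offsets used by the visualization.
for months in (3, 6, 12, 18, 24, 36):
    print(f"{months:2d} months: {project_horizon(60, 210, months):.1f} min")
```

The same function works for either the p50 or p80 horizon, since the projection only depends on the current value and the assumed doubling time.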