Agent Indoctrination – AI Safety, Bias, Fairness, Ethics & Compliance Testing Framework 🚀
Updated Nov 25, 2025 · Python
An auditing framework to evaluate LLMs in local government reporting. Compares AI-generated headlines and topic prioritization against professional journalistic standards. Submitted to CHI 2026.
Recon-Level Audit of Claude 4 – Obfuscated, Ethical & Technically Precise
LLM provider integrity testing tool (model authenticity verification / token reconciliation / cache compliance / performance degradation) · by 15code
AI agent that transforms existing codebases: no migrations, no rewrites, working directly on production code.
🐙 Ethical red-team audit of Claude 4 with clear introspection and policy visibility. Includes JSON data and Python tooling; Mermaid diagrams map model behavior.