Fully automatic censorship removal for language models
Make abliterated models with transformers, easy and fast
Enhanced fork of Heretic (an automated LLM de-censoring tool) optimized for macOS (Apple Silicon) with checkpoint system and LM Studio integration
Powerful no-code LLM fine-tuner: upload data → train → deploy in minutes. Unsloth 2-5× acceleration · QLoRA/DPO/RLHF/PPO/ORPO · Reward Model training · GGUF export · vLLM inference · BLEU/ROUGE/BERTScore · full CLI · Heretic Mode to unlock full model potential
Modify a language model's behavior by abliterating its weights.
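The entry above names the core operation without showing it. A minimal sketch of the common formulation of abliteration, with toy data: estimate a "refusal direction" as the difference between mean activations on refused versus complied prompts, then project that direction out of a weight matrix. The dimensions, random stand-ins for activations, and single-matrix scope here are illustrative assumptions, not any listed tool's actual pipeline.

```python
# Hedged sketch of abliteration: remove a single "refusal direction"
# from a weight matrix by orthogonal projection. Toy data only; real
# tools locate the direction from contrastive prompt activations
# inside a transformer and apply the edit to many layers.
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy hidden size (assumption)

# Stand-ins for mean residual-stream activations on "refused" vs
# "complied" prompts; their difference estimates the refusal direction.
act_refuse = rng.normal(size=d)
act_comply = rng.normal(size=d)
r = act_refuse - act_comply
r /= np.linalg.norm(r)  # unit refusal direction

W = rng.normal(size=(d, d))  # one layer's output-projection weight (toy)

# Ablate: W' = (I - r r^T) W, so the layer can no longer
# write anything along r into the residual stream.
W_abl = W - np.outer(r, r) @ W

# Outputs of the ablated weight have ~zero component along r.
x = rng.normal(size=d)
print(abs(r @ (W_abl @ x)))
```

Applying this projection to every attention-output and MLP-down projection is, roughly, what the abliteration tools in this list automate, along with finding the direction that best separates refusals.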
Layer-by-layer model training and modification for 80B+ MoE models on consumer GPUs. Abliteration, LongRoPE, LoRA merge, weight visualization. Built because nothing else could do it. https://justcalljon.pro
Genre Mimicry in Academic Writing: Abliterated LLMs and Genre-Dependent Safety Behaviors | DP-2503 | Dissensus AI Discussion Paper
Lobopy is a lightweight PyTorch/HuggingFace library for analysing, steering, and abliterating causal language models.
Local, offline CLI for an abliterated, distilled GGUF model
MLX-native toolkit for understanding and reshaping how language models behave on Apple Silicon
Uncensoring LLMs via abliteration and rehabilitating them via RLVR/GRPO with a small post-training corpus
🚀 Train and modify 80B+ parameter Mixture of Experts models layer-by-layer on consumer GPUs using Python with AEGIS AI Trainer.