# llm-benchmarks

Here are 4 public repositories matching this topic...


This project conducts a systematic, controlled study of the detectability of human- versus LLM-generated text using paired question–answer datasets. Rather than proposing a novel detection architecture, it focuses on analyzing detection robustness, failure modes, and the impact of adversarial humanization strategies.

  • Updated Mar 19, 2026
  • Jupyter Notebook
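
The listing above describes a paired-evaluation setup: for each question, a human answer and an LLM answer are scored by a detector and the separation between the two groups is measured. The sketch below is a minimal, assumption-laden illustration of that idea; the `detector_score` heuristic and the sample pairs are placeholders invented for the example, not part of the project's actual detector or data.

```python
# Minimal sketch of paired human-vs-LLM detection evaluation (illustrative only).
from sklearn.metrics import roc_auc_score

def detector_score(text: str) -> float:
    """Placeholder detector: higher score = 'more likely LLM-generated'.
    Uses average word length as a crude proxy so the example runs without
    a trained model; a real study would substitute an actual detector."""
    words = text.split()
    if not words:
        return 0.0
    avg_len = sum(len(w) for w in words) / len(words)
    return min(avg_len / 10.0, 1.0)

# Paired question–answer data: one human answer and one LLM answer per question
# (toy placeholders, not real study data).
pairs = [
    {"question": "What causes tides?",
     "human": "Mostly the moon pulling on the oceans, plus a bit from the sun.",
     "llm": "Tides are primarily caused by the gravitational interaction between the Earth and the Moon."},
    {"question": "Why is the sky blue?",
     "human": "Blue light scatters more in the air, so that's what we see.",
     "llm": "The sky appears blue due to Rayleigh scattering of shorter wavelengths by atmospheric molecules."},
]

# Score every answer; label 1 = LLM-generated, 0 = human-written.
scores, labels = [], []
for pair in pairs:
    scores += [detector_score(pair["human"]), detector_score(pair["llm"])]
    labels += [0, 1]

# AUROC summarizes how well the scores separate human from LLM answers;
# rerunning the same evaluation on "humanized" LLM answers would expose
# robustness gaps of the kind the project studies.
print("AUROC:", roc_auc_score(labels, scores))
```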
