Political bias scoring of Swiss news articles using local LLMs via Ollama.
polibias scrapes articles from RTS, The Federalist, Jacobin, Watson, Protestinfo, and Cathinfo, scores them across four bias dimensions with local models, and produces CSV, statistics, and HTML outputs.
Live Streamlit dashboard:
| Dimension | What it measures |
|---|---|
| `subject_bias` | Does the topic selection itself lean left or right? |
| `framing_bias` | Is the narrative or tone left- or right-leaning? |
| `treatment_bias` | Does the article treat one side more favorably? |
| `guests_bias` | Are quoted voices more left or more right? |
Scores are in [-1.0, +1.0] where negative means left-leaning and positive means right-leaning.
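To make the scale concrete, here is a minimal sketch of aggregating the four dimension scores into an overall lean. The `scores` values and the `overall_lean` helper are illustrative assumptions, not polibias' actual output schema or API:

```python
# Hypothetical per-article scores for the four dimensions (illustrative values,
# not real polibias output).
scores = {
    "subject_bias": -0.4,
    "framing_bias": -0.2,
    "treatment_bias": 0.1,
    "guests_bias": -0.3,
}

def overall_lean(dims: dict) -> float:
    """Average the dimension scores, clamped to [-1.0, +1.0]."""
    mean = sum(dims.values()) / len(dims)
    return max(-1.0, min(1.0, mean))

lean = overall_lean(scores)
label = "left-leaning" if lean < 0 else "right-leaning" if lean > 0 else "neutral"
```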
- Ollama running locally (`ollama serve`)
- Python 3.10+
Pull the default models from `src/polibias/config.py` (example):

```bash
ollama pull llama3.2:latest
ollama pull gemma2:latest
ollama pull phi3:mini
ollama pull qwen2.5:3b-instruct
ollama pull gemma3:4b
```

Set up the environment:

```bash
python -m venv .venv && source .venv/bin/activate
pip install -e .
```

`--run-dir` selects the output folder name under `data/runs/`. If omitted, it defaults to `run_results`.
```bash
polibias run --run-dir temp_02_ctx2k
polibias run --run-dir temp_08_ctx4k
```

You can run individual grouped commands too:
```bash
polibias scrape --source rts
polibias scrape --source the_federalist --limit 20
polibias scrape --source jacobin --limit 20
polibias scrape --source watson --limit 20
polibias scrape --source protestinfo --limit 20
polibias scrape --source cathinfo --limit 20

polibias score --run-dir exp_a --source all
polibias score --run-dir exp_a --source rts
polibias score --run-dir exp_a --source the_federalist
polibias score --run-dir exp_a --source jacobin
polibias score --run-dir exp_a --source watson
polibias score --run-dir exp_a --source protestinfo
polibias score --run-dir exp_a --source cathinfo

polibias analyze --run-dir exp_a
polibias stats --run-dir exp_a
polibias export --run-dir exp_a
polibias bambi analyze --run-dir exp_a
polibias bambi viz --run-dir exp_a

polibias viz --run-dir exp_a                 # report.html
polibias viz --run-dir exp_a --source rts
polibias viz --run-dir exp_a --source the_federalist
polibias viz --run-dir exp_a --source jacobin
polibias viz --run-dir exp_a --source watson
polibias viz --run-dir exp_a --source protestinfo
polibias viz --run-dir exp_a --source cathinfo
polibias viz --run-dir exp_a --source all    # report_all.html

polibias check --run-dir exp_a
```

Legacy commands (`all`, `analyse`, `score-rts`, `viz-fed`, etc.) are still accepted.
Bayesian audit extras (optional):

```bash
pip install -e ".[bayes]"
polibias bambi --run-dir exp_a
```

Shared scraped content:
- `data/webdata/rts/*.json`
- `data/webdata/the_federalist/*.json`
- `data/webdata/jacobin/*.json`
- `data/webdata/watson/*.json`
- `data/webdata/protestinfo/*.json`
- `data/webdata/cathinfo/*.json`
Per-run outputs:
- `data/runs/<run_dir>/rts_results/<model>/<run>/*.json`
- `data/runs/<run_dir>/the_federalist_results/<model>/<run>/*.json`
- `data/runs/<run_dir>/jacobin_results/<model>/<run>/*.json`
- `data/runs/<run_dir>/watson_results/<model>/<run>/*.json`
- `data/runs/<run_dir>/protestinfo_results/<model>/<run>/*.json`
- `data/runs/<run_dir>/cathinfo_results/<model>/<run>/*.json`
- `data/runs/<run_dir>/errors/errors.jsonl`
- `data/runs/<run_dir>/errors/raw_outputs/*.txt` (raw failed model responses)
- `data/runs/<run_dir>/bias_data.csv`
- `data/runs/<run_dir>/web_data.csv`
- `data/runs/<run_dir>/stats_report.csv`
- `data/runs/<run_dir>/report.html`
- `data/runs/<run_dir>/report_rts.html`
- `data/runs/<run_dir>/report_fed.html`
- `data/runs/<run_dir>/report_jacobin.html`
- `data/runs/<run_dir>/report_watson.html`
- `data/runs/<run_dir>/report_protestinfo.html`
- `data/runs/<run_dir>/report_cathinfo.html`
- `data/runs/<run_dir>/report_all.html`
- `data/runs/<run_dir>/article_summaries.csv`
- `data/runs/<run_dir>/bias_table.tex`
Edit `src/polibias/config.py` to change models, runs, timeouts, and Ollama options.
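Run names like `temp_02_ctx2k` suggest sweeps over sampling temperature and context length, which correspond to Ollama's standard `temperature` and `num_ctx` options. A sketch of a non-streaming call to Ollama's `/api/generate` endpoint; the helper names are illustrative, not polibias functions:

```python
import json
from urllib import request

def build_payload(model: str, prompt: str,
                  temperature: float = 0.2, num_ctx: int = 2048) -> dict:
    """Assemble a non-streaming /api/generate request body."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": temperature, "num_ctx": num_ctx},
    }

def ollama_generate(model: str, prompt: str, timeout: float = 120.0, **opts) -> str:
    """POST to a local Ollama server and return the generated text."""
    body = json.dumps(build_payload(model, prompt, **opts)).encode("utf-8")
    req = request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["response"]
```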
Prompt template: `src/polibias/prompt.md`

Input URL files:

- `data/input_files/rts_links.csv` (or `rts_links.txt`)
- `data/input_files/watson_links.txt`
- `data/input_files/protestinfo_links.txt`
- `data/input_files/cathinfo_links.txt`
Install dev deps and run the tests:

```bash
pip install -e ".[dev]"
python -m pytest -q
```

License: MIT