🚀 TrustCheckAI – Bias and Compliance Detection Platform

TrustCheckAI is an end-to-end bias auditing, explainability, and model-monitoring platform designed to evaluate fairness, mitigate discrimination, explain model decisions, and continuously monitor deployed machine learning systems using Prometheus & Grafana.

It provides real-time dashboards, fairness metrics, model explainability (LIME), drift detection, automated PDF reporting, and user feedback collection – all wrapped in a modern Streamlit UI and containerized for seamless deployment.


📑 Table of Contents

  • ✨ Features
  • 📚 Project Structure
  • 🧰 Technical Stack
  • 📚 Supported Datasets
  • 🏗 System Architecture
  • 🧩 Protected Attribute
  • 🔐 Compliance, Fairness & Security
  • 🎨 Human-Centered Design (HCI)
  • ⚙️ Installation & Setup
  • 🧪 Usage Workflow
  • 📊 Prometheus Metrics
  • 📉 Drift Detection
  • 📘 PDF Report Generation
  • 🎥 Demonstration
  • 🛣 Roadmap
  • 📄 Citations
  • 🤝 Acknowledgements

✨ Features

🟣 Bias Detection (AIF360)

  • Statistical Parity Difference
  • Disparate Impact
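
A minimal sketch of how these two metrics can be computed with AIF360; the tiny synthetic frame, the race column, and the 0/1 group encoding are illustrative stand-ins, not the app's actual schema:

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: protected attribute and label must be numeric 0/1 for AIF360.
df = pd.DataFrame({
    "race":  [1, 1, 1, 0, 0, 0],   # 1 = privileged group, 0 = unprivileged
    "label": [1, 1, 0, 1, 0, 0],   # 1 = favorable outcome
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["race"],
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"race": 1}],
    unprivileged_groups=[{"race": 0}],
)

# SPD = P(favorable | unprivileged) - P(favorable | privileged); 0 is ideal.
print("Statistical Parity Difference:", metric.statistical_parity_difference())
# DI = P(favorable | unprivileged) / P(favorable | privileged); 1 is ideal.
print("Disparate Impact:", metric.disparate_impact())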

🧠 Explainability (XAI)

  • LIME – local explanations per prediction
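
A minimal sketch of producing a LIME explanation for one prediction; the random training data, feature names, and RandomForest model are placeholders for whatever the app actually trains:

import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

# Placeholder training data and model.
X_train = np.random.rand(200, 4)
y_train = (X_train[:, 0] > 0.5).astype(int)
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["f1", "f2", "f3", "f4"],
    class_names=["negative", "positive"],
    mode="classification",
)
# Explain a single row: which features pushed the prediction where?
explanation = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # [(feature condition, weight), ...]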

📉 Drift Detection

  • Kolmogorov–Smirnov (KS) Test

⚙ Model Training & Evaluation

  • Logistic Regression
  • Random Forest
  • 5-fold cross-validation
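
Roughly what this step looks like in scikit-learn, with make_classification standing in for the uploaded dataset:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=8, random_state=42)  # placeholder data

for name, model in [
    ("Logistic Regression", LogisticRegression(max_iter=1000)),
    ("Random Forest", RandomForestClassifier(random_state=42)),
]:
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")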

🔎 Real-Time Monitoring & Alerting

  • Prometheus metric exporter
  • Grafana dashboards
  • Automated Slack alerts for accuracy/fairness drift

📄 Automatic PDF Reporting

  • Full bias report
  • Model performance summary

🖥 Modern Streamlit UI

  • Clean, intuitive layout
  • File upload, analysis, visualization
  • User feedback

🐳 Fully Containerized

  • Streamlit
  • Prometheus
  • Grafana
  • Docker Compose orchestration

📚 Project Structure

TrustCheckAI/
β”œβ”€β”€ .ipynb_checkpoints/        # Auto-generated Jupyter checkpoints
β”œβ”€β”€ __pycache__/               # Python bytecode cache
β”œβ”€β”€ .DS_Store                  # macOS system metadata
β”œβ”€β”€ Dockerfile                 # Docker build instructions
β”œβ”€β”€ Final Report.pdf           # Final Project Report Template 
β”œβ”€β”€ README.md                  # Project documentation
β”œβ”€β”€ TrustCheckAI-Demo.mp4      # Full application demo video
β”œβ”€β”€ TrustCheckAI-demo.gif      # GIF preview for README
β”œβ”€β”€ compas-scores-two-years.csv # COMPAS dataset for fairness analysis
β”œβ”€β”€ docker-compose.yml         # Multi-service orchestration (Streamlit + Prometheus + Grafana)
β”œβ”€β”€ feedback.log               # Logs for user feedback & events
β”œβ”€β”€ prometheus.yml             # Prometheus scraping config
β”œβ”€β”€ requirements.txt           # Python dependencies
└── streamlit_app.py           # Main Streamlit application

🧰 Technical Stack

ML & Fairness

  • Python 3.9+
  • Scikit-learn
  • AIF360
  • LIME

Monitoring & Observability

  • Prometheus
  • Grafana

Frontend

  • Streamlit

DevOps

  • Docker
  • Docker Compose
  • GitHub

📚 Supported Datasets

  • COMPAS – Criminal Justice
  • User-Uploaded Structured Datasets (CSV)

Each dataset includes at least one protected attribute such as race, gender, or age that is used for fairness auditing.

πŸ— System Architecture

The high-level architecture of TrustCheckAI is shown below:

                  +---------------------------+
                  |         User (UI)         |
                  |  • Upload CSV dataset     |
                  |  • Configure analysis     |
                  +-------------+-------------+
                                |
                                v
                  +-------------+-------------+
                  |   Streamlit Application   |
                  |  • Orchestration          |
                  |  • UX & controls          |
                  +-------------+-------------+
                                |
        +-----------------------+--------------------------+
        |                       |                          |
        v                       v                          v
+----------------+  +------------------------+  +---------------------+
| Preprocessing  |  | Bias & Fairness        |  | Model Training &    |
| & Validation   |->| Analysis (AIF360)      |->| Evaluation (SKL)    |
| • Cleaning     |  | • Metrics & thresholds |  | • LR / RF           |
+----------------+  +------------------------+  +---------------------+
                                                          |
                                                          v
                                           +-----------------------------+
                                           | Explainability (LIME)       |
                                           +-----------------------------+
                                                          |
                                                          v
                                           +-----------------------------+
                                           | Drift Detection (KS)        |
                                           +-----------------------------+
                                                          |
                                                          v
                                        +-----------------------------------+
                                        | Prometheus Metrics Exporter       |
                                        | • upload_counter, accuracy_gauge  |
                                        +-----------------------------------+
                                                          |
                                                          v
                                        +-----------------------------------+
                                        | Grafana Dashboards & Alerts       |
                                        | • Accuracy / fairness panels      |
                                        | • Slack / email alerts            |
                                        +-----------------------------------+

Component summary:

  • Streamlit App – central controller for data upload, analysis steps, and visualization.
  • AIF360 Module – computes fairness metrics and applies mitigation algorithms.
  • Model Training – trains ML models and logs metrics.
  • XAI Module – generates LIME explanations for transparency.
  • Drift Detection – monitors changes in data and predictions over time.
  • Prometheus & Grafana – collect, visualize, and alert on key metrics.

🧩 Protected Attribute

In TrustCheckAI, the protected attribute is a sensitive feature such as race, gender, age, or ethnicity that represents groups we want to protect from unfair treatment.

Why it is important:

  • 📏 Fairness metrics are defined with respect to protected groups.
    Measures like Statistical Parity Difference, Disparate Impact, and Equal Opportunity compare outcomes between protected and non‑protected groups. Without a protected attribute, these metrics cannot be computed.

  • 🧪 Bias detection requires group-wise comparison.
    By conditioning on the protected attribute, TrustCheckAI can reveal whether the model treats one group systematically worse than another (e.g., lower approval rates or higher false-positive rates).

  • 🛑 Used for auditing, not for discrimination.
    In a responsible workflow, the protected attribute is often excluded from the model features used for prediction, but retained in the evaluation pipeline so that fairness can be audited post‑hoc.

  • 📜 Regulatory and ethical compliance.
    Many regulations (EEOC, GDPR “special categories”, anti‑discrimination laws) explicitly refer to protected characteristics. Correctly identifying and handling the protected attribute is essential for demonstrating compliance.

TrustCheckAI makes the protected attribute explicit in the UI and in the generated reports so that stakeholders clearly understand which groups are being evaluated for fairness and how mitigation affects them.
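
The "exclude from the features, retain for evaluation" pattern described above can be sketched in a few lines; the column names, synthetic data, and split parameters here are hypothetical:

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an uploaded dataset; column names are made up.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "income":   rng.normal(50_000, 15_000, 400),
    "age":      rng.integers(18, 70, 400),
    "gender":   rng.integers(0, 2, 400),   # protected attribute
    "approved": rng.integers(0, 2, 400),   # target variable
})
protected, target = "gender", "approved"

X = df.drop(columns=[protected, target])   # the model never sees the protected column
y = df[target]
X_tr, X_te, y_tr, y_te, prot_tr, prot_te = train_test_split(
    X, y, df[protected], test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
preds = model.predict(X_te)

# Retained for post-hoc auditing: favorable-outcome rate per protected group.
print(pd.Series(preds, index=prot_te.index).groupby(prot_te).mean())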


πŸ” Compliance, Fairness & Security

  • Regulatory alignment (EEOC, criminal-justice fairness)
  • Differential privacy
  • Ethical AI lifecycle tracking
  • Secure isolated containers

🎨 Human-Centered Design (HCI)

  • Accessible charts
  • Colorblind-safe design
  • Clear fairness/performance separation
  • Prototyped user flows

βš™οΈ Installation & Setup

git clone https://github.com/27HarshalPatel/TrustCheckAI.git
cd TrustCheckAI
docker-compose up --build

Access (ports assume the defaults in docker-compose.yml; adjust if yours differ):

  • Streamlit UI – http://localhost:8501
  • Prometheus – http://localhost:9090
  • Grafana – http://localhost:3000

🧪 Usage Workflow

  1. Upload CSV
  2. Select "Protected" Attribute
  3. Select "Target" Variable
  4. Run "Analyze Dataset"
  5. View the bias and compliance check results along with the model's prediction accuracy
  6. View LIME Analyses
  7. Generate PDF
  8. Monitor in Grafana
  9. Receive Slack alerts if accuracy falls below 70%

📊 Prometheus Metrics

  • upload_counter
  • analysis_counter
  • accuracy_gauge
  • feedback_ratings_counter
  • feedback_comments_counter
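
These map onto standard prometheus_client primitives. A sketch of how such an exporter can be wired up – the metric names match the list above, but the port and the points at which the app updates them are assumptions:

from prometheus_client import Counter, Gauge, start_http_server

# Expose a /metrics endpoint for Prometheus to scrape (port 8000 is assumed).
start_http_server(8000)

upload_counter = Counter("upload_counter", "Number of datasets uploaded")
analysis_counter = Counter("analysis_counter", "Number of analyses run")
accuracy_gauge = Gauge("accuracy_gauge", "Latest model accuracy")

upload_counter.inc()       # call on each CSV upload
analysis_counter.inc()     # call on each completed analysis
accuracy_gauge.set(0.87)   # update after evaluation; Grafana alerts when it drops below 0.70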

📉 Drift Detection

  • KS Test
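
The KS test compares a feature's distribution at training time against fresh data. A minimal version with SciPy; the 0.05 significance threshold is the conventional default, not necessarily the one the app uses:

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, 1000)   # e.g., a feature at training time
current = rng.normal(0.5, 1.0, 1000)     # the same feature in incoming data

stat, p_value = ks_2samp(reference, current)
if p_value < 0.05:  # small p-value: distributions differ, flag drift
    print(f"Drift detected (KS statistic={stat:.3f}, p={p_value:.4f})")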

📘 PDF Report Generation

Each generated report includes the computed fairness metrics and a model performance summary.
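
The README does not name the PDF library streamlit_app.py uses; as one plausible approach, here is a sketch with fpdf2 and made-up metric values:

from fpdf import FPDF  # the fpdf2 package

metrics = {
    "Statistical Parity Difference": -0.12,  # placeholder values
    "Disparate Impact": 0.81,
    "Accuracy": 0.87,
}

pdf = FPDF()
pdf.add_page()
pdf.set_font("Helvetica", size=14)
pdf.cell(0, 10, "TrustCheckAI Fairness Report")
pdf.ln(12)
pdf.set_font("Helvetica", size=11)
for name, value in metrics.items():
    pdf.cell(0, 8, f"{name}: {value}")
    pdf.ln(8)
pdf.output("fairness_report.pdf")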


🎥 Demonstration

See TrustCheckAI-demo.gif for a quick preview and TrustCheckAI-Demo.mp4 for the full application walkthrough; both files live in the repository root.

🛣 Roadmap

  • Fairlearn integration
  • Kubernetes deployment
  • Extended fairness metrics

📄 Citations

  • IBM AIF360 – Bellamy et al., "AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias" (2018), https://github.com/Trusted-AI/AIF360
  • COMPAS dataset – ProPublica, "Machine Bias" recidivism data (Angwin et al., 2016)

🤝 Acknowledgements

  • University of Florida
  • HiPerGator Computing
  • Open-source community
