Detect and redact PII locally with SOTA performance
Extract structured data from local or remote LLMs
A Chrome extension for querying a local LLM using llama-cpp-python. Includes a pip package for running the server; install with 'pip install local-llama'.
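For orientation, a minimal sketch of talking to such a server: llama-cpp-python can expose an OpenAI-compatible API (started with, e.g., `python -m llama_cpp.server --model ./model.gguf`, listening on port 8000 by default). The model alias and prompt below are placeholders, not the extension's actual code.

```python
# Hedged sketch: query a local llama-cpp-python server through its
# OpenAI-compatible endpoint. Port and model alias are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="local",  # placeholder; the server serves whatever model it loaded
    messages=[{"role": "user", "content": "Summarize this page in one sentence."}],
)
print(resp.choices[0].message.content)
```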
An entirely open-source, locally running version of Recall (originally revealed by Microsoft for Copilot+ PCs)
A simple framework for using Claude Code or Codex CLI as the frontend to any cloud or local LLM on Apple Silicon. Connect locally via LiteLLM + MLX or LM Studio, or remotely via Z.AI, Gemini/Google AI Studio, DeepSeek, or OpenRouter.
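As a rough sketch of that local route (not the framework's actual wiring), LiteLLM can forward an OpenAI-style request to an LM Studio server; the port is LM Studio's default, and the model name and api_key are assumptions:

```python
# Hedged sketch: route a completion through LiteLLM to a local
# LM Studio endpoint. Model name, port, and api_key are assumptions.
import litellm

response = litellm.completion(
    model="openai/local-model",           # "openai/" prefix = any OpenAI-compatible backend
    api_base="http://localhost:1234/v1",  # LM Studio's default local server
    api_key="lm-studio",                  # local servers generally ignore the key
    messages=[{"role": "user", "content": "Hello from a local model"}],
)
print(response.choices[0].message.content)
```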
A small VLM that sees everything
Bell inequalities and local models via Frank-Wolfe algorithms
A 💅 stylish 💅 local multi-model AI assistant and API.
The AI-OS in userspace.
Main code chunks used for models in the publication "Exploring the Potential of Adaptive, Local Machine Learning (ML) in Comparison to the Prediction Performance of Global Models: A Case Study from Bayer's Caco-2 Permeability Database"
Extracting complete webpage articles from a screen recording using local models
ODK: An open-source AI shell to control your computer with natural language.
Code for the published work "Global or local modeling for XGBoost in geospatial studies upon simulated data and German COVID-19 infection forecasting"
Running the Multimodal AI Chat App with LM Studio using a locally loaded model
A comprehensive learning repository for Model Context Protocol (MCP) - from simple tools to complex agentic workflows using local Ollama models
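For a flavor of the "simple tools" end of that spectrum, here is a minimal MCP server built with the official `mcp` Python SDK; the server name and tool are illustrative, not taken from the repository:

```python
# Hedged sketch: a one-tool MCP server using the official Python SDK.
# The tool itself is a hypothetical example.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, ready for an MCP client
```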
A vision-based avatar that reads Google News and extracts stories on its own using only local models
Desktop application for generating AI-powered educational case studies with support for OpenAI, Anthropic, Google Gemini, and local Ollama models
A streamlined interface for interacting with local Large Language Models (LLMs) using Streamlit. Features interactive chat, configurable model parameters, and more.
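A bare-bones sketch of such a Streamlit chat loop, here pointed at Ollama's OpenAI-compatible endpoint (base URL and model name are assumptions, not this repository's defaults):

```python
# Hedged sketch: minimal Streamlit chat UI over a local
# OpenAI-compatible server. Endpoint and model are assumptions.
import streamlit as st
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

if "history" not in st.session_state:
    st.session_state.history = []

for msg in st.session_state.history:  # replay the conversation so far
    st.chat_message(msg["role"]).write(msg["content"])

if prompt := st.chat_input("Ask the local model"):
    st.session_state.history.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)
    resp = client.chat.completions.create(
        model="llama3",  # assumption: any model already pulled locally
        messages=st.session_state.history,
    )
    answer = resp.choices[0].message.content
    st.session_state.history.append({"role": "assistant", "content": answer})
    st.chat_message("assistant").write(answer)
```

Run it with `streamlit run app.py` (the filename is arbitrary).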
Running the Multimodal AI Chat App with Ollama using a locally loaded model
A PowerShell bridge that hard-links Ollama GGUF models into LM Studio’s models folder so LM Studio can use them without duplicating disk usage.
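The core trick, shown here as a hedged Python sketch rather than the repository's PowerShell (paths are typical defaults and the naming is simplified; the actual script resolves human-readable model names from Ollama's manifest files):

```python
# Hedged sketch: hard-link Ollama's GGUF blobs into an LM Studio
# models folder so the weights exist only once on disk.
import os
from pathlib import Path

ollama_blobs = Path.home() / ".ollama" / "models" / "blobs"      # assumed default
lmstudio_dir = Path.home() / ".lmstudio" / "models" / "bridged"  # assumed target
lmstudio_dir.mkdir(parents=True, exist_ok=True)

for blob in ollama_blobs.glob("sha256-*"):
    target = lmstudio_dir / (blob.name + ".gguf")
    if not target.exists():
        os.link(blob, target)  # hard link: same inode, no extra disk usage
        print(f"linked {blob.name}")
```

Hard links only work within a single filesystem volume, which is why the bridge links rather than copies.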