Implement backend support for Ollama model usage through LangChain, including key features such as model verification, chat functionality, and listing locally installed models. This allows users to interact with local models securely, and only when those models are actually installed.
🔧 Key Features
✅ 1. LangChain + Ollama Integration
- Use LangChain’s Ollama integration to interact with local LLMs.
- Allow any installed Ollama model to be used for chat (e.g., llama3, mistral). A minimal sketch follows this list.
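
As a rough sketch, the integration could look like this in Python using LangChain's `langchain-ollama` package (the package and class names reflect the Python integration; adapt as needed for your stack):

```python
# Minimal sketch: chat with a locally installed model through
# LangChain's Ollama integration. Assumes Ollama is running on its
# default address (http://localhost:11434).
from langchain_ollama import ChatOllama
from langchain_core.messages import HumanMessage

llm = ChatOllama(model="llama3")  # any installed model tag works
reply = llm.invoke([HumanMessage(content="Hello, who are you?")])
print(reply.content)
```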
✅ 2. Check if Ollama Model is Installed
Before processing a chat request:
- Check if the requested Ollama model exists on the local system.
- If not found, return:

```json
{
  "error": true,
  "message": "The Ollama model is not installed on your local system. Please install it before using."
}
```
Use Ollama's API (GET /api/tags) to check for installed models.
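
A minimal sketch of that check, assuming Ollama's default local address; `installed_models` and `is_model_installed` are hypothetical helper names, not part of the spec:

```python
# Sketch: read the installed-model list from Ollama's GET /api/tags
# endpoint. The response has the shape {"models": [{"name": ...}, ...]}.
import requests

OLLAMA_URL = "http://localhost:11434"  # assumed default Ollama address

def installed_models() -> list[str]:
    resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]

def is_model_installed(name: str) -> bool:
    # Tags usually carry a variant suffix (e.g. "llama3:latest"),
    # so match the bare name as well as the full tag.
    return any(tag == name or tag.split(":")[0] == name
               for tag in installed_models())
```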
✅ 3. Secure Access via Local Mind API Key
All routes must require the following HTTP header:

```
local-mind-api-key: <your-local-key>
```

- Validate this against known or configured secure keys.
- Deny access if the key is missing or invalid.
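
One way to enforce this, sketched as a FastAPI dependency (the framework choice and the `LOCAL_MIND_API_KEY` environment variable are assumptions):

```python
# Sketch: reject requests whose local-mind-api-key header is missing
# or does not match the configured key. FastAPI maps the parameter
# name local_mind_api_key to the "local-mind-api-key" header.
import os
from fastapi import Header, HTTPException

def require_api_key(local_mind_api_key: str = Header(...)) -> None:
    expected = os.environ.get("LOCAL_MIND_API_KEY")  # assumed config source
    if not expected or local_mind_api_key != expected:
        raise HTTPException(status_code=401,
                            detail="Missing or invalid local-mind-api-key")
```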
Ollama API Routes
Chat with Ollama
- Chat with a locally installed Ollama model using LangChain.
Headers:

```
local-mind-api-key: <your-local-key>
Content-Type: application/json
```
Request Body:

```json
{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "Hello, who are you?" }
  ]
}
```
Success Response:

```json
{
  "model": "llama3",
  "response": "Hello! I'm a helpful assistant running locally.",
  "timestamp": "2025-10-12T15:30:00Z"
}
```
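
A sketch of the route, reusing the hypothetical helpers from the sketches above (`require_api_key`, `is_model_installed`); the `POST /api/ollama/chat` path is an assumption, while the request and response shapes follow the spec:

```python
# Sketch: chat endpoint that verifies the model before invoking it.
from datetime import datetime, timezone
from fastapi import FastAPI, Depends
from pydantic import BaseModel
from langchain_ollama import ChatOllama

app = FastAPI()

class ChatRequest(BaseModel):
    model: str
    messages: list[dict]  # [{"role": "user", "content": "..."}, ...]

@app.post("/api/ollama/chat", dependencies=[Depends(require_api_key)])
def chat(req: ChatRequest):
    if not is_model_installed(req.model):
        return {"error": True,
                "message": "The Ollama model is not installed on your "
                           "local system. Please install it before using."}
    llm = ChatOllama(model=req.model)
    # LangChain chat models accept (role, content) tuples as messages.
    result = llm.invoke([(m["role"], m["content"]) for m in req.messages])
    return {"model": req.model,
            "response": result.content,
            "timestamp": datetime.now(timezone.utc).isoformat()}
```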
Fetch Ollama models
Fetch a list of all locally installed Ollama models.
Headers:

```
local-mind-api-key: <your-local-key>
```
Response:

```json
{
  "models": [
    "llama3",
    "mistral",
    "codellama:7b"
  ]
}
```
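
Sketched against the same assumed FastAPI app; the `GET /api/ollama/models` path is an assumption:

```python
# Sketch: list locally installed models via the helper above.
@app.get("/api/ollama/models", dependencies=[Depends(require_api_key)])
def list_models():
    return {"models": installed_models()}
```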
Check if a model is installed
Check if a specific Ollama model is installed locally before allowing usage.
🔐 Headers
```
local-mind-api-key: <your-local-key>   // Required
```
🌐 Endpoint
```
GET /api/ollama/check/:modelName
```
Success Response (if installed):

```json
{
  "model": "llama3",
  "installed": true
}
```
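
Here the path comes from the spec (FastAPI's `{model_name}` syntax replaces the Express-style `:modelName`); the sketch again reuses the hypothetical `is_model_installed` helper:

```python
# Sketch: report whether a specific model is installed locally.
@app.get("/api/ollama/check/{model_name}",
         dependencies=[Depends(require_api_key)])
def check_model(model_name: str):
    return {"model": model_name,
            "installed": is_model_installed(model_name)}
```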