Offline Ollama Assistant

offline-ollama-assistant is a fully offline Python project that turns your local Ollama models into a configurable coding assistant and general-purpose AI tool.

What It Can Do

  • Discover local Ollama models automatically.
  • Route prompts to different model roles using JSON settings.
  • Read, summarize, and edit local files with built-in guardrails.
  • Extract text from PDF and DOCX files when optional dependencies are installed.
  • Keep prompts, model selection, permissions, and routing rules in simple config files.
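The JSON-driven routing idea can be pictured with a small sketch. The `route_prompt` helper and the rule keys below are hypothetical illustrations, not the project's actual rule format (which lives in settings/settings.json):

```python
# Hypothetical sketch of keyword-based prompt routing driven by JSON rules.
# The real project's routing schema in settings/settings.json may differ.
import json

RULES_JSON = """
{
  "routing": [
    {"role": "coding", "keywords": ["refactor", "bug", "function"]},
    {"role": "docs",   "keywords": ["summarize", "pdf", "docx"]}
  ],
  "default_role": "general"
}
"""

def route_prompt(prompt: str, rules: dict) -> str:
    """Return the first role whose keywords appear in the prompt."""
    lowered = prompt.lower()
    for rule in rules["routing"]:
        if any(word in lowered for word in rule["keywords"]):
            return rule["role"]
    return rules["default_role"]

rules = json.loads(RULES_JSON)
print(route_prompt("Please refactor this function", rules))  # coding
print(route_prompt("What is the capital of France?", rules))  # general
```

Each role can then be mapped to a different local model, so a coding question and a document summary request need not hit the same model.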

Prerequisites

Before you start, make sure you have:

  • Python 3.10 or newer
  • Ollama installed and running locally
  • At least one Ollama model pulled on your machine

Example model install:

ollama pull llama3.2:1b

Check that Ollama is running:

ollama list
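Programmatic model discovery uses Ollama's HTTP API: a GET request to `http://127.0.0.1:11434/api/tags` returns the locally pulled models. The sketch below parses a sample payload so it runs without a live server; a real call would fetch the URL with `urllib.request.urlopen` instead. The second model name is only a sample value:

```python
# Sketch of discovering local models from Ollama's /api/tags response.
# A live call would be:
#   urllib.request.urlopen("http://127.0.0.1:11434/api/tags")
# Here we parse a sample payload so the example runs offline.
import json

sample_response = json.dumps({
    "models": [
        {"name": "llama3.2:1b"},
        {"name": "qwen2.5-coder:7b"},  # sample second model
    ]
})

def model_names(payload: str) -> list[str]:
    """Extract the model names from an /api/tags JSON payload."""
    return [m["name"] for m in json.loads(payload)["models"]]

print(model_names(sample_response))  # ['llama3.2:1b', 'qwen2.5-coder:7b']
```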

Setup

Follow these steps to set up the project on your machine.

1. Clone the project

git clone https://github.com/abhinav25232354/CodingAgent.git
cd CodingAgent

2. Create a virtual environment

Windows PowerShell:

python -m venv .venv
.venv\Scripts\Activate.ps1

macOS/Linux:

python3 -m venv .venv
source .venv/bin/activate

3. Install the project

Basic install:

pip install -e .

Install with document support for PDF and DOCX (the quotes keep zsh from treating the brackets as a glob pattern):

pip install -e ".[docs]"

4. Confirm your local model is available

If you have not already downloaded a model, pull one now:

ollama pull llama3.2:1b

Then verify:

ollama list

5. Review the configuration

Open settings/settings.json and check these values:

  • ollama.base_url: should usually stay as http://127.0.0.1:11434
  • model_selection.active_model: set this to a model you already pulled
  • permissions.read_roots: directories the assistant can read
  • permissions.write_roots: directories the assistant can modify

Default example:

"model_selection": {
  "mode": "fixed",
  "active_model": "llama3.2:1b",
  "allow_multi_model_fallback": false,
  "default_role": "general"
}
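A common first-run failure is naming a model in active_model that was never pulled. A small validation sketch (the `check_active_model` helper is hypothetical, not part of the project's config code) shows the check:

```python
# Hypothetical check that model_selection.active_model matches a pulled
# model; the project's actual config loading may differ.
import json

settings_json = """
{
  "model_selection": {
    "mode": "fixed",
    "active_model": "llama3.2:1b",
    "allow_multi_model_fallback": false,
    "default_role": "general"
  }
}
"""

def check_active_model(settings: dict, installed: set[str]) -> str:
    """Return the active model name, or raise if it is not installed."""
    model = settings["model_selection"]["active_model"]
    if model not in installed:
        raise ValueError(
            f"active_model {model!r} is not pulled; run: ollama pull {model}"
        )
    return model

settings = json.loads(settings_json)
print(check_active_model(settings, {"llama3.2:1b"}))  # llama3.2:1b
```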

Run The Assistant

You can start the assistant in either of these ways:

python main.py

or:

offline-assistant

Create Your Own Coding Environment

If you want to use this project as your own local coding assistant setup, use this flow:

1. Create a dedicated workspace

Make a folder for the projects you want the assistant to work with.

Example:

DevWorkspace/
|-- CodingAgent/
|-- my-project-1/
|-- my-project-2/

2. Decide what the assistant can access

In settings/settings.json, update:

  • read_roots to include folders the assistant may inspect
  • write_roots to include folders the assistant may edit

If you want the assistant to work only inside this repository, keep both as ".".

If you want it to help inside another local project, point those settings to that project folder.
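The guardrail idea behind read_roots and write_roots can be sketched as a path containment check: a file is accessible only if it resolves inside one of the configured roots. The `is_allowed` helper below is a hypothetical illustration, not the project's actual permission code:

```python
# Hypothetical guardrail sketch: a path is allowed only if it resolves
# inside one of the configured root directories.
from pathlib import Path

def is_allowed(path: str, roots: list[str]) -> bool:
    """Return True if path resolves inside any of the given roots."""
    target = Path(path).resolve()
    for root in roots:
        try:
            target.relative_to(Path(root).resolve())
            return True
        except ValueError:
            continue  # not under this root; try the next one
    return False

write_roots = ["."]
print(is_allowed("settings/settings.json", write_roots))  # True
print(is_allowed("..", write_roots))  # False: parent dir is outside the root
```

Resolving paths before comparing them defends against `..` tricks that would otherwise escape the configured roots.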

3. Pick a model for your use case

Suggested starting point:

  • Small local setup: llama3.2:1b
  • Stronger coding help: choose a larger coding-capable Ollama model you already have installed

Set the chosen model in:

settings/settings.json -> model_selection.active_model

4. Adjust prompts and behavior

You can customize:

  • settings/system_prompts.json for assistant behavior
  • settings/settings.json for routing, permissions, and tool access

This lets you create your own coding environment for:

  • code explanation
  • refactoring help
  • local file editing
  • document summarization
  • offline project assistance

5. Start the assistant inside the project you want to work on

Activate your virtual environment, make sure Ollama is running, and launch the assistant.

Recommended First-Time Checklist

Use this checklist if you are setting up for the first time:

  1. Install Python 3.10+
  2. Install Ollama
  3. Pull at least one model with ollama pull <model-name>
  4. Create and activate a virtual environment
  5. Run pip install -e .
  6. Optionally run pip install -e ".[docs]"
  7. Update settings/settings.json
  8. Run python main.py

Project Structure

.
|-- pyproject.toml
|-- README.md
|-- main.py
|-- settings/
|   |-- settings.json
|   `-- system_prompts.json
|-- src/
|   `-- offline_assistant/
|       |-- __init__.py
|       |-- assistant.py
|       |-- cli.py
|       |-- config.py
|       |-- models.py
|       |-- ollama_client.py
|       |-- prompting.py
|       |-- routing.py
|       |-- utils.py
|       `-- tools/
|           |-- __init__.py
|           |-- base.py
|           |-- documents.py
|           |-- files.py
|           |-- registry.py
|           |-- textops.py
|           `-- web.py
`-- examples/
    |-- sample_session.txt
    `-- tasks.md

Main Configuration Files

settings/settings.json

Controls:

  • Ollama connection
  • model selection
  • routing rules
  • read/write permissions
  • tool availability
  • generation settings

settings/system_prompts.json

Controls the assistant's behavior and system prompt templates.
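As an illustration only, a role-to-prompt mapping in this file might look like the following; the actual keys and structure shipped with the project may differ:

```json
{
  "general": "You are a helpful offline assistant.",
  "coding": "You are a careful coding assistant. Explain changes before editing files."
}
```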

Notes

  • The assistant works even if you only have one local model.
  • Document extraction features need pip install -e .[docs].
  • The project is designed for local, offline usage with Ollama running on your machine.
