I run Ollama locally with Llama 3 and DeepSeek Coder. Can I route these through OpenACP instead of paying for API keys? Privacy is a concern for my client's codebase.
Replies: 2 comments
OpenACP connects to agents, not models directly, so the model choice depends on which agent you use.

So yes: pick an agent that supports local models, and OpenACP will bridge it to your chat platform like any other agent. The privacy benefit is real; your code never leaves your machine.

Performance note: local models on consumer hardware are noticeably slower than cloud APIs for complex coding tasks. A 70B model on a 4090 is usable but not fast. For privacy-sensitive work, it's a worthwhile tradeoff.
Yes, you can! Ollama exposes an OpenAI-compatible API endpoint, so anything that can speak the OpenAI API can point at your local models instead.
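A minimal sketch of what talking to that endpoint can look like, using only the Python standard library. It assumes a default local Ollama install (OpenAI-compatible routes under `http://localhost:11434/v1`) and only builds the request; actually sending it needs `ollama serve` running:

```python
import json
import urllib.request

# Ollama's OpenAI-compatible base URL on a default local install.
OLLAMA_BASE = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request against local Ollama."""
    payload = {
        "model": model,  # e.g. "llama3" or "deepseek-coder"
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{OLLAMA_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Ollama ignores the key, but OpenAI-style clients expect one.
            "Authorization": "Bearer ollama",
        },
        method="POST",
    )

req = build_chat_request("deepseek-coder", "Explain this diff.")
# Uncomment to actually send (requires a running `ollama serve`):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client library works the same way: swap the base URL to the local endpoint and pass a dummy API key.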
For privacy-sensitive projects, this is a solid approach: everything stays on your machine, and no data leaves your network. Just keep in mind that local models may be slower than cloud APIs, depending on your hardware.

One thing to watch: context window size. DeepSeek Coder handles longer contexts better than Llama 3 for code tasks, so you might want to route different request types to different models based on that.
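That routing idea can be as simple as a pure function. This is a hypothetical rule, not anything OpenACP ships: the model names match the thread, but the character threshold is an illustrative assumption you'd tune for your hardware and models:

```python
# Route long or code-heavy prompts to DeepSeek Coder, everything else to
# Llama 3. The threshold is an illustrative assumption, not a measured limit.
CODE_MODEL = "deepseek-coder"
CHAT_MODEL = "llama3"
LONG_CONTEXT_CHARS = 8_000  # rough proxy for "long context"

def pick_model(prompt: str, is_code_task: bool) -> str:
    """Choose the local model better suited to this request."""
    if is_code_task or len(prompt) > LONG_CONTEXT_CHARS:
        return CODE_MODEL
    return CHAT_MODEL

pick_model("Refactor this function", is_code_task=True)   # -> "deepseek-coder"
pick_model("Summarize the meeting", is_code_task=False)   # -> "llama3"
```

Because the OpenAI-style payload carries the model name per request, this slots in wherever you build the request body.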