Spike/PoC utilizing local LLM to enable better understanding OpenTDF #239

@jrschumacher

Description

As we approach running a Spike/PoC, we want to break it down into steps.

Hypothesis

We believe that if we can embed a CPU-based LLM into the CLI, we can enable users to get tailored support for running the OpenTDF platform. This support will enable them to deploy and administer the platform quickly without needing specific guidance from a human.

The benefit of this approach is that it enables people with limited knowledge to quickly learn a process without investing vast quantities of time reading or scouring resources. This is especially true for platforms with limited documentation and/or examples that may not fit the exact problem at hand. Additionally, this approach satisfies environmental constraints such as air-gapped environments, need-to-know limitations, and limited connectivity.

Solution

Implement an LLM solution based on the work of https://github.com/ollama/ollama to load a user-provided, pre-installed model.

Approach

  1. Using langchaingo and ollama, get a chat interface working in the CLI
  2. Implement some simple prompt engineering to focus the LLM
  3. Investigate RAG with an embedded vector DB
