Build LlamaIndex Workflow #11
Description
One of the issues in our results generally, and in the revision loop in particular, is that we are not taking advantage of any memory. In LlamaIndex, memory is a component of an agent. But none of the built-in agents can provide the sort of multimodal features that we want. They also tend to issue query tool calls that are too general relative to the user's input question.
Additionally, we don't necessarily need automated reasoning. Instead we will break down our steps into a Workflow - which we love because it's based on async message passing and not a DAG.
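To make the message-passing idea concrete, here is a minimal stdlib-only sketch of the pattern (not the actual LlamaIndex Workflow API): each step consumes one event type and emits another, so control flow follows events rather than a fixed DAG. The event and step names are illustrative, not from our codebase.

```python
# Hypothetical sketch of event-driven steps; in LlamaIndex these would be
# @step-decorated methods on a Workflow subclass.
import asyncio
from dataclasses import dataclass

@dataclass
class QueryEvent:
    question: str

@dataclass
class DraftEvent:
    answer: str

@dataclass
class StopEvent:
    result: str

async def retrieve_step(ev: QueryEvent) -> DraftEvent:
    # In the real workflow this step would query the index; stubbed here.
    return DraftEvent(answer=f"draft for: {ev.question}")

async def revise_step(ev: DraftEvent) -> StopEvent:
    # A revision pass would call the LLM; here we just transform the draft.
    return StopEvent(result=ev.answer.upper())

async def run(question: str) -> str:
    ev = await retrieve_step(QueryEvent(question))
    done = await revise_step(ev)
    return done.result

result = asyncio.run(run("which feature changed?"))
```

Because each step only declares which event it accepts, inserting a new step (say, a memory-lookup pass) means adding an event type, not rewiring a graph.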
Why do we need memory?
We don't need to keep retrieving the same nodes when we're analyzing a particular plot. Also, we want to balance revising answers as we drill down with remembering the correct answers so far. As mentioned, we currently do not converge on correct answers fast enough: when we do get some information about the current belief state, we don't know which features have already been changed, or, perhaps more importantly, why they were changed.
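One candidate shape for that memory is a simple log of feature revisions with their reasons, which the revision loop can consult before re-litigating a settled answer. This is a sketch under assumed names (`FeatureRevision`, `RevisionMemory` are hypothetical, not an existing LlamaIndex component):

```python
# Hypothetical revision memory: record which features were changed and why.
from dataclasses import dataclass, field

@dataclass
class FeatureRevision:
    feature: str
    old_value: str
    new_value: str
    reason: str

@dataclass
class RevisionMemory:
    revisions: list = field(default_factory=list)

    def record(self, rev: FeatureRevision) -> None:
        self.revisions.append(rev)

    def already_revised(self, feature: str) -> bool:
        # Lets the loop skip features whose answers are already settled.
        return any(r.feature == feature for r in self.revisions)

    def reasons_for(self, feature: str) -> list:
        # Surfacing the *why* is what keeps later passes from undoing fixes.
        return [r.reason for r in self.revisions if r.feature == feature]

memory = RevisionMemory()
memory.record(FeatureRevision("y_scale", "linear", "log",
                              "tick spacing is exponential"))
```

A retrieved-node cache keyed by plot could sit alongside this, so steps analyzing the same plot reuse nodes instead of querying again.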
To Do
- determine type(s) of memory to implement
- sketch Workflow using memory
- enable better passing of images between steps
- move sketch from notebook into modules
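For the image-passing item, one option is to attach the image to an event payload once and let downstream steps reuse it, rather than re-loading or re-encoding at every step. A minimal sketch with hypothetical names (`ImagePayload` is illustrative):

```python
# Hypothetical image payload that rides along on workflow events.
import base64
from dataclasses import dataclass

@dataclass(frozen=True)
class ImagePayload:
    name: str
    b64: str  # base64-encoded bytes, ready for a multimodal LLM call

    @classmethod
    def from_bytes(cls, name: str, raw: bytes) -> "ImagePayload":
        # Encode once at ingestion; every later step reuses the same string.
        return cls(name=name, b64=base64.b64encode(raw).decode("ascii"))

    def to_bytes(self) -> bytes:
        return base64.b64decode(self.b64)

payload = ImagePayload.from_bytes("plot.png", b"\x89PNG...")
roundtrip = payload.to_bytes()
```

Keeping the payload frozen makes it safe to share between concurrently running steps.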