Warning
These agents are for demonstration purposes only and are not suitable for production use.
In addition to supporting OpenAI API compatible agents, Sentient Chat supports a custom, open source event system for agent responses. These events can be rendered in Sentient Chat to provide a richer user experience. This is particularly useful for streaming responses from an AI agent, when you might want to show the agent's work while the response is being generated rather than making the user wait for the final response. Documentation for the event system is not yet publicly available, but it is coming soon.
This repo contains examples of simple agents that serve Sentient Chat events using the Sentient Agent Framework. The first example is a search agent. It runs a Flask server that accepts queries and streams the agent's response events to a client using Server-Sent Events (SSE). The most important part of the example is the agent.py file, which demonstrates how to create and serve Sentient Chat events.
Note
A Python package that provides an agent framework for building agents that serve Sentient Chat events is currently in beta and is available on PyPI. The framework/package repo can be found here.
To understand how to create and serve Sentient Chat events, review agent.py. A ResponseHandler is responsible for creating the events to send to the Sentient Chat client. It abstracts away the event system and provides a simple interface for sending events to the client. It is initialized with your agent's Sentient Chat Identity and with a Hook that is used to direct the events to the client.
pip install sentient-agent-framework
A ResponseHandler is initialized with an agent's Identity and a Hook. A new ResponseHandler is created for every agent query. See agent.py line 33:
response_handler = DefaultResponseHandler(self._identity, DefaultHook(self._response_queue))
Once initialized, the ResponseHandler is used to create events that are emitted using the Hook.
Text events are used to send single, complete messages to the client. See agent.py lines 36-38:
await response_handler.emit_text_block(
"PLAN", "Rephrasing user query..."
)
JSON events are used to send JSON objects to the client. See agent.py lines 50-52:
await response_handler.emit_json(
"SOURCES", {"results": search_results["results"]}
)
Error events are used to send error messages to the client (no example in agent.py):
await response_handler.emit_error(
"ERROR", {"message": "An error occurred"}
)
At the end of a response, response_handler.complete() is called to signal the end of the response (this will emit a DoneEvent using the Hook). See agent.py line 65:
await response_handler.complete()
To stream a longer response one chunk at a time, use the response_handler.create_text_stream method. This returns a StreamEventEmitter that can be used to stream text to the client using the emit_chunk method. See agent.py lines 59-63:
final_response_stream = response_handler.create_text_stream(
"FINAL_RESPONSE"
)
for chunk in self.__process_search_results(search_results["results"]):
    await final_response_stream.emit_chunk(chunk)
At the end of the stream, final_response_stream.complete() is called to signal the end of the stream (this will emit a TextChunkEvent with is_complete=True). See agent.py line 64:
await final_response_stream.complete()
Note
These instructions are for Unix-based systems (i.e. macOS, Linux). Before you proceed, make sure that you have installed Python and pip. If you have not, follow these instructions to do so.
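To see how the event calls fit together, the full sequence for one query (a text block, a JSON payload, streamed chunks, then completion) can be sketched with a stand-in handler that simply records events in order. This is illustrative only: the method names mirror those used in agent.py, but RecordingResponseHandler and RecordingTextStream below are hypothetical stand-ins, not the framework's DefaultResponseHandler or StreamEventEmitter.

```python
import asyncio

class RecordingResponseHandler:
    """Hypothetical stand-in for the framework's ResponseHandler:
    records events in a list instead of emitting them through a Hook."""

    def __init__(self):
        self.events = []

    async def emit_text_block(self, event_name, content):
        self.events.append((event_name, content))

    async def emit_json(self, event_name, payload):
        self.events.append((event_name, payload))

    async def complete(self):
        self.events.append(("DONE", None))

    def create_text_stream(self, event_name):
        return RecordingTextStream(event_name, self.events)

class RecordingTextStream:
    """Hypothetical stand-in for the framework's StreamEventEmitter."""

    def __init__(self, event_name, events):
        self._event_name = event_name
        self._events = events

    async def emit_chunk(self, chunk):
        self._events.append((self._event_name, chunk))

    async def complete(self):
        # The real framework emits a TextChunkEvent with is_complete=True here.
        self._events.append((self._event_name, "<is_complete>"))

async def respond(handler):
    # Same order of calls as the search agent walkthrough above.
    await handler.emit_text_block("PLAN", "Rephrasing user query...")
    await handler.emit_json("SOURCES", {"results": ["..."]})
    stream = handler.create_text_stream("FINAL_RESPONSE")
    for chunk in ["Lionel Messi ", "is a footballer."]:
        await stream.emit_chunk(chunk)
    await stream.complete()
    await handler.complete()

handler = RecordingResponseHandler()
asyncio.run(respond(handler))
for event in handler.events:
    # Six (event_name, payload) tuples, in emission order.
    print(event)
```

In the real framework, each of these calls emits an event through the Hook instead of appending to a list; only the ordering of calls is meant to match agent.py.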
1. Create the .env file by copying the contents of .env.example. This is where you will store all of your agent's credentials:
cp .env.example .env
2. Add your Fireworks API key to the .env file (you can also use any other OpenAI compatible inference provider).
3. Add your Tavily API key to the .env file.
4. Create a virtual environment:
python3 -m venv .venv
5. Activate the virtual environment:
source .venv/bin/activate
6. Install the dependencies:
pip install -r requirements.txt
7. Start the server:
python3 flask_sse_server.py
8. Use a tool like cURL or Postman to query the server. It exposes a single /query endpoint that can be used to query the agent:
curl --location --request GET 'http://127.0.0.1:5000/query' \
--header 'Content-Type: application/json' \
--data '{
"query": "Who is Lionel Messi?"
}'
Expected output:
data: content_type=<EventContentType...
data: content_type=<EventContentType...
...
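The server's reply is a Server-Sent Events stream, where each event's payload arrives on a data: line and events are separated by blank lines. A minimal sketch of splitting such a stream into per-event payloads is below; the raw_stream text is illustrative only (the real server emits the framework's serialized event objects, truncated in the expected output above).

```python
def parse_sse(raw: str):
    """Split a raw Server-Sent Events stream into the payload of each
    data: line. SSE separates events with blank lines."""
    events = []
    for block in raw.split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data:"):
                events.append(line[len("data:"):].strip())
    return events

# Illustrative payloads, not the framework's actual serialization.
raw_stream = (
    "data: content_type=... event_name=PLAN\n\n"
    "data: content_type=... event_name=FINAL_RESPONSE\n\n"
)
print(parse_sse(raw_stream))
```

A real client would read the response incrementally (for example with a streaming HTTP request) and apply the same splitting logic per chunk rather than on the whole body at once.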
