A toy example of how absurdly easy it is to write a working agent. I wrote this to highlight a few things that are easy to miss when you only interact with LLMs through polished UIs. The core architecture is smaller than this README.
At the core, every agent is just:
- An LLM
- A loop
- Some tools the model can call
- Context
The model handles the rest:
- deciding when to act
- calling tools
- fixing arguments
- retrying
- planning
- analyzing results
- deciding when to stop
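That whole loop fits in a few lines. Here is a minimal sketch with a scripted stand-in for the model call so it runs without an API key; a real agent would call the OpenAI API where `call_model` is, and the `echo` tool is purely illustrative:

```python
# Stand-in for the LLM call. It "decides" to call one tool, then answers.
def call_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "echo", "args": {"text": "hello"}}
    return {"final": "The tool said: " + messages[-1]["content"]}

def echo(text):
    return text.upper()

TOOLS = {"echo": echo}

def run_agent(user_input):
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = call_model(messages)                      # model decides what to do
        if "final" in reply:                              # no tool call -> done
            return reply["final"]
        result = TOOLS[reply["tool"]](**reply["args"])    # execute the chosen tool
        messages.append({"role": "tool", "content": result})  # feed result back

print(run_agent("say hello"))  # → The tool said: HELLO
```

Everything else — retries, planning, knowing when to stop — happens inside the model, not in this code.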
You hand the model a list of functions:
- `ping`
- `curl`
- `ascii_art`
- anything else you expose
The LLM chooses when to call them and what parameters to use. If a tool returns something unexpected, it tries another approach. The LLM does the reasoning, the orchestration, and the reflection on its own.
Toss in a few tools and `subprocess` + stdin + stdout take care of the rest:
- `read_file`
- `write_file`
- `run`
- `test`
- `search`
- `refactor`
You don't even need to fork VS Code; just run it in your terminal.
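Wiring one of those tools really is just `subprocess`. A sketch of a generic command-running tool (the `run_tool` helper is illustrative, not from the script):

```python
import subprocess
import sys

def run_tool(cmd):
    """Run a command; hand stdout (or stderr on failure) back to the model as text."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.stdout if proc.returncode == 0 else proc.stderr

# The model would choose the command; here we run the Python interpreter itself.
print(run_tool([sys.executable, "-c", "print(2 + 2)"]).strip())  # → 4
```

The model never touches the process directly: it only ever sees the text you hand back.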
LLMs are stateless. All their memory lives in the loop, which feeds the model the prior messages and reasoning context on every call. The agent feels stateful only because you keep resending the entire conversation back to the model.
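Concretely: nothing persists between calls except the list you resend. A toy illustration, where the `fake_model` stand-in just reports how much history it was shown:

```python
history = []

def chat_turn(user_text, model):
    history.append({"role": "user", "content": user_text})
    reply = model(history)  # the ENTIRE history goes over the wire every turn
    history.append({"role": "assistant", "content": reply})
    return reply

# Stand-in for the LLM: it only "remembers" what it is shown right now.
def fake_model(messages):
    return f"I was shown {len(messages)} messages"

print(chat_turn("hi", fake_model))     # → I was shown 1 messages
print(chat_turn("again", fake_model))  # → I was shown 3 messages
```

Drop the `history` list and the "agent" instantly forgets everything.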
Features:
- Interactive conversation with GPT-5
- Tools:
  - `ping`: Ping hosts to check connectivity
  - `curl`: Execute curl commands to validate HTTPS or fetch headers
  - `ascii`: Generate simple ASCII art
- Conversation context management
- Function calling with automatic tool execution
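"Function calling" means each tool is described to the model as a JSON schema. The `ping` entry would look roughly like this in the OpenAI tools format (the description and parameter names here are illustrative; check `simple_agent.py` for the exact ones):

```python
import json

PING_TOOL = {
    "type": "function",
    "function": {
        "name": "ping",
        "description": "Ping a host to check connectivity",
        "parameters": {  # a standard JSON Schema object
            "type": "object",
            "properties": {"host": {"type": "string"}},
            "required": ["host"],
        },
    },
}

print(json.dumps(PING_TOOL, indent=2))
```

The model never runs anything itself; it only emits a name plus JSON arguments matching this schema, and your loop does the rest.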
Requirements:
- Python 3.10 or higher
- OpenAI API key
- `uv` package manager (or pip)
Installation:
1. Install dependencies.

   Using `uv`:

   ```
   uv sync
   ```

   Or using pip:

   ```
   pip install -r requirements.txt
   ```

2. Set your OpenAI API key in `simple_agent.py`:
   - Open `simple_agent.py`
   - Replace the empty string on line 6 with your API key:

   ```python
   client = OpenAI(api_key="your-api-key-here")
   ```
Run the agent:

```
python simple_agent.py
```

The agent will process your input and can execute tools as needed. For example, you can ask it to:
- Ping a host: "Ping google.com"
- Check HTTPS headers: "Check the headers for https://example.com"
- Create simple ASCII art
You can also chat with it like a standard assistant, since the script preserves normal assistant output:
- "Tell me a joke"
How it works:
1. The agent maintains a conversation context that includes user messages, assistant responses, and tool-call results.
2. When you provide input, the agent sends it to GPT-5 along with the available tools.
3. If GPT-5 decides to use a tool, the agent executes it and adds the result back to the context.
4. The process repeats until GPT-5 provides a final response without tool calls.
5. The final response is displayed to the user.
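Putting those steps together, this is roughly what the context looks like after one tool round trip, using the message shapes the OpenAI chat API uses (the ping output below is made up):

```python
context = [
    {"role": "user", "content": "Ping google.com"},
    # The model answers with a tool call instead of text.
    {"role": "assistant", "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "ping", "arguments": '{"host": "google.com"}'}},
    ]},
    # Your loop runs the tool and appends the result, keyed to the call id.
    {"role": "tool", "tool_call_id": "call_1",
     "content": "1 packets transmitted, 1 received"},
    # With the result in context, the model produces a final text answer.
    {"role": "assistant", "content": "google.com is reachable."},
]
```

Every turn, this entire list is resent; that is the agent's whole memory.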