
Agently 4 🚀

Build production‑grade AI apps faster, with stable outputs and maintainable workflows.

English Introduction | 中文介绍



🔥 Latest Docs | 🚀 5‑minute Quickstart | 💡 Core Features


📚 Quick Links

🤔 Why Agently?

Many GenAI POCs fail in production not because models are weak, but because engineering control is missing:

Common challenge → How Agently helps

  • Output schema drifts, JSON parsing fails → Contract‑first output control with output() + ensure_keys
  • Workflows get complex and hard to maintain → TriggerFlow orchestration with to / if / match / batch / for_each
  • Multi‑turn state becomes unstable → Session & Memo with memory, summaries, and persistence strategies
  • Tool calls are hard to audit → Tool logs via extra.tool_logs
  • Switching models is expensive → OpenAICompatible unified model settings

Agently turns LLM uncertainty into a stable, testable, maintainable engineering system.

✨ Core Features

1) 📝 Contract‑first Output Control

Define the structure with output(), enforce critical keys with ensure_keys.

result = (
    agent
    .input("Analyze user feedback")
    .output({
        "sentiment": (str, "positive/neutral/negative"),
        "key_issues": [(str, "issue summary")],
        "priority": (int, "1-5, 5 is highest")
    })
    .start(ensure_keys=["sentiment", "key_issues[*]"])
)
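
With the contract above, start() is expected to return the parsed structure directly, so downstream code can index it like a normal dict. An illustrative sketch of the kind of result to expect (values are made up; the model fills them in):

# Illustrative result shape only; actual values depend on the model's response
{
    "sentiment": "negative",
    "key_issues": ["checkout flow is confusing", "pages load slowly"],
    "priority": 4
}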

2) ⚡ Structured Streaming (Instant)

Consume structured fields as they are generated.

response = (
    agent
    .input("Explain recursion and give 2 tips")
    .output({"definition": (str, "one sentence"), "tips": [(str, "tip")]})
    .get_response()
)

# "ui" below is a placeholder for your own front-end / rendering logic
for msg in response.get_generator(type="instant"):
    if msg.path == "definition" and msg.delta:
        ui.update_definition(msg.delta)
    if msg.wildcard_path == "tips[*]" and msg.delta:
        ui.add_tip(msg.delta)

3) 🧩 TriggerFlow Orchestration

Readable, testable workflows with branching and concurrency.

(
    flow.to(handle_request)
    .if_condition(lambda d: d.value["type"] == "query")
    .to(handle_query)
    .elif_condition(lambda d: d.value["type"] == "order")
    .to(check_inventory)
    .to(create_order)
    .end_condition()
)
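
The handlers above (handle_request, handle_query, check_inventory, create_order) are placeholders. A minimal sketch of how they could be defined, following the @flow.chunk pattern used in the Quickstart below; the return values are illustrative only:

from agently import TriggerFlow, TriggerFlowEventData

flow = TriggerFlow()

@flow.chunk
def handle_request(data: TriggerFlowEventData):
    # Normalize the incoming request into a dict the branches can inspect
    return {"type": "query", "text": data.value}

@flow.chunk
def handle_query(data: TriggerFlowEventData):
    return {"response": "Here is what I found..."}

@flow.chunk
def check_inventory(data: TriggerFlowEventData):
    return {"in_stock": True}

@flow.chunk
def create_order(data: TriggerFlowEventData):
    return {"order_id": "demo-001"}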

4) 🧠 Session & Memo (Multi‑turn Memory)

Quick / Lite / Memo modes with summaries and persistence strategies.

from agently import Agently
from agently.core import Session

agent = Agently.create_agent()
session = Session(agent=agent).configure(
    mode="memo",
    limit={"chars": 6000, "messages": 12},
    every_n_turns=2,
)
agent.attach_session(session)
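
Once the session is attached, later calls on the same agent are managed by it. A minimal multi-turn sketch, assuming the attached session carries earlier turns into later ones according to the configured mode:

# First turn
print(agent.input("My name is Ada and I'm evaluating your product.").start())
# Second turn: the attached session is expected to supply the earlier context
print(agent.input("What did I say my name was?").start())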

5) 🔧 Tool Calls + Logs

Tool selection and usage are logged in extra.tool_logs.

@agent.tool_func
def add(a: int, b: int) -> int:
    return a + b

response = agent.input("12+34=?").use_tool(add).get_response()
full = response.get_data(type="all")
print(full["extra"]["tool_logs"])

6) 🌐 Unified Model Settings (OpenAICompatible)

One config for multiple providers and local models.

from agently import Agently

Agently.set_settings(
    "OpenAICompatible",
    {
        "base_url": "https://api.deepseek.com/v1",
        "model": "deepseek-chat",
        "auth": "DEEPSEEK_API_KEY",
    },
)
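
Switching providers is then a configuration change rather than a code change. A sketch pointing the same setup at a local OpenAI-compatible endpoint (the URL, model name, and auth value below are illustrative placeholders, not required values):

Agently.set_settings(
    "OpenAICompatible",
    {
        # Example values only: point these at your own endpoint and model
        "base_url": "http://localhost:11434/v1",
        "model": "qwen2.5:7b",
        "auth": "EMPTY",
    },
)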

🚀 Quickstart

Install

pip install -U agently

Requirements: Python >= 3.10, recommended Agently >= 4.0.7.2

5‑minute example

1. Structured output

from agently import Agently

agent = Agently.create_agent()

result = (
    agent.input("Introduce Python in one sentence and list 2 advantages")
    .output({
        "introduction": (str, "one sentence"),
        "advantages": [(str, "advantage")]
    })
    .start(ensure_keys=["introduction", "advantages[*]"])
)

print(result)

2. Workflow routing

from agently import TriggerFlow, TriggerFlowEventData

flow = TriggerFlow()

@flow.chunk
def classify_intent(data: TriggerFlowEventData):
    text = data.value
    if "price" in text:
        return "price_query"
    if "feature" in text:
        return "feature_query"
    if "buy" in text:
        return "purchase"
    return "other"

@flow.chunk
def handle_price(_: TriggerFlowEventData):
    return {"response": "Pricing depends on the plan..."}

@flow.chunk
def handle_feature(_: TriggerFlowEventData):
    return {"response": "Our product supports..."}

(
    flow.to(classify_intent)
    .match()
    .case("price_query")
    .to(handle_price)
    .case("feature_query")
    .to(handle_feature)
    .case_else()
    .to(lambda d: {"response": "What would you like to know?"})
    .end_match()
    .end()
)

print(flow.start("How much does it cost?"))

✅ Is Your App Production‑Ready? — Release Readiness Checklist

Drawn from the experience of teams shipping real projects with Agently, this production readiness checklist helps reduce common risks before release.

Area: Check → Recommended Practice

  • 📝 Output Stability: Are key interfaces stable? → Define schemas with output() and lock critical fields with ensure_keys.
  • ⚡ Real‑time UX: Need updates while generating? → Consume type="instant" structured streaming events.
  • 🔍 Observability: Tool calls auditable? → Inspect extra.tool_logs for full arguments and results.
  • 🧩 Workflow Robustness: Complex flows fully tested? → Unit test each TriggerFlow branch and concurrency limit with expected outputs.
  • 🧠 Memory & Context: Multi‑turn experience consistent? → Define Session/Memo summary, trimming, and persistence policies.
  • 📄 Prompt Management: Can logic evolve safely? → Version and configure prompts to keep changes traceable.
  • 🌐 Model Strategy: Can you switch or downgrade models? → Centralize settings with OpenAICompatible for fast provider switching.
  • 🚀 Performance & Scale: Can it handle concurrency? → Validate async performance in real web‑service scenarios.
  • 🧪 Quality Assurance: Regression tests complete? → Create fixed inputs with expected outputs for core scenarios (see the test sketch after this list).
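
One way to make TriggerFlow branches regression-testable is to keep routing logic in a plain function that the @flow.chunk wrapper delegates to. A minimal sketch using pytest; route_intent is a hypothetical helper extracted from the Quickstart's classify_intent, not an Agently API:

import pytest

def route_intent(text: str) -> str:
    # Pure routing logic, kept separate from the flow so it can be tested directly
    if "price" in text:
        return "price_query"
    if "feature" in text:
        return "feature_query"
    if "buy" in text:
        return "purchase"
    return "other"

@pytest.mark.parametrize(
    "text, expected",
    [
        ("How much does the price plan cost?", "price_query"),
        ("I want to buy one now", "purchase"),
        ("Tell me a joke", "other"),
    ],
)
def test_route_intent(text, expected):
    # Fixed inputs with expected outputs, as suggested in the checklist
    assert route_intent(text) == expected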

📈 Who Uses Agently to Solve Real Problems?

"Agently helped us turn evaluation rules into executable workflows and keep key scoring accuracy at 75%+, significantly improving bid‑evaluation efficiency." — Project lead at a large energy SOE

"Agently enabled a closed loop from clarification to query planning to rendering, reaching 90%+ first‑response accuracy and stable production performance." — Data lead at a large energy group

"Agently’s orchestration and session capabilities let us ship a teaching assistant for course management and Q&A quickly, with continuous iteration." — Project lead at a university teaching‑assistant initiative

Your project can be next.
📢 Share your case on GitHub Discussions →

❓ FAQ

Q: How is Agently different from LangChain or LlamaIndex?
A: Agently is built for production. It focuses on stable interfaces (contract‑first outputs), readable/testable orchestration (TriggerFlow), and observable tool calls (tool_logs). It’s a better fit for teams that need reliability and maintainability after launch.

Q: Which models are supported? Is switching expensive?
A: With OpenAICompatible, you can connect OpenAI, Claude, DeepSeek, Qwen and most OpenAI‑compatible endpoints, plus local models like Llama/Qwen. The same business code can switch models without rewrites, reducing vendor lock‑in.

Q: What’s the learning curve? Where should I start?
A: The core API is straightforward—you can run your first agent in minutes. Start with Quickstart, then dive into Output Control and TriggerFlow.

Q: How do I deploy an Agently‑based service?
A: Agently doesn’t lock you into a specific deployment path. It provides async APIs and FastAPI examples. The FastAPI integration example covers SSE, WebSocket, and standard POST.
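
As one concrete illustration, here is a minimal sketch of a standard POST endpoint wrapping an agent call; the FastAPI setup and endpoint names are assumptions for illustration, and the official integration example in the docs is more complete (including SSE and WebSocket):

from fastapi import FastAPI
from pydantic import BaseModel

from agently import Agently

app = FastAPI()
agent = Agently.create_agent()

class AskRequest(BaseModel):
    question: str

@app.post("/ask")
def ask(req: AskRequest):
    # Sync handler for simplicity; FastAPI runs it in a threadpool
    return (
        agent.input(req.question)
        .output({"answer": (str, "concise answer")})
        .start(ensure_keys=["answer"])
    )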

Q: Do you offer enterprise support?
A: Yes. The core framework in this repository remains open‑source under Apache 2.0. Enterprise support, private extensions, managed services, and SLA-based collaboration are provided under separate commercial agreements. Contact us via the community.

Q: What is open-source vs enterprise in Agently?
A: The open-source core includes the general framework and public capabilities in this repository. Enterprise offerings (for example private extension packs, advanced governance modules, private deployment support, and SLA services) are delivered separately under commercial terms.

🧭 Docs Guide (Key Paths)

🤝 Community

📄 License

Agently follows an open-core + commercial extension model:

  • Open-source core in this repository: Apache 2.0
  • Trademark usage policy: TRADEMARK.md
  • Contributor rights agreement: CLA.md
  • Enterprise extensions and commercial services: provided under separate commercial agreements

Start building your production‑ready AI apps →
pip install -U agently

Questions? Read the docs or join the community.
