SAGE (Streaming-Augmented Generative Execution) 是一个高性能、模块化的 AI 推理框架生态系统,通过数据流抽象实现透明、可扩展的 LLM 驱动系统。
SAGE is a high-performance, modular AI inference framework ecosystem that enables transparent, scalable LLM-powered systems through dataflow abstractions.
```
L1  sage-common
      ↓
L2  sage-platform · sageFlownet
      ↓
L3  sage-kernel · sage-libs
      ↓
L4  sage-middleware
      ↓
L5  sage-cli · sage-dev-tools · sage-studio
```
| 模块 / Module | 说明 / Description |
| --- | --- |
| ⚙️ sage-common | 提供配置、日志、协议、通用组件等底座能力,是整个 SAGE 生态的依赖起点。 Provides foundational config, logging, protocols, and shared components; the dependency root of the entire SAGE ecosystem. |
| ⚙️ sage-platform | 队列、存储、服务抽象与运行基础设施接口层。 Queue, storage, and service abstractions as the interface layer for runtime infrastructure. |
| ⚙️ sageFlownet | 分布式通信与执行底座,作为上层运行时的重要平台能力。 Distributed communication and execution substrate serving as a platform capability for upper layers. |
| ⚙️ sage-kernel | 流式运行时、调度器、Flow DSL、容错与 RPC。 Streaming runtime, scheduler, Flow DSL, fault tolerance, and RPC. |
| ⚙️ sage-libs | 算法接口与实现集合(Agentic / RAG / Eval / Intent)。 Algorithm interfaces and implementations (Agentic / RAG / Eval / Intent). |
- Agentic / ToolUse: sage-agentic, sage-agentic-tooluse, sage-agentic-tooluse-sias, sage-agentic-tooluse-benchmark
- RAG / Refiner / Data: sage-rag, sageRefiner, sageData
- Eval / Intent / Safety: sage-eval, sage-intent, sage-safety, sage-privacy
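To give a feel for the tool-use pattern these agentic packages target, here is a minimal loop in plain Python. This is an illustrative sketch only — `TOOLS`, `fake_model`, and `agent` are hypothetical names, not the sage-agentic API:

```python
# Minimal tool-use loop: a stand-in "model" picks a tool by name, the runtime
# executes it, and the observation is fed back until the model answers directly.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def fake_model(question, observation):
    # Stand-in for an LLM policy: first requests a tool call, then answers.
    if observation is None:
        return ("call", "add", (2, 3))
    return ("answer", f"The result is {observation}")

def agent(question):
    observation = None
    for _ in range(5):  # cap iterations to avoid infinite loops
        action = fake_model(question, observation)
        if action[0] == "answer":
            return action[1]
        _, name, args = action
        observation = TOOLS[name](*args)  # execute the requested tool
    return "gave up"

print(agent("what is 2+3?"))  # → The result is 5
```

Real agentic stacks replace `fake_model` with an LLM call and add schema validation around tool arguments; the control flow stays the same.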
运行时服务组件层:向量数据库、记忆后端、联网算子等。
Runtime-bound service layer: vector DB, memory backends, and networked operators.
- Vector DB / ANNS: sageVDB, sage-anns, CANDOR-Bench
- Stream / TSDB / Memory: sageFlow, sageTSDB, neuromem
- Benchmarks: sage-benchmark, sage-memory-benchmark, sage-rag-benchmark, sage-refiner-benchmark
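To illustrate the core operation a vector store such as sageVDB accelerates, here is a brute-force cosine-similarity top-k search in plain Python. This is a sketch of the concept only, not sageVDB's actual API — real ANNS engines use index structures (e.g. graphs or quantization) to avoid scanning every vector:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    # index: list of (doc_id, vector); returns the k nearest ids by cosine similarity
    scored = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

index = [
    ("doc-a", [1.0, 0.0, 0.0]),
    ("doc-b", [0.9, 0.1, 0.0]),
    ("doc-c", [0.0, 1.0, 0.0]),
]
print(top_k([1.0, 0.05, 0.0], index, k=2))  # → ['doc-a', 'doc-b']
```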
| 模块 / Module | 说明 / Description |
| --- | --- |
| ⚙️ sage-cli | 统一命令行入口,连接平台能力与应用场景。 Unified CLI entry point connecting platform capabilities and application scenarios. |
| ⚙️ sage-dev-tools | SAGE 开发工具链(质量检查、测试、维护、报告)。 SAGE developer toolchain for quality checks, testing, maintenance, and reporting. |
- 应用体验与入口 | App experience & entry points: sage-studio, sage-edge
- 开发与发布工具 | Development & release tools: sage-pypi-publisher, sage-github-manager, sage-team-info
- 文档与学习资源 | Docs & learning resources: SAGE, sage-docs, sage-examples, sage-tutorials
sageLLM 是与 SAGE 协同的独立推理引擎生态,按协议层→核心层→系统层→服务层组织。
sageLLM is an independent inference-engine ecosystem that works alongside SAGE, organized as protocol → core → system → service layers.
```
L1  sagellm-protocol
      ↓
L2  sagellm-core · sagellm-backend · sagellm-comm · sagellm-kv-cache · sagellm-compression
      ↓
L3  sagellm-control-plane
      ↓
L4  sagellm-gateway
      ↓
L5  sagellm (integration) · sagellm-benchmark · sagellm-docs · sagellm-website · sagellm-dev-tools
```
定义 schema、错误码与跨模块协议。
Schema, error codes, and cross-module protocol definitions.
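As a rough illustration of what such a protocol layer defines, here is a minimal request/response schema with shared error codes. All type and field names below are hypothetical — this is a sketch of the pattern, not sagellm-protocol's real schema:

```python
from dataclasses import dataclass
from enum import Enum

class ErrorCode(Enum):
    # Hypothetical cross-module error codes shared by all layers
    OK = 0
    INVALID_REQUEST = 1
    BACKEND_UNAVAILABLE = 2
    KV_CACHE_MISS = 3

@dataclass
class InferenceRequest:
    # Minimal request schema that core, backend, and gateway layers agree on
    model: str
    prompt: str
    max_tokens: int = 128

@dataclass
class InferenceResponse:
    code: ErrorCode
    text: str = ""

def validate(req: InferenceRequest) -> ErrorCode:
    # A shared validation rule every module can rely on
    if not req.model or req.max_tokens <= 0:
        return ErrorCode.INVALID_REQUEST
    return ErrorCode.OK

print(validate(InferenceRequest(model="demo", prompt="hi")))  # → ErrorCode.OK
```

Centralizing schemas and error codes in the lowest layer is what lets the upper layers (core, control plane, gateway) evolve independently while staying interoperable.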
- MorphStream ⭐ 141 - [ICDE'20, SIGMOD'23, TKDE'24] 可扩展的事务性流处理引擎 | Scalable transactional stream processing engine
- AllianceDB ⭐ 16 - [SIGMOD'21] 并行数据库系统 | Parallel database system
- Sesame ⭐ 26 - [SIGMOD'23] 数据流聚类实证研究 | Data stream clustering empirical study
- PDSC - 并行数据流聚类基准 | Parallel data stream clustering benchmark
- SentiStream ⭐ 7 - [EMNLP'23] 情感分析流处理 | Sentiment analysis stream processing
- StreamLearning - 流式学习框架 | Stream learning framework
- StreamProcessing_ReadingList ⭐ 69 - 流处理文献阅读列表 | Stream processing reading list
- Awesome-Online-Continual-Learning - 在线持续学习资源 | Online continual learning resources
```shell
# PyPI 安装 | Install from PyPI
pip install isage

# 开发安装 | Development installation
git clone https://github.com/intellistream/SAGE.git
cd SAGE
./quickstart.sh --dev --yes
```

```python
from sage.kernel.api.local_environment import LocalEnvironment
from sage.libs.io.source import FileSource
from sage.middleware.operators.rag import DenseRetriever, QAPromptor, OpenAIGenerator
from sage.libs.io.sink import TerminalSink

# 创建执行环境 | Create execution environment
env = LocalEnvironment("rag_pipeline")

# 构建声明式管道 | Build declarative pipeline
(
    env.from_source(FileSource, {"file_path": "questions.txt"})
    .map(DenseRetriever, {"model": "sentence-transformers/all-MiniLM-L6-v2"})
    .map(QAPromptor, {"template": "Answer based on: {context}\nQ: {query}\nA:"})
    .map(OpenAIGenerator, {"model": "gpt-3.5-turbo"})
    .sink(TerminalSink)
)

# 执行管道 | Execute pipeline
env.submit()
```

详细文档请访问:SAGE Documentation
For detailed documentation, visit: SAGE Documentation
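The declarative `.map(...).sink(...)` chaining in the quick-start can be understood through simple operator composition. The sketch below shows the underlying idea only — a toy `Pipeline` class, not SAGE's actual runtime, which additionally handles scheduling, fault tolerance, and distributed execution:

```python
class Pipeline:
    """Minimal dataflow pipeline: each stage is a function applied in order."""

    def __init__(self, source):
        self._source = source   # iterable producing records
        self._stages = []       # ordered list of map functions

    def map(self, fn):
        self._stages.append(fn)
        return self             # return self so calls can be chained

    def run(self):
        out = []
        for record in self._source:
            for stage in self._stages:
                record = stage(record)  # push each record through every stage
            out.append(record)
        return out

result = (
    Pipeline(["who?", "why?"])
    .map(str.upper)                 # stand-in for a retriever stage
    .map(lambda q: f"Q: {q} A:")    # stand-in for a promptor stage
    .run()
)
print(result)  # → ['Q: WHO? A:', 'Q: WHY? A:']
```

Because `map` only records the stage and returns `self`, the chain builds a dataflow graph declaratively; execution is deferred until `run()` — mirroring how `env.submit()` triggers the pipeline above.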
我们欢迎各种形式的贡献!请查看各个仓库的 CONTRIBUTING.md 文件了解详情。
We welcome contributions of all kinds! Please check the CONTRIBUTING.md file in each repository for details.
- 💬 Email: shuhao_zhang at hust.edu.cn
- 🌐 Website: intellistream.github.io
各项目许可证详见各仓库的 LICENSE 文件。大多数项目采用 MIT 或 Apache 2.0 许可证。
License details can be found in each repository's LICENSE file. Most projects use MIT or Apache 2.0 licenses.
⭐ 如果我们的项目对您有帮助,请给我们一个 Star!
If our projects help you, please give us a Star!