Merged
Commits
23 commits
7c032e9
feat: add in-memory rate limiter for AI requests
TerrifiedBug Mar 10, 2026
88263f3
feat: add compact VRL function reference for AI context
TerrifiedBug Mar 10, 2026
b96f142
feat: add system prompt builders for VRL and pipeline AI
TerrifiedBug Mar 10, 2026
4aa8fa8
feat: add AI configuration fields to Team model
TerrifiedBug Mar 10, 2026
592f09d
feat: add AI nav item to settings sidebar
TerrifiedBug Mar 10, 2026
365fa18
feat: add team AI config procedures (get, update, test connection)
TerrifiedBug Mar 10, 2026
8057eb6
feat: add VRL AI input component with streaming preview
TerrifiedBug Mar 10, 2026
a17e16e
feat: add AI sparkle button to flow toolbar
TerrifiedBug Mar 10, 2026
a8f23ee
feat: add AI pipeline dialog with generate and review modes
TerrifiedBug Mar 10, 2026
9592f8b
feat: add AI service for OpenAI-compatible streaming completions
TerrifiedBug Mar 10, 2026
6bd4828
feat: add SSE endpoint for VRL AI code generation
TerrifiedBug Mar 10, 2026
c13e8e2
feat: add SSE endpoint for pipeline AI generation and review
TerrifiedBug Mar 10, 2026
73f2d80
feat: add AI configuration settings page for team admins
TerrifiedBug Mar 10, 2026
bef57f8
feat: integrate AI button and panel into VRL editor
TerrifiedBug Mar 10, 2026
bf14eee
feat: wire AI pipeline dialog into pipeline editor page
TerrifiedBug Mar 10, 2026
f769511
docs: add AI suggestions documentation
TerrifiedBug Mar 10, 2026
c44134f
docs: include AI suggestions design spec on feature branch
TerrifiedBug Mar 10, 2026
c04cbc3
Revert "docs: include AI suggestions design spec on feature branch"
TerrifiedBug Mar 10, 2026
0c68ae1
fix: address code review findings for AI feature
TerrifiedBug Mar 10, 2026
bed8690
fix: address PR #86 CI errors, CodeQL, and Greptile findings
TerrifiedBug Mar 10, 2026
05d2b56
fix: expand SSRF blocklist and merge AI global config on canvas apply
TerrifiedBug Mar 10, 2026
4620191
fix: allow super admins with VIEWER membership to use AI endpoints
TerrifiedBug Mar 10, 2026
8731616
fix: allow testAiConnection before AI is enabled
TerrifiedBug Mar 10, 2026
1 change: 1 addition & 0 deletions docs/public/SUMMARY.md
@@ -20,6 +20,7 @@
* [Alerts](user-guide/alerts.md)
* [Templates](user-guide/templates.md)
* [Shared Components](user-guide/shared-components.md)
* [AI Suggestions](user-guide/ai-suggestions.md)

## Operations

64 changes: 64 additions & 0 deletions docs/public/user-guide/ai-suggestions.md
@@ -0,0 +1,64 @@
# AI Suggestions

VectorFlow includes optional AI-powered assistance for writing VRL code and generating pipeline configurations. When enabled, team members with Editor or Admin roles can use AI features in both the VRL editor and pipeline builder.

## Setup

Team admins can configure AI in **Settings → AI**. The configuration requires:

| Field | Description |
|-------|-------------|
| **Provider** | OpenAI, Anthropic, or Custom (any OpenAI-compatible endpoint) |
| **Base URL** | API endpoint — pre-filled for known providers |
| **API Key** | Provider API key — encrypted at rest using AES-256 |
| **Model** | Model identifier (e.g. `gpt-4o`, `claude-sonnet-4-20250514`) |

After saving, use **Test Connection** to verify the configuration works.

{% hint style="info" %}
VectorFlow uses the OpenAI-compatible chat completions API format (`/chat/completions`). Most providers support this format natively or via a compatibility layer. For Anthropic, use an OpenAI-compatible proxy such as LiteLLM or OpenRouter.
{% endhint %}
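
For reference, requests sent to the configured endpoint follow the standard chat-completions wire format. The payload below is illustrative only (the model name and prompts are examples, not what VectorFlow actually sends):

```json
{
  "model": "gpt-4o",
  "stream": true,
  "messages": [
    { "role": "system", "content": "You are a VRL code assistant." },
    { "role": "user", "content": "Parse the syslog message and extract the hostname" }
  ]
}
```

Any provider or proxy that accepts this shape at `POST {Base URL}/chat/completions` should work.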

## VRL Assistant

In the VRL editor (opened from any remap, filter, or route transform), click the **AI** button in the tools panel to reveal the AI input.

1. Type a natural language description of what you want the VRL code to do
2. Click **Generate** — the AI streams VRL code in real time
3. When complete, choose:
- **Insert** — append the generated code after your existing code
- **Replace** — replace all existing code with the generated result
- **Regenerate** — try again with the same prompt

The AI is aware of your upstream source types and available fields, so you can reference them naturally (e.g., "parse the syslog message and extract the hostname").
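
For example, a prompt like "parse the syslog message and extract the hostname" might produce VRL along these lines (an illustrative sketch; the AI's actual output will vary with your schema):

```vrl
# Attempt to parse the raw message as syslog; fall back gracefully on failure.
parsed, err = parse_syslog(.message)
if err == null {
  .hostname = parsed.hostname
  . = merge(., parsed)
} else {
  .parse_error = err
}
```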

## Pipeline Builder

In the pipeline editor toolbar, click the **sparkle icon** to open the AI Pipeline Builder dialog.

### Generate mode

Describe a pipeline in plain language:

> "Collect Kubernetes logs from a file source, drop debug-level events, parse JSON, and send to Elasticsearch and S3"

The AI generates a complete Vector YAML configuration. Click **Apply to Canvas** to add the generated components to your pipeline. If your canvas already has components, the new ones are positioned below the existing layout to avoid overlap.
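
A prompt like the one above might yield a configuration along these lines (illustrative only; component names, paths, and options are examples, not the exact generated output):

```yaml
sources:
  k8s_logs:
    type: file
    include:
      - /var/log/containers/*.log

transforms:
  drop_debug:
    type: filter
    inputs: [k8s_logs]
    condition: '.level != "debug"'
  parse_json:
    type: remap
    inputs: [drop_debug]
    source: |
      . = merge(., object!(parse_json!(.message)))

sinks:
  elasticsearch_out:
    type: elasticsearch
    inputs: [parse_json]
    endpoints: ["http://elasticsearch:9200"]
  s3_archive:
    type: aws_s3
    inputs: [parse_json]
    bucket: my-log-archive
    region: us-east-1
    encoding:
      codec: json
```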

### Review mode

Ask the AI to analyze your current pipeline configuration:

> "Are there any performance issues with my pipeline?"

The AI reviews your pipeline's current YAML and provides suggestions for improvements, best practices, and potential issues.

## Rate Limits

AI requests are rate-limited to 60 requests per hour per team to prevent excessive API usage. The limit is enforced over a rolling one-hour window, so capacity frees up as older requests age out rather than resetting at a fixed time.
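
The rolling window behaves roughly like the sketch below (an illustrative model, not VectorFlow's actual implementation): each request's timestamp is kept for an hour, and a new request is allowed only while fewer than 60 timestamps remain inside the window.

```typescript
// Illustrative sliding-window rate limiter; not VectorFlow's actual code.
class SlidingWindowLimiter {
  private timestamps = new Map<string, number[]>();

  constructor(
    private limit: number,
    private windowMs: number,
  ) {}

  /** Returns true if the request is allowed, false if the team is over its limit. */
  allow(teamId: string, now: number = Date.now()): boolean {
    // Keep only timestamps that still fall inside the rolling window.
    const recent = (this.timestamps.get(teamId) ?? []).filter(
      (t) => now - t < this.windowMs,
    );
    if (recent.length >= this.limit) {
      this.timestamps.set(teamId, recent);
      return false;
    }
    recent.push(now);
    this.timestamps.set(teamId, recent);
    return true;
  }
}
```

Because the window rolls, capacity returns gradually as old requests expire rather than all at once at the top of the hour.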

## Security

- API keys are encrypted at rest using AES-256-GCM
- Keys are never exposed to the client — the settings page shows only whether a key is saved
- AI configuration changes are recorded in the audit log with the API key redacted
- Only team members with Editor or Admin roles can use AI features
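
As a sketch of the encryption scheme described above (illustrative only; VectorFlow's actual `crypto.ts` helper may differ in key handling and encoding), AES-256-GCM produces a ciphertext plus an authentication tag, both of which are stored alongside a random IV:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Illustrative AES-256-GCM helpers; not VectorFlow's actual crypto.ts.
// In practice the 32-byte key would come from an environment secret.
export function encrypt(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit IV, recommended for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Persist IV, auth tag, and ciphertext together as one opaque token.
  return [iv, tag, ciphertext].map((b) => b.toString("base64")).join(".");
}

export function decrypt(token: string, key: Buffer): string {
  const [iv, tag, ciphertext] = token.split(".").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // decryption throws if the ciphertext was tampered with
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

The authentication tag is what makes GCM tamper-evident: a modified ciphertext fails decryption outright instead of silently yielding garbage.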
16 changes: 16 additions & 0 deletions docs/public/user-guide/pipeline-editor.md
@@ -257,3 +257,19 @@ Click the pipeline name in the top-left corner of the editor to rename it inline
{% hint style="info" %}
On Windows and Linux, use `Ctrl` instead of `Cmd` for all keyboard shortcuts.
{% endhint %}

## AI-Powered Suggestions

When AI is configured for your team (Settings → AI), two AI features become available:

### VRL Assistant
In the VRL editor, click the **AI** button in the tools panel. Type a natural language description of what you want the VRL code to do, and the AI will generate VRL code. You can **Insert** (append) or **Replace** the current code.

### Pipeline Builder
In the pipeline editor toolbar, click the **sparkle icon** to open the AI Pipeline Builder. Two modes are available:

- **Generate**: Describe a pipeline in plain language (e.g., "Collect K8s logs, drop debug, send to Datadog"). The AI generates Vector YAML config that is applied directly to your canvas.
- **Review**: Ask the AI to review your current pipeline configuration for performance, correctness, and best practices.

### Configuration
Team admins can configure AI in **Settings → AI**. VectorFlow supports any OpenAI-compatible API (OpenAI, Ollama, Groq, Together, and others; Anthropic works via an OpenAI-compatible proxy).
6 changes: 6 additions & 0 deletions prisma/migrations/20260310020000_add_ai_fields/migration.sql
@@ -0,0 +1,6 @@
-- AlterTable: Add AI configuration fields to Team
ALTER TABLE "Team" ADD COLUMN "aiProvider" TEXT;
ALTER TABLE "Team" ADD COLUMN "aiBaseUrl" TEXT;
ALTER TABLE "Team" ADD COLUMN "aiApiKey" TEXT;
ALTER TABLE "Team" ADD COLUMN "aiModel" TEXT;
ALTER TABLE "Team" ADD COLUMN "aiEnabled" BOOLEAN NOT NULL DEFAULT false;
8 changes: 8 additions & 0 deletions prisma/schema.prisma
@@ -66,6 +66,14 @@ model Team {
vrlSnippets VrlSnippet[]
alertRules AlertRule[]
availableTags Json? @default("[]") // string[] of admin-defined classification tags

// AI-powered suggestions configuration
aiProvider String? // "openai" | "anthropic" | "custom"
aiBaseUrl String? // OpenAI-compatible API endpoint
aiApiKey String? // Encrypted via crypto.ts
aiModel String? // e.g. "gpt-4o", "claude-sonnet-4-20250514"
aiEnabled Boolean @default(false)

createdAt DateTime @default(now())
}

21 changes: 21 additions & 0 deletions src/app/(dashboard)/pipelines/[id]/page.tsx
@@ -30,12 +30,14 @@ import {
import { ComponentPalette } from "@/components/flow/component-palette";
import { FlowCanvas } from "@/components/flow/flow-canvas";
import { FlowToolbar } from "@/components/flow/flow-toolbar";
import { AiPipelineDialog } from "@/components/flow/ai-pipeline-dialog";
import { DetailPanel } from "@/components/flow/detail-panel";
import { DeployDialog } from "@/components/flow/deploy-dialog";
import { SaveTemplateDialog } from "@/components/flow/save-template-dialog";
import { ConfirmDialog } from "@/components/confirm-dialog";
import { PipelineMetricsChart } from "@/components/pipeline/metrics-chart";
import { PipelineLogs } from "@/components/pipeline/pipeline-logs";
import { useTeamStore } from "@/stores/team-store";

function aggregateProcessStatus(
statuses: Array<{ status: string }>
@@ -130,6 +132,16 @@ function PipelineBuilderInner({ pipelineId }: { pipelineId: string }) {
const [discardOpen, setDiscardOpen] = useState(false);
const [metricsOpen, setMetricsOpen] = useState(false);
const [logsOpen, setLogsOpen] = useState(false);
const [aiDialogOpen, setAiDialogOpen] = useState(false);

const selectedTeamId = useTeamStore((s) => s.selectedTeamId);
const teamQuery = useQuery(
trpc.team.get.queryOptions(
{ id: selectedTeamId! },
{ enabled: !!selectedTeamId },
),
);
const aiEnabled = teamQuery.data?.aiEnabled ?? false;

const loadGraph = useFlowStore((s) => s.loadGraph);
const isDirty = useFlowStore((s) => s.isDirty);
@@ -431,6 +443,8 @@ function PipelineBuilderInner({ pipelineId }: { pipelineId: string }) {
}
gitOpsMode={pipelineQuery.data?.gitOpsMode}
onDiscardChanges={() => setDiscardOpen(true)}
aiEnabled={aiEnabled}
onAiOpen={() => setAiDialogOpen(true)}
/>
</div>
<div className="flex items-center px-3">
@@ -522,6 +536,13 @@ function PipelineBuilderInner({ pipelineId }: { pipelineId: string }) {
</DialogFooter>
</DialogContent>
</Dialog>
{aiEnabled && (
<AiPipelineDialog
open={aiDialogOpen}
onOpenChange={setAiDialogOpen}
environmentName={pipelineQuery.data?.environment?.name}
/>
)}
</div>
);
}