feat: AI-powered VRL and pipeline suggestions #86
Conversation
Add aiProvider, aiBaseUrl, aiApiKey, aiModel, and aiEnabled columns to support per-team AI provider configuration for VRL assistant and pipeline builder features.
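In a Prisma schema, the columns named in this commit might look like the following (the field types and defaults are assumptions; the commit only names the columns):

```prisma
model Team {
  // ...existing fields...
  aiEnabled  Boolean @default(false)
  aiProvider String? // e.g. an OpenAI-compatible provider identifier
  aiBaseUrl  String?
  aiApiKey   String? // stored encrypted at rest
  aiModel    String?
}
```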
This reverts commit c44134f.
- Strip aiApiKey from team.list response (critical security fix)
- Add runtime validation for body.mode in pipeline endpoint
- Fix rate limiter comment (fixed window, not sliding)
- Add runtime = "nodejs" to SSE route handlers
- Use __dirname instead of process.cwd() in prompts.ts
- Only render AiPipelineDialog when AI is enabled
- Create missing ai-suggestions.md doc page
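The aiApiKey strip in the first fix could be as simple as omitting the field before the team object is serialized. A minimal sketch (the helper name and surrounding shape are illustrative, not the PR's actual code):

```typescript
// Remove the encrypted credential before a team object leaves the server.
// Generic over the input shape so the same helper covers team.get and team.list.
function stripAiApiKey<T extends { aiApiKey?: unknown }>(team: T) {
  const { aiApiKey: _aiApiKey, ...safe } = team;
  return safe;
}

const teams = [{ id: "t1", name: "core", aiApiKey: "enc:abc123" }];
const sanitized = teams.map(stripAiApiKey);
```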
Greptile Summary

This PR adds an optional AI assistant feature to VectorFlow, including per-team OpenAI-compatible provider configuration with AES-256-GCM encrypted credentials, SSE streaming endpoints for VRL code generation and pipeline YAML generation/review, and corresponding UI panels in the VRL editor and pipeline toolbar. The previously flagged SSRF vulnerability is addressed by follow-up commits in this PR.

Key findings:
Confidence Score: 3/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant Browser
    participant NextJS as Next.js Route Handler
    participant AI_Service as ai.ts service
    participant RateLimiter as rate-limiter.ts
    participant DB as PostgreSQL (Prisma)
    participant Provider as AI Provider (OpenAI-compat)
    Browser->>NextJS: POST /api/ai/vrl (teamId, prompt, ...)
    NextJS->>DB: TeamMember lookup (session + teamId)
    DB-->>NextJS: membership (role check >= EDITOR)
    NextJS->>AI_Service: streamCompletion(teamId, prompts)
    AI_Service->>RateLimiter: checkRateLimit(teamId)
    RateLimiter-->>AI_Service: { allowed, remaining, resetAt }
    AI_Service->>DB: team.findUnique (aiEnabled, aiBaseUrl, aiApiKey, aiModel)
    DB-->>AI_Service: encrypted config
    AI_Service->>AI_Service: decryptApiKey() + validateBaseUrl()
    AI_Service->>Provider: POST /chat/completions (stream: true)
    Provider-->>AI_Service: SSE token stream
    AI_Service-->>NextJS: onToken callback per chunk
    NextJS-->>Browser: SSE data: {token} frames
    NextJS-->>Browser: SSE data: {done: true}
```
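The `data: {token}` / `data: {done: true}` framing at the bottom of the diagram follows the standard server-sent-events wire format. A small sketch of that framing (helper names are illustrative, not the PR's actual code):

```typescript
// Encode one server-sent-events frame: a "data:" line plus a blank-line terminator.
function sseFrame(payload: object): string {
  return `data: ${JSON.stringify(payload)}\n\n`;
}

// A route handler would enqueue one frame per streamed token, then a final done frame.
function encodeStream(tokens: string[]): string {
  const frames = tokens.map((token) => sseFrame({ token }));
  frames.push(sseFrame({ done: true }));
  return frames.join("");
}
```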
- Refactor ai-settings.tsx to eliminate useEffect setState anti-pattern
- Add SSRF validation for admin-configurable AI base URLs
- Inline VRL reference as TS module (fixes __dirname in production builds)
- Add withAudit middleware to testAiConnection mutation
- Fix unused variable ESLint warnings in team.ts
@greptile fixed
- Block full 169.254.0.0/16 link-local range, IPv6 private/link-local prefixes (fe80::, fc00::, fd00::, ::ffff:), and the unspecified address
- Merge imported global config when applying AI nodes to an existing canvas instead of silently dropping it
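The blocked ranges above could be checked at the hostname level along these lines. This is a sketch only, and deliberately incomplete: real SSRF protection should also resolve DNS and re-check the resulting IPs, since a public hostname can point at a private address.

```typescript
// Reject hosts in the ranges listed above: IPv4 link-local, IPv6 link-local,
// IPv6 unique-local (fc00::/7), IPv4-mapped IPv6, and the unspecified address.
function isBlockedHost(rawHost: string): boolean {
  const host = rawHost.replace(/^\[|\]$/g, "").toLowerCase(); // strip IPv6 brackets
  if (host === "0.0.0.0" || host === "::") return true;       // unspecified address
  if (host.startsWith("169.254.")) return true;               // 169.254.0.0/16 link-local
  if (host.includes(":")) {
    // IPv6 literal: link-local, unique-local, and IPv4-mapped prefixes
    if (
      host.startsWith("fe80:") ||
      host.startsWith("fc") ||
      host.startsWith("fd") ||
      host.startsWith("::ffff:")
    ) {
      return true;
    }
  }
  return false;
}
```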
@greptile fixed the last issues.
The test-connection workflow (configure → test → enable) was broken because getTeamAiConfig rejected requests when aiEnabled was false.
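One way to express the fix is to separate "may read the config" from "may run inference", so a connection test works before the feature is switched on. A sketch under that assumption (these helper functions are illustrative; only aiEnabled and getTeamAiConfig are named in the PR):

```typescript
type AiConfig = {
  aiEnabled: boolean;
  aiBaseUrl: string | null;
  aiApiKey: string | null;
};

// Loading the config (needed for the configure -> test step) only requires
// that a provider has been configured, not that the feature is enabled yet.
function canLoadAiConfig(cfg: AiConfig): boolean {
  return cfg.aiBaseUrl !== null && cfg.aiApiKey !== null;
}

// Actual inference additionally requires aiEnabled, restoring the
// configure -> test -> enable order.
function canRunInference(cfg: AiConfig): boolean {
  return canLoadAiConfig(cfg) && cfg.aiEnabled;
}
```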
Summary
Changes
Database & Service Layer
- New Team columns (aiProvider, aiBaseUrl, aiApiKey, aiModel, aiEnabled)
- src/server/services/ai.ts: streaming completions service with config loading, key decryption, rate limiting
- src/lib/ai/rate-limiter.ts: in-memory fixed-window rate limiter
- src/lib/ai/prompts.ts: system prompt builders for VRL assistant and pipeline builder
- src/lib/ai/vrl-reference.txt: compact VRL function reference for LLM context

API & tRPC
- src/app/api/ai/vrl/route.ts: SSE endpoint for VRL code generation
- src/app/api/ai/pipeline/route.ts: SSE endpoint for pipeline generation/review
- src/server/routers/team.ts: getAiConfig, updateAiConfig, testAiConnection procedures; aiApiKey stripped from team.get and team.list responses

UI Components
- src/components/vrl-editor/ai-input.tsx: streaming AI input with Insert/Replace/Regenerate actions
- src/components/flow/ai-pipeline-dialog.tsx: tabbed Generate/Review dialog with Apply to Canvas
- src/app/(dashboard)/settings/_components/ai-settings.tsx: provider config form with test connection
- AI entry points rendered only when aiEnabled is true

Security
- API keys stored with the enc: prefix via existing crypto.ts
- aiApiKey added to audit middleware SENSITIVE_KEYS for redaction
- aiApiKey stripped from both team.get and team.list tRPC responses

Test plan
- Verified behavior when aiEnabled is false
- aiApiKey is not present in team.get or team.list API responses
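The in-memory fixed-window rate limiter listed under src/lib/ai/rate-limiter.ts might work along these lines (the class name, window size, limit, and injectable clock are assumptions for illustration):

```typescript
type Bucket = { windowStart: number; count: number };

// Fixed-window counter keyed per team: all requests in the same window share
// one counter, which resets when a new window begins. This is the "fixed
// window, not sliding" behavior the fixed comment describes.
class FixedWindowRateLimiter {
  private buckets = new Map<string, Bucket>();

  constructor(
    private limit: number,
    private windowMs: number,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  check(key: string): { allowed: boolean; remaining: number; resetAt: number } {
    const t = this.now();
    const windowStart = Math.floor(t / this.windowMs) * this.windowMs;
    let bucket = this.buckets.get(key);
    if (!bucket || bucket.windowStart !== windowStart) {
      bucket = { windowStart, count: 0 }; // new window: counter resets
      this.buckets.set(key, bucket);
    }
    const allowed = bucket.count < this.limit;
    if (allowed) bucket.count++;
    return {
      allowed,
      remaining: Math.max(0, this.limit - bucket.count),
      resetAt: windowStart + this.windowMs,
    };
  }
}
```

The trade-off versus a sliding window is burstiness at window boundaries: a client can spend one window's quota at its very end and the next window's quota at its very start.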