
feat: VRL AI conversational chat panel#96

Merged
TerrifiedBug merged 15 commits into main from feat-vrl-ai-chat-improvements-1c5
Mar 11, 2026

Conversation

@TerrifiedBug (Owner)

Summary

  • Replace the single-prompt VRL AI input with a full conversational AI chat panel that slides out from the right side of the VRL editor dialog
  • Add structured suggestion cards (Insert/Replace/Remove) with Apply All / Apply Selected semantics, priority badges, and outdated detection
  • Persist VRL AI conversations per pipeline component via new componentKey field on AiConversation and vrlCode snapshot on AiMessage
  • Support multi-line input (Shift+Enter), auto-scroll, streaming indicators, and conversation history across sessions

Changes

Data Layer

  • prisma/schema.prisma — add componentKey to AiConversation, vrlCode to AiMessage, composite index
  • src/server/routers/ai.ts — add getVrlConversation and markVrlSuggestionsApplied tRPC procedures; filter existing getConversation to exclude VRL conversations
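
An illustrative fragment of what the schema additions might look like. Field names (componentKey, vrlCode) come from the PR description; the defaults, optionality, and the exact index shape are assumptions, and the existing fields are elided:

```prisma
// Hypothetical sketch, not the actual schema.prisma diff.
model AiConversation {
  id           String  @id @default(cuid())
  pipelineId   String
  componentKey String? // null for non-VRL conversations
  // (other existing fields elided)

  @@index([pipelineId, componentKey]) // composite index per the PR description
}

model AiMessage {
  id      String  @id @default(cuid())
  vrlCode String? // snapshot of the editor code at message time
  // (other existing fields elided)
}
```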

API

  • src/app/api/ai/vrl-chat/route.ts — new SSE endpoint with auth, team/pipeline validation, conversation persistence, streaming via streamCompletion
  • src/lib/ai/prompts.ts — new buildVrlChatSystemPrompt() that instructs the AI to return structured JSON { summary, suggestions }
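
A sketch of the structured payload the system prompt asks the model to return. Only summary, suggestions, the three suggestion types, and priority badges are stated in the PR; the remaining field names are illustrative assumptions:

```typescript
// Hypothetical shape of the { summary, suggestions } JSON; field names
// beyond those in the PR description are assumptions.
type SuggestionType = "insert_code" | "replace_code" | "remove_code";

interface VrlSuggestion {
  id: string;
  type: SuggestionType;
  priority: "low" | "medium" | "high"; // rendered as a priority badge
  code: string;        // code to insert, or the replacement text
  target?: string;     // existing code to replace or remove
  appliedAt?: string;  // set once the user applies the suggestion
}

interface VrlChatResponse {
  summary: string;
  suggestions: VrlSuggestion[];
}

const example: VrlChatResponse = {
  summary: "Drop the password field before forwarding",
  suggestions: [
    { id: "s1", type: "insert_code", priority: "high", code: "del(.password)" },
  ],
};
```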

Frontend

  • src/lib/ai/vrl-suggestion-types.ts — types, parser, status computation, apply logic
  • src/hooks/use-vrl-ai-conversation.ts — React hook for state management, SSE streaming, optimistic messages, server sync
  • src/components/vrl-editor/vrl-suggestion-card.tsx — suggestion card with checkbox, code preview, badges
  • src/components/vrl-editor/vrl-ai-message.tsx — message bubble rendering user/assistant messages with suggestion cards
  • src/components/vrl-editor/vrl-ai-panel.tsx — slide-out panel with header, message list, auto-growing input
  • src/components/vrl-editor/vrl-editor.tsx — integrate AI panel, dynamic dialog width, remove old AiInput
  • src/components/flow/detail-panel.tsx — thread componentKey prop to all VrlEditor usages
  • Delete src/components/vrl-editor/ai-input.tsx (deprecated)
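
The status rule exercised in the test plan below (insert suggestions stay actionable after edits; replace/remove go stale once their target disappears) can be sketched as follows. The function and field names are illustrative, not the real module's API:

```typescript
// Hedged sketch of suggestion status computation; not the actual
// vrl-suggestion-types.ts implementation.
type Status = "actionable" | "applied" | "outdated";

interface Suggestion {
  type: "insert_code" | "replace_code" | "remove_code";
  target?: string;    // code the suggestion expects to find in the editor
  appliedAt?: string; // set once applied
}

function computeStatus(s: Suggestion, currentVrl: string): Status {
  if (s.appliedAt) return "applied";
  if (s.type === "insert_code") return "actionable"; // inserts never go stale
  // replace/remove need their target to still exist in the current code
  return s.target && currentVrl.includes(s.target) ? "actionable" : "outdated";
}
```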

Test plan

  • Open a remap transform's VRL editor — AI button visible in toolbar
  • Click AI — dialog widens, slide-out panel appears
  • Type prompt + Enter — user message appears, AI responds with suggestion cards
  • Shift+Enter — inserts newline (no send)
  • Suggestion cards show code preview, type badge (Insert/Replace/Remove), priority badge
  • Click "Apply All" — VRL code updates in editor, suggestions marked "Applied"
  • Edit VRL after suggestions — replace/remove suggestions become "Outdated", insert stays actionable
  • Close + reopen VRL editor — conversation history loads
  • Click "New" — conversation resets
  • Close AI panel — dialog shrinks back
  • Filter and route VRL editors also have AI button with componentKey

greptile-apps bot commented Mar 11, 2026

Greptile Summary

This PR replaces the single-prompt VRL AI input with a full conversational chat panel, adding SSE streaming, structured suggestion cards, per-component conversation persistence, and apply/undo semantics. The implementation is well-structured and addresses several issues from the prior review round.

What was fixed from previous review threads:

  • String.replace → String.replaceAll for replace_code / remove_code (no longer stops at the first match)
  • "New" button disabled during streaming — prevents the race that re-populated old messages
  • "Apply Selected" button now uses actionableSelectedSuggestions.length as its disabled condition and handleApplySelected also filters to actionable-only before dispatch
  • controller.enqueue after abort is now guarded by !request.signal.aborted
  • conversationId-less requests now findFirst before creating — prevents duplicate orphaned conversations

Remaining concerns:

  • After the user cancels a streaming request, the optimistic user message stays in messages and is never removed or replaced (no fetchQuery call in the abort path). The conversation appears stuck with an unanswered message until the next successful send.
  • markAppliedMutation.mutate(...) is called with no onError callback. A network failure silently leaves the server without appliedAt, so suggestions will reappear as actionable on the next panel mount.

Authorization model: The new REST endpoint (vrl-chat) correctly verifies team membership, EDITOR-or-better role, and pipeline-to-team ownership before any DB writes or AI calls. The new tRPC procedures (getVrlConversation, markVrlSuggestionsApplied) correctly use withTeamAccess and withAudit middleware per the project conventions.

Confidence Score: 4/5

  • Safe to merge with two minor follow-up items; no data-loss or security risks remain.
  • The core feature is well-implemented with correct auth, proper tRPC middleware, and the major bugs from the previous round addressed. The two remaining issues (orphaned temp message on abort, silent mutation failure with no onError) are quality-of-life correctness issues that don't cause data loss or security problems and can be addressed in a follow-up.
  • src/hooks/use-vrl-ai-conversation.ts — abort cleanup path and markAppliedMutation error handling

Important Files Changed

  • src/app/api/ai/vrl-chat/route.ts — New SSE endpoint with auth, team/pipeline verification, conversation persistence, and streaming. The controller.enqueue after abort is now guarded by request.signal.aborted. Minor: no query-cache invalidation strategy from this layer (handled client-side).
  • src/hooks/use-vrl-ai-conversation.ts — Core hook managing SSE streaming, optimistic updates, and conversation sync. data.done breaks the inner for-loop but leaves the outer while running (correct but subtle). markAppliedMutation fires with no onSuccess invalidation, so the React Query cache retains stale appliedAt state until the next mount/focus refetch. After abort, streamingContent is not cleared until the next sendMessage call.
  • src/components/vrl-editor/vrl-ai-message.tsx — Message bubble with suggestion cards. Previous issues addressed: Apply Selected is now correctly disabled when no actionable items are selected (uses actionableSelectedSuggestions), and handleApplySelected filters to actionable-only before calling onApplySelected.
  • src/components/vrl-editor/vrl-ai-panel.tsx — Slide-out chat panel. The New button is correctly disabled during streaming, preventing the race condition from previous threads. Apply logic chains suggestions correctly via sequential code accumulation.
  • src/server/routers/ai.ts — New getVrlConversation (withTeamAccess VIEWER) and markVrlSuggestionsApplied (withTeamAccess EDITOR + withAudit) procedures are correctly guarded. Existing getConversation now correctly filters componentKey: null to exclude VRL conversations.
  • src/lib/ai/vrl-suggestion-types.ts — Types, parser, status computation, and apply logic. Now uses replaceAll (fixing single-occurrence replacement from the prior review). insert_code is always actionable unless appliedAt is set, which is correct given the optimistic-update design.
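
The "data.done breaks the inner for-loop but leaves the outer while running" observation above can be made concrete with a minimal SSE read loop. This is a sketch of the pattern, not the hook's actual code; the event payload shape and names are assumptions:

```typescript
// Minimal SSE consumer sketch. The inner `break` only exits the per-chunk
// line loop; the outer `while` keeps reading until the server closes the
// stream, which matches the subtle-but-correct behavior noted in the review.
async function consumeSse(
  body: ReadableStream<Uint8Array>,
  onToken: (t: string) => void,
): Promise<void> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break; // stream closed by the server

    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep a trailing partial line for the next chunk

    for (const line of lines) {
      if (!line.startsWith("data: ")) continue;
      const data = JSON.parse(line.slice(6));
      if (data.done) break; // exits this inner loop only
      if (data.token) onToken(data.token);
    }
  }
}
```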

Sequence Diagram

sequenceDiagram
    participant U as User
    participant P as VrlAiPanel
    participant H as useVrlAiConversation
    participant API as /api/ai/vrl-chat
    participant DB as PostgreSQL
    participant tRPC as ai.getVrlConversation

    U->>P: Open VRL Editor + AI Panel
    P->>tRPC: getVrlConversation(pipelineId, componentKey)
    tRPC-->>H: loadedConversation (history)
    H->>H: Render-time sync → setMessages(history)

    U->>P: Type prompt + Enter
    P->>H: sendMessage(prompt)
    H->>H: Add optimistic user msg (temp-user-*)
    H->>API: POST {teamId, prompt, pipelineId, componentKey, conversationId}
    API->>DB: findMember (auth check)
    API->>DB: verify pipeline → team
    API->>DB: findFirst or create AiConversation
    API->>DB: create AiMessage (user)
    API-->>H: SSE: {conversationId}
    H->>H: setConversationId
    API-->>H: SSE: {token} × N
    H->>H: setStreamingContent (accumulate)
    API->>DB: create AiMessage (assistant + suggestions JSON)
    API-->>H: SSE: {done: true}
    H->>H: Add temp assistant msg to messages
    H->>tRPC: fetchQuery (staleTime: 0) → replace temp IDs with real IDs
    H->>H: setIsStreaming(false)

    U->>P: Click "Apply All"
    P->>H: markSuggestionsApplied(messageId, ids)
    H->>H: Optimistic: set appliedAt on suggestions
    H->>tRPC: markVrlSuggestionsApplied (EDITOR + audit)
    tRPC->>DB: update AiMessage suggestions JSON

Comments Outside Diff (2)

  1. src/hooks/use-vrl-ai-conversation.ts, lines 1415-1420

    Optimistic user message not removed on abort

    When the user cancels streaming, the AbortError is caught and the function returns early from the catch block. The finally block correctly clears isStreaming and abortRef, but the optimistic user message (temp-user-${Date.now()}) that was added to messages before the fetch remains there indefinitely.

    There is no path that removes it: the render-time sync guard (requiring !conversationId && messages.length === 0) won't fire because both conditions are now false. The stale temp message persists until the next successful sendMessage replaces messages via the fetchQuery refetch.

    The result: after a cancel, the user sees their message with no AI response, which is acceptable UX. But if they close and reopen the panel before sending another message, conversationQuery re-populates from the server, which also holds the user message (it was persisted before the stream started) and no assistant reply, leaving the conversation in the same orphaned state. A minimal fix is to clear messages back to the pre-send snapshot on abort, or at minimum to replace the temp message with its server-persisted counterpart via a fetchQuery in the abort path:

    } catch (err) {
      if (err instanceof Error && err.name === "AbortError") {
        // Re-sync with server so the orphaned temp message is replaced
        const refetched = await queryClient.fetchQuery({
          ...trpc.ai.getVrlConversation.queryOptions({ pipelineId, componentKey }),
          staleTime: 0,
        }).catch(() => null);
        if (refetched?.messages && !isNewConversationRef.current) {
          setMessages(refetched.messages.map((m) => ({ /* ... */ })));
        }
        return;
      }
      setError(err instanceof Error ? err.message : "AI request failed");
    }
  2. src/hooks/use-vrl-ai-conversation.ts, lines 1474-1482

    markAppliedMutation has no error recovery

    markAppliedMutation.mutate(...) is fired with no onError callback. If the server mutation fails (network blip, 500, etc.) the optimistic appliedAt update in local messages state looks correct for the rest of the session, but the DB record is never updated. On the next panel open (or any refetch that beats the render-time sync guard), the suggestion will re-appear as "actionable".

    Consider adding a basic onError that reverts the optimistic state or shows a toast:

    markAppliedMutation.mutate(
      { pipelineId, conversationId, messageId, suggestionIds },
      {
        onError: () => {
          // Revert optimistic appliedAt
          setMessages((prev) =>
            prev.map((msg) => {
              if (msg.id !== messageId || !msg.suggestions) return msg;
              return {
                ...msg,
                suggestions: msg.suggestions.map((s) =>
                  suggestionIds.includes(s.id)
                    ? { ...s, appliedAt: undefined }
                    : s,
                ),
              };
            }),
          );
        },
      },
    );

Last reviewed commit: 304a515

@TerrifiedBug TerrifiedBug force-pushed the feat-vrl-ai-chat-improvements-1c5 branch from 304a515 to 664f8fe on March 11, 2026 14:50
@TerrifiedBug TerrifiedBug merged commit 266d9f8 into main Mar 11, 2026
3 checks passed
@TerrifiedBug TerrifiedBug deleted the feat-vrl-ai-chat-improvements-1c5 branch March 11, 2026 14:51