10 changes: 4 additions & 6 deletions .github/workflows/security.yml
@@ -32,7 +32,7 @@ jobs:
python-version: '3.11'

- name: Install Bandit
run: pip install bandit bandit-exclude-templates
run: pip install bandit

- name: Run Bandit SAST
run: |
@@ -144,10 +144,8 @@ jobs:

- name: Run Gitleaks
uses: gitleaks/gitleaks-action@v2
with:
config-path: .gitleaks.toml
redact: true
verbose: true
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

# =============================================================================
# Dependency Review (check for vulnerable dependencies in PRs)
@@ -163,7 +161,7 @@ jobs:
- name: Dependency Review
uses: actions/dependency-review-action@v4
with:
fail-on-severity: high,critical
fail-on-severity: high

# =============================================================================
# CodeQL Analysis
4 changes: 4 additions & 0 deletions .jules/bolt.md
@@ -6,3 +6,7 @@
## 2026-03-18 - O(N) aggregation over O(N*M) filters
**Learning:** In React components with dynamically generated filter lists, using `.filter().length` inside a `.map()` results in O(N*M) time complexity, leading to sluggish renders with larger datasets.
**Action:** Use `.reduce()` or a single loop inside `useMemo` to pre-calculate category counts in an O(N) pass instead.
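
The single-pass aggregation described above can be sketched as follows; the `Item` shape and `countByCategory` name are illustrative placeholders, not code from this repo:

```typescript
// Hypothetical item shape; only the `category` field matters for the sketch.
interface Item {
  category: string;
}

// One O(N) pass building a category→count map, instead of calling
// items.filter(i => i.category === c).length once per category (O(N*M)).
function countByCategory(items: Item[]): Map<string, number> {
  return items.reduce((counts, item) => {
    counts.set(item.category, (counts.get(item.category) ?? 0) + 1);
    return counts;
  }, new Map<string, number>());
}
```

In a React component this call would typically sit inside `useMemo(() => countByCategory(items), [items])` so the pass only reruns when the data changes.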

## 2024-04-04 - O(N*M) to O(N+M) filtering
**Learning:** Using `Array.prototype.includes` inside `Array.prototype.filter` creates O(N*M) complexity which causes performance drops on larger lists.
**Action:** Always convert the exclusion array into a `Set` to achieve O(1) lookups, resulting in an O(N+M) complexity for the filter pass. Use `useMemo` to cache the result.
6 changes: 3 additions & 3 deletions Dockerfile
@@ -4,7 +4,7 @@
# ============================================
# Stage 1: Backend builder
# ============================================
FROM python:3.14-slim AS backend-builder
FROM python:3.11-slim AS backend-builder

WORKDIR /app/backend

@@ -24,7 +24,7 @@ RUN pip install --no-cache-dir -r requirements.txt
# ============================================
# Stage 2: AI Engine builder
# ============================================
FROM python:3.14-slim AS ai-engine-builder
FROM python:3.11-slim AS ai-engine-builder

WORKDIR /app/ai-engine

@@ -43,7 +43,7 @@ RUN pip install --no-cache-dir -r requirements.txt
# ============================================
# Stage 3: Production backend
# ============================================
FROM python:3.14-slim
FROM python:3.11-slim

WORKDIR /app

2 changes: 1 addition & 1 deletion Dockerfile.fly
@@ -14,7 +14,7 @@ ENV VITE_API_BASE_URL=$VITE_API_BASE_URL
RUN pnpm run build

# Python dependencies build stage (consolidates all Python deps)
FROM python:3.14-slim AS python-builder
FROM python:3.11-slim AS python-builder
WORKDIR /tmp
RUN apt-get update && apt-get install -y --no-install-recommends \
gcc g++ curl libmagic1 ffmpeg && \
1 change: 1 addition & 0 deletions backend/coverage.json

Large diffs are not rendered by default.

2 changes: 2 additions & 0 deletions backend/requirements.txt
@@ -69,3 +69,5 @@ opentelemetry-instrumentation-redis>=0.45b0
# boto3>=1.34.0
# hvac>=2.1.0
# requests>=2.31.0
psutil>=5.9.0
PyJWT>=2.8.0
2 changes: 2 additions & 0 deletions backend/src/api/embeddings.py
@@ -835,6 +835,7 @@ async def search_similar_embeddings_enhanced(
- Performance: Target latency < 500ms

Parameters:
----------
- use_hybrid: If True (default), combine vector + keyword search. If False, vector-only.
- use_reranker: If True (default), apply cross-encoder re-ranking to top results.
- expand_query: If True (default), expand query with synonyms and domain terms.
@@ -843,6 +844,7 @@
- ranking_strategy: How to combine scores ("weighted_sum", "rrf", or "ensemble").

Returns:
-------
- EnhancedSearchResponse with results, metadata, and performance metrics
"""
import time
@@ -1,4 +1,4 @@
import React, { useState, useEffect } from 'react';
import React, { useState, useEffect, useMemo } from 'react';
import {
Box,
Card,
@@ -87,9 +87,11 @@ export const TemplateSelector: React.FC<TemplateSelectorProps> = ({
]);

// Filter templates on client side for excludeTemplateIds
const filteredTemplates = state.templates.filter(
(template) => !excludeTemplateIds.includes(template.id)
);
// ⚑ Bolt optimization: Use Set for O(1) lookups to convert O(N*M) array filtering to O(N+M)
const filteredTemplates = useMemo(() => {
const excludeSet = new Set(excludeTemplateIds);
return state.templates.filter((template) => !excludeSet.has(template.id));
}, [state.templates, excludeTemplateIds]);
Comment on lines 87 to +94
suggestion (performance): Consider memoizing the exclusion Set separately so it only recomputes when excludeTemplateIds changes.

Right now the Set is rebuilt whenever either state.templates or excludeTemplateIds changes. Splitting this into two useMemo calls lets you rebuild the Set only when excludeTemplateIds changes, which can help if state.templates updates more frequently than the exclusions:

const excludeSet = useMemo(() => new Set(excludeTemplateIds), [excludeTemplateIds]);

const filteredTemplates = useMemo(
  () => state.templates.filter((template) => !excludeSet.has(template.id)),
  [state.templates, excludeSet]
);
Suggested change
]);
// Filter templates on client side for excludeTemplateIds
const filteredTemplates = state.templates.filter(
(template) => !excludeTemplateIds.includes(template.id)
);
// ⚑ Bolt optimization: Use Set for O(1) lookups to convert O(N*M) array filtering to O(N+M)
const filteredTemplates = useMemo(() => {
const excludeSet = new Set(excludeTemplateIds);
return state.templates.filter((template) => !excludeSet.has(template.id));
}, [state.templates, excludeTemplateIds]);
]);
// Memoize exclusion set so it's only rebuilt when excludeTemplateIds changes
const excludeSet = useMemo(
() => new Set(excludeTemplateIds),
[excludeTemplateIds]
);
// Filter templates on client side for excludeTemplateIds
// ⚑ Bolt optimization: Use Set for O(1) lookups to convert O(N*M) array filtering to O(N+M)
const filteredTemplates = useMemo(
() => state.templates.filter((template) => !excludeSet.has(template.id)),
[state.templates, excludeSet]
);

Comment on lines +91 to +94
Copilot AI Apr 4, 2026


useMemo depends on the excludeTemplateIds array identity. When the prop is omitted, the default value [] created during destructuring is a new array every render, so this memo will recompute each render (rebuilding the Set and re-filtering) even when exclusions are effectively unchanged. Consider using a module-level constant for the empty default (so the reference is stable) and/or short-circuiting when excludeTemplateIds.length === 0 to avoid unnecessary work.
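The stable-default idea in this comment can be sketched as a plain function; `EMPTY_IDS` and `filterTemplates` are hypothetical names, not identifiers from this PR:

```typescript
// Module-level constant: a stable reference for the omitted-prop default,
// so memo dependency arrays keyed on it keep the same identity every render.
const EMPTY_IDS: readonly string[] = [];

function filterTemplates<T extends { id: string }>(
  templates: T[],
  excludeTemplateIds: readonly string[] = EMPTY_IDS
): T[] {
  // Short-circuit: with no exclusions there is nothing to do,
  // and returning the same array preserves referential equality.
  if (excludeTemplateIds.length === 0) return templates;
  const excludeSet = new Set(excludeTemplateIds);
  return templates.filter((t) => !excludeSet.has(t.id));
}
```

In the component, the same two tricks apply: default the prop to a module-level constant instead of an inline `[]`, and bail out before building the Set when the exclusion list is empty.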


const handleTemplateSelect = (template: BehaviorTemplate) => {
setSelectedTemplate(template);