Feat/global record #281
Conversation
…ion and error handling
- Updated HomeAssistantPlugin to standardize string usage for configuration parameters.
- Enhanced MCPClient to improve error handling and logging during memory operations.
- Refactored OpenMemoryMCPService to streamline memory entry conversion and improve metadata handling.
- Improved transcription job handling in transcription_jobs.py for better error reporting and session management.
- Updated mock-services.yml to change model_url for testing compatibility with Docker environments.
- Introduced a new RecordingContext to manage audio recording state and functionality, including start/stop actions and duration tracking.
- Updated various components to utilize the new RecordingContext, replacing previous audio recording hooks for improved consistency.
- Added a GlobalRecordingIndicator component to display recording status across the application.
- Enhanced the Layout component to include the GlobalRecordingIndicator for better user feedback during audio recording sessions.
- Refactored audio-related components to accept the new RecordingContext type, ensuring type safety and clarity in props.
- Implemented configuration options for managing provider segments in transcription, allowing for more flexible audio processing based on user settings.
- Added raw segments JSON display in the Conversations page for better debugging and data visibility.
📝 Walkthrough

This PR refactors frontend recording state management from a hook-based to a context-based architecture, enhances memory service operations with improved app selection and user context handling, adds dynamic transcription configuration for provider segment handling, normalizes code formatting across multiple modules, and updates mock service configuration endpoints.
Sequence Diagrams

```mermaid
sequenceDiagram
    participant Component as React Component
    participant Provider as RecordingProvider
    participant Context as RecordingContext
    participant Auth as useAuth
    participant WS as WebSocket
    participant Mic as Microphone
    Component->>Provider: Wrapped in RecordingProvider
    Provider->>Auth: Get current user
    Provider->>Mic: Request microphone access
    alt Mic Permission Granted
        Provider->>WS: Establish connection
        alt WS Connected
            Provider->>Context: Update state (isRecording=true)
            Context->>Component: Provide recording state
            Component->>Component: Render recording UI
            Note over Provider: Track recordingDuration<br/>Handle analyser state
        else WS Error
            Provider->>Context: Transition to error state
            Provider->>Provider: Cleanup resources
        end
    else Mic Permission Denied
        Provider->>Context: Set canAccessMicrophone=false
    end
    Component->>Context: Call stopRecording()
    Context->>Provider: Trigger stop
    Provider->>WS: Close connection
    Provider->>Mic: Release resources
    Provider->>Context: Reset state
```

```mermaid
sequenceDiagram
    participant Caller as MCPClient Caller
    participant MCPClient as MCPClient.add_memories()
    participant AppAPI as OpenMemory App Endpoint
    participant MemAPI as OpenMemory Memory Endpoint
    participant DB as Database
    Caller->>MCPClient: add_memories(transcript, client_id, source_id, user_id, user_email)
    MCPClient->>AppAPI: GET /apps
    AppAPI-->>MCPClient: Apps list
    MCPClient->>MCPClient: Select app by name<br/>(prefer most memories)
    alt App Found
        MCPClient->>MemAPI: POST /api/v1/memories<br/>(with client_id, source_id, user_id, metadata)
        MemAPI-->>MCPClient: Memory creation result
        alt Result Success
            MCPClient->>DB: Log memory IDs
            MCPClient-->>Caller: Return memory IDs
        else Result None (Dedup)
            MCPClient-->>Caller: Return empty list
        end
    else No App Match
        MCPClient-->>Caller: Raise MCPError
    end
```
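The branching at the end of the second diagram (app found or not, creation result present or deduplicated) can be sketched in Python. The function and field names below (`interpret_create_result`, `results`) are illustrative assumptions for this sketch, not the actual MCPClient/OpenMemory API:

```python
from typing import Any, List, Optional


class MCPError(Exception):
    """No OpenMemory app matched the client name (the diagram's error branch)."""


def interpret_create_result(app_id: Optional[str],
                            result: Optional[dict]) -> List[str]:
    """Map the diagram's three outcomes to return values:
    no app -> raise MCPError, result None -> dedup (empty list),
    otherwise -> the created memory IDs."""
    if app_id is None:
        raise MCPError("No app matched the configured client name")
    if result is None:
        return []  # OpenMemory deduplicated the entry; nothing new was created
    return [m["id"] for m in result.get("results", [])]
```

Treating `None` as "deduplicated, return empty list" rather than an error matches the diagram's distinction between a failed request and a successful-but-empty one.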
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
@coderabbitai review
✅ Actions performed: Review triggered.
| Metric | Count |
|---|---|
| ✅ Passed | 87 |
| ❌ Failed | 8 |
| 📊 Total | 95 |
📊 View Reports
GitHub Pages (Live Reports):
Download Artifacts:
- robot-test-reports-html-no-api - HTML reports
- robot-test-results-xml-no-api - XML output
| Metric | Count |
|---|---|
| ✅ Passed | 121 |
| ❌ Failed | 3 |
| 📊 Total | 124 |
📊 View Reports
GitHub Pages (Live Reports):
Download Artifacts:
- robot-test-reports-html-pr-labeled - HTML reports
- robot-test-results-xml-pr-labeled - XML output
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
backends/advanced/src/advanced_omi_backend/services/memory/providers/openmemory_mcp.py (1)
299-321: Shared MCP client user context is not concurrency-safe.

Mutating `self.mcp_client.user_id` on a shared client can cause cross-user reads/writes under concurrent requests. This is a data isolation risk.

🧵 Safer approach (serialize context switch as a stopgap)

```diff
+import asyncio
 ...
 class OpenMemoryMCPService(MemoryServiceBase):
     def __init__(...):
         ...
+        self._client_lock = asyncio.Lock()
     ...
-        # Update MCP client user context for this operation
-        original_user_id = self.mcp_client.user_id
-        self.mcp_client.user_id = user_id or self.user_id  # Use the actual Chronicle user's ID
-
-        try:
-            result = await self.mcp_client.get_memory(memory_id)
+        async with self._client_lock:
+            original_user_id = self.mcp_client.user_id
+            self.mcp_client.user_id = user_id or self.user_id
+            try:
+                result = await self.mcp_client.get_memory(memory_id)
+                ...
+            finally:
+                self.mcp_client.user_id = original_user_id
```

Longer term, consider passing `user_id` explicitly to MCPClient methods instead of mutating shared state.

backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py (1)
91-138: Use the resolved `app_id` in the create-memory payload.

You compute an `app_id` to handle duplicates (selecting the app with the most memories when names collide), but the payload still uses `self.client_name`, making the duplicate-safe selection ineffective. This can route memories to the wrong app when multiple apps share the same name.

🐛 Suggested fix (use app_id once resolved)

```diff
 payload = {
     "user_id": self.user_id,
     "text": text,
-    "app": self.client_name,
+    "app": app_id,
     "metadata": default_metadata,
     "infer": True,
 }
```

backends/advanced/webui/src/contexts/RecordingContext.tsx (1)
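A minimal sketch of the intent behind this fix, with hypothetical helper names: resolve the ambiguous app name to a concrete id first, then route the payload by that id rather than by the name:

```python
from typing import Any, Dict, List, Optional


def resolve_app_id(apps: List[Dict[str, Any]], client_name: str) -> Optional[str]:
    """Pick the id of the matching app; on duplicate names, prefer the
    app holding the most memories (mirroring the PR's selection rule)."""
    matches = [a for a in apps if a.get("name") == client_name]
    if not matches:
        return None
    return max(matches, key=lambda a: a.get("total_memories", 0))["id"]


def build_memory_payload(user_id: str, text: str, app_id: str,
                         metadata: Dict[str, Any]) -> Dict[str, Any]:
    return {
        "user_id": user_id,
        "text": text,
        "app": app_id,  # the resolved id, not the ambiguous name
        "metadata": metadata,
        "infer": True,
    }
```

Because the id is unique while names may collide, routing by `app_id` makes the tie-break in `resolve_app_id` actually take effect.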
63-173: Fix stale `isRecording` closure in mic and WebSocket event handlers.

The `track.onended` handler (lines 158-166) and `ws.onclose` handler (lines 232-248) capture `isRecording` at the time the handlers are registered. Since both handlers are registered during `startRecording()` while `isRecording` is still `false` (it is set to `true` only after the handlers attach), they retain the stale value in their closures. If either handler fires during an active recording session, the condition fails silently and prevents error handling.

Add state refs to track the current recording status and suppress intentional closes:

Proposed fix

```diff
 const chunkCountRef = useRef(0)
 const audioProcessingStartedRef = useRef(false)
+const isRecordingRef = useRef(false)
+const wsClosingRef = useRef(false)
+
+useEffect(() => {
+  isRecordingRef.current = isRecording
+}, [isRecording])

 // Cleanup function
 const cleanup = useCallback(() => {
   console.log('🧹 Cleaning up audio recording resources')
@@
   // Clean up WebSocket
   if (wsRef.current) {
+    wsClosingRef.current = true
     wsRef.current.close()
     wsRef.current = null
   }
@@
-  if (isRecording) {
+  if (isRecordingRef.current) {
     setError('Microphone disconnected or permission revoked')
     setCurrentStep('error')
     cleanup()
     setIsRecording(false)
   }
@@
 ws.onclose = (event) => {
   console.log('🔌 WebSocket disconnected:', event.code, event.reason)
   wsRef.current = null
@@
+  if (wsClosingRef.current) {
+    wsClosingRef.current = false
+    return
+  }
-  if (isRecording) {
+  if (isRecordingRef.current) {
     setError('WebSocket connection lost')
     setCurrentStep('error')
     cleanup()
     setIsRecording(false)
   }
```
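The underlying pitfall is not React-specific. In any language, a callback that captures a *value* at registration time keeps that value forever, while reading through a mutable reference at call time stays current. A quick Python analog of the `useRef`-based fix (all names here are illustrative):

```python
def make_value_handler(is_recording: bool):
    # Captures the value at registration time — the analog of the stale closure
    return lambda: is_recording


def make_ref_handler(state: dict):
    # Reads through a mutable container at call time — the analog of the ref fix
    return lambda: state["is_recording"]


state = {"is_recording": False}
stale = make_value_handler(state["is_recording"])  # registered before recording starts
live = make_ref_handler(state)

state["is_recording"] = True  # recording starts after the handlers were registered
```

Calling `stale()` still returns the pre-recording `False`, while `live()` reflects the update, which is exactly why the suggested fix routes reads through `isRecordingRef.current`.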
🤖 Fix all issues with AI agents
In
`@backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py`:
- Around line 140-145: The debug log currently prints the entire payload
(memory_logger.debug(f"Full payload: {payload}")) which can leak PII; update the
logging in mcp_client.py (where memory_logger is used) to create a redacted copy
of payload before logging by removing or replacing sensitive keys (e.g., "text",
"transcript", "email", "content", any nested user identifiers) and then log the
redacted payload or just keys/metadata and lengths instead of full values;
ensure the redaction happens right before the debug call that references payload
so the original payload sent to the server remains unchanged.
In
`@backends/advanced/src/advanced_omi_backend/services/memory/providers/openmemory_mcp.py`:
- Around line 401-408: The delete_all_user_memories implementation currently
sets self.mcp_client.user_id to self.user_id and calls delete_all_memories(),
which deletes all memories for that configured OpenMemory user; instead scope
deletion to only memories with metadata chronicle_user_id equal to the requested
user_id by using a filtered deletion flow: call the MCP list/fetch API (e.g.,
methods exposed on self.mcp_client such as list_memories or fetch_memories),
filter results where memory.metadata.get("chronicle_user_id") == user_id, then
delete only those memory IDs via the MCP delete API (or call delete on each
matching memory) rather than delete_all_memories(); ensure you still restore
original_user_id after the operation (use the existing original_user_id variable
and a finally block) and log the count of deleted memories referencing the
requested user_id.
🧹 Nitpick comments (3)
backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py (1)
480-512: Consider logging failures in connection probes.

The silent `except: continue` makes it hard to diagnose why all endpoints failed. A debug-level log would preserve visibility without being noisy.

backends/advanced/webui/src/components/layout/GlobalRecordingIndicator.tsx (2)
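One way to keep a probe loop quiet by default while preserving diagnosability is to log each failure at debug level and keep the reasons for the final error. `probe` and the endpoint list below are placeholders, not the client's real API:

```python
import logging

logger = logging.getLogger("mcp_client")


def find_working_endpoint(endpoints, probe):
    """Try each endpoint in order; record why each one failed instead of
    silently swallowing the exception."""
    failures = []
    for url in endpoints:
        try:
            probe(url)
            return url
        except Exception as exc:  # probes can fail in many ways
            logger.debug("Probe failed for %s: %s", url, exc)
            failures.append((url, str(exc)))
    raise ConnectionError(f"All endpoints failed: {failures}")
```

At the default log level nothing is printed, but enabling debug logging (or inspecting the raised `ConnectionError`) shows exactly which endpoint failed and why.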
47-62: Consider adding `aria-label` for better accessibility.

While `title` provides tooltip text, `aria-label` is more reliably announced by screen readers.

♿ Suggested accessibility improvement

```diff
 <button
   onClick={() => navigate('/live-record')}
   className="p-1.5 rounded hover:bg-red-100 dark:hover:bg-red-800/50 transition-colors text-red-600 dark:text-red-400"
   title="Go to Live Record"
+  aria-label="Go to Live Record"
 >
   <Radio className="h-4 w-4" />
 </button>
 {/* Stop button */}
 <button
   onClick={stopRecording}
   className="p-1.5 rounded bg-red-600 hover:bg-red-700 transition-colors text-white"
   title="Stop Recording"
+  aria-label="Stop Recording"
 >
   <Square className="h-4 w-4" />
 </button>
```
19-21: Consider respecting `prefers-reduced-motion` for the pulsing animation.

The pulsing `animate-ping` effect could be disorienting for users with vestibular motion sensitivity.

♿ Suggested motion-safe variant

```diff
 <div className="relative flex items-center">
-  <span className="absolute inline-flex h-3 w-3 rounded-full bg-red-400 opacity-75 animate-ping" />
+  <span className="absolute inline-flex h-3 w-3 rounded-full bg-red-400 opacity-75 animate-ping motion-reduce:animate-none" />
   <span className="relative inline-flex h-3 w-3 rounded-full bg-red-500" />
 </div>
```
```python
memory_logger.info(
    f"POSTing memory to {self.server_url}/api/v1/memories/ "
    f"(user_id={self.user_id}, text_len={len(text)}, metadata_keys={list(default_metadata.keys())})"
)
memory_logger.debug(f"Full payload: {payload}")
```
Avoid logging full payloads with transcript/email data.
`memory_logger.debug(f"Full payload: {payload}")` can leak PII (transcripts, emails). Please redact sensitive fields before logging.
🛡️ Suggested redacted logging
```diff
-memory_logger.debug(f"Full payload: {payload}")
+redacted_payload = {
+    **payload,
+    "text": f"<redacted:{len(text)} chars>",
+    "metadata": {"keys": list(default_metadata.keys())},
+}
+memory_logger.debug(f"Payload (redacted): {redacted_payload}")
```

📝 Committable suggestion
+ memory_logger.debug(f"Payload (redacted): {redacted_payload}")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
memory_logger.info(
    f"POSTing memory to {self.server_url}/api/v1/memories/ "
    f"(user_id={self.user_id}, text_len={len(text)}, metadata_keys={list(default_metadata.keys())})"
)
redacted_payload = {
    **payload,
    "text": f"<redacted:{len(text)} chars>",
    "metadata": {"keys": list(default_metadata.keys())},
}
memory_logger.debug(f"Payload (redacted): {redacted_payload}")
```
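The redaction idea generalizes to a small helper that strips known-sensitive keys before logging. The key list below is an assumption to adapt per service, not a definitive inventory of PII fields:

```python
SENSITIVE_KEYS = {"text", "transcript", "email", "content"}  # assumed PII fields


def redact_payload(payload: dict) -> dict:
    """Return a log-safe copy: sensitive string values become length markers,
    metadata keeps only its key names. The original payload is untouched."""
    redacted = {}
    for key, value in payload.items():
        if key in SENSITIVE_KEYS and isinstance(value, str):
            redacted[key] = f"<redacted:{len(value)} chars>"
        elif key == "metadata" and isinstance(value, dict):
            redacted[key] = {"keys": sorted(value)}  # keep keys, drop values
        else:
            redacted[key] = value
    return redacted
```

Building a fresh dict (rather than mutating in place) is what guarantees the payload actually sent to the server remains unchanged, as the review asks.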
```python
# Update MCP client user context for this operation
original_user_id = self.mcp_client.user_id
self.mcp_client.user_id = self.user_id  # Use configured user ID

try:
    count = await self.mcp_client.delete_all_memories()
    memory_logger.info(f"🗑️ Deleted {count} memories for user {user_id} via OpenMemory MCP")
    return count
```
`delete_all_user_memories` ignores the requested user.

It deletes all memories for the configured OpenMemory user, which can wipe other Chronicle users’ data in multi-user setups. Please scope deletion by `chronicle_user_id` metadata.
✅ Suggested approach (filter then delete)
```diff
-        # Update MCP client user context for this operation
-        original_user_id = self.mcp_client.user_id
-        self.mcp_client.user_id = self.user_id  # Use configured user ID
-
-        try:
-            count = await self.mcp_client.delete_all_memories()
+        try:
+            # Fetch + filter to the Chronicle user, then delete only those IDs
+            user_memories = await self.get_all_memories(user_id=user_id, limit=10_000)
+            memory_ids = [m.id for m in user_memories]
+            count = 0
+            for memory_id in memory_ids:
+                if await self.mcp_client.delete_memory(memory_id):
+                    count += 1
             memory_logger.info(f"🗑️ Deleted {count} memories for user {user_id} via OpenMemory MCP")
             return count
```

🧰 Tools
🪛 Ruff (0.14.14)
408-408: Consider moving this statement to an else block
(TRY300)
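The filter-then-delete scoping the review asks for can be sketched synchronously; `delete_memory` here is an injected callable standing in for whatever delete API the MCP client actually exposes:

```python
from typing import Callable, Dict, List


def delete_user_memories(memories: List[Dict], chronicle_user_id: str,
                         delete_memory: Callable[[str], bool]) -> int:
    """Delete only memories tagged with the requested Chronicle user,
    leaving other users' memories under the same OpenMemory account intact."""
    count = 0
    for memory in memories:
        if memory.get("metadata", {}).get("chronicle_user_id") == chronicle_user_id:
            if delete_memory(memory["id"]):
                count += 1
    return count
```

The returned count covers only deletions that the backend confirmed, which matches the review's request to log how many memories were removed for the requested user.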
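The Ruff TRY300 note above refers to a general pattern: a `return` that cannot raise belongs in the `try`'s `else` block, so the `try` body wraps only the statements that can actually fail. A minimal, unrelated illustration:

```python
def parse_port(raw: str) -> int:
    """Return the parsed port, or -1 on invalid input."""
    try:
        port = int(raw)
    except ValueError:
        return -1
    else:
        # Only reached when int() did not raise; keeps the try body minimal
        return port
```

Moving the `return` out of the `try` makes it obvious which statement the `except` is guarding and avoids accidentally catching exceptions from unrelated code.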
Summary by CodeRabbit
New Features
Improvements
Documentation
✏️ Tip: You can customize this high-level summary in your review settings.