📝 Walkthrough

This pull request adds screen-share audio mixing capability to the React Native SDK across Android and iOS platforms. It introduces native audio capture implementations, TypeScript hooks for lifecycle management, a manager module for cross-platform coordination, and updated UI components. The sample app demonstrates in-app screen capture with audio inclusion.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant Button as ScreenShareToggleButton
    participant Hook as useScreenShareButton
    participant Manager as ScreenShareAudioMixingManager
    participant NativeBridge as NativeModule
    participant Audio as AudioCapture/Mixer

    User->>Button: Press screen share
    Button->>Hook: screenShareOptions { type: 'inApp', includeAudio: true }
    alt iOS + inApp + includeAudio
        Hook->>Manager: startInAppScreenCapture(includeAudio)
        Manager->>NativeBridge: startInAppScreenCapture()
        NativeBridge->>Audio: Initialize InAppScreenCapturer
    end
    Hook->>Hook: Enable screen share
    alt Audio mixing enabled
        Hook->>Manager: startScreenShareAudioMixing()
        Manager->>NativeBridge: startScreenShareAudioMixing()
        NativeBridge->>Audio: Start AudioMixer<br/>Configure audio pipeline
    end
    Audio->>Audio: Capture & mix screen audio
    User->>Button: Stop screen share
    Hook->>Manager: stopScreenShareAudioMixing()
    Manager->>NativeBridge: stopScreenShareAudioMixing()
    NativeBridge->>Audio: Stop mixer & restore settings
    alt iOS + inApp
        Hook->>Manager: stopInAppScreenCapture()
        Manager->>NativeBridge: stopInAppScreenCapture()
    end
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🧹 Nitpick comments (3)
packages/react-native-sdk/src/hooks/useScreenShareButton.ts (1)
**156-167: Inconsistent handler invocation pattern.**

In the iOS in-app branch (line 163), the handler is called directly as `onScreenShareStartedHandler?.()`, while in the broadcast mode event listener (line 120), it's called via the ref as `onScreenShareStartedHandlerRef.current?.()`. The ref pattern was intentionally used to avoid stale closure issues in the effect. Consider using the ref consistently.

💡 Suggested fix for consistency

```diff
   await screenShareAudioMixingManager.startInAppScreenCapture(
     includeAudio,
   );
   await call?.screenShare.enable();
-  onScreenShareStartedHandler?.();
+  onScreenShareStartedHandlerRef.current?.();
 } catch (error) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-native-sdk/src/hooks/useScreenShareButton.ts` around lines 156 - 167, The iOS in-app branch in useScreenShareButton calls the start handler directly (onScreenShareStartedHandler?.()), which is inconsistent with the broadcast branch that uses the ref to avoid stale closures; change the direct invocation to use the ref (onScreenShareStartedHandlerRef.current?.()) after successful start so both branches use the same onScreenShareStartedHandlerRef pattern used elsewhere in useScreenShareButton.

packages/react-native-sdk/ios/StreamVideoReactNative.m (1)
**688-727: Consider handling the case when `capturer` is nil.**

When `activeInAppScreenCapturer` is `nil`, the method still starts mixing and bypasses voice processing but never sets up the `audioBufferHandler`. This could leave the system in a partially configured state where mixing is active but no audio data flows.

💡 Suggested improvement

```diff
 // Wire audio buffer handler on the active capturer → mixer.enqueue
 InAppScreenCapturer *capturer = options.activeInAppScreenCapturer;
 if (capturer) {
   capturer.audioBufferHandler = ^(CMSampleBufferRef sampleBuffer) {
     ScreenShareAudioMixer *currentMixer = [WebRTCModuleOptions sharedInstance].screenShareAudioMixer;
     if (currentMixer) {
       [currentMixer enqueue:sampleBuffer];
     }
   };
+} else {
+  // No active capturer — log warning but continue; audio may start flowing later
+  NSLog(@"[StreamVideoReactNative] startScreenShareAudioMixing: No active capturer available");
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-native-sdk/ios/StreamVideoReactNative.m` around lines 688 - 727, The method startScreenShareAudioMixing currently starts mixing and bypasses voice processing before checking activeInAppScreenCapturer, which can leave mixing active with no audio handler; modify startScreenShareAudioMixing (and use WebRTCModuleOptions, InAppScreenCapturer, startMixing, setVoiceProcessingBypassed, audioBufferHandler) to check options.activeInAppScreenCapturer before enabling mixing/VP bypass and, if nil, do not start mixing or bypass voice processing and instead reject the promise (or return an error) with a clear code/message; alternatively, if you must start mixing first, add cleanup logic to stopMixing and restore _vpBypassedBeforeMixing (via audioDeviceModule.setVoiceProcessingBypassed:) before rejecting so the system is not left partially configured.

packages/react-native-sdk/android/src/main/java/com/streamvideo/reactnative/StreamVideoReactNativeModule.kt (1)
**495-545: Consider checking if `ScreenAudioCapture.start()` actually succeeded.**

After `ScreenAudioCapture(mediaProjection).also { it.start() }` on line 530, the `audioRecord` inside may still be `null` if initialization failed (as handled in `ScreenAudioCapture.start()`). The `screenAudioBytesProvider` callback on line 536 will then always return `null`, but the method resolves successfully, potentially misleading the caller.

💡 Suggested improvement to verify capture started

```diff
-        screenAudioCapture = ScreenAudioCapture(mediaProjection).also { it.start() }
+        val capture = ScreenAudioCapture(mediaProjection)
+        capture.start()
+
+        // Verify capture actually started (audioRecord initialized successfully)
+        if (capture.getScreenAudioBytes(0) == null) {
+            // This is a lightweight check - getScreenAudioBytes returns null when audioRecord is null
+            Log.w(NAME, "Screen audio capture may not have initialized correctly")
+        }
+        screenAudioCapture = capture
```

Alternatively, expose an `isActive` property on `ScreenAudioCapture` to check initialization state.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-native-sdk/android/src/main/java/com/streamvideo/reactnative/StreamVideoReactNativeModule.kt` around lines 495 - 545, The startScreenShareAudioMixing method currently assumes ScreenAudioCapture.start() succeeded; change the flow to verify the capture actually started before resolving and setting WebRTCModuleOptions.screenAudioBytesProvider: after creating ScreenAudioCapture (symbol: ScreenAudioCapture) call start() and check a success indicator (either a boolean return from start(), an exposed property like isActive on ScreenAudioCapture, or by verifying internal audioRecord != null on the instance referenced by screenAudioCapture) and if initialization failed reject the promise with a descriptive error and do not assign screenAudioBytesProvider; only assign the provider, log success, and resolve the promise when the capture is confirmed active (references: function startScreenShareAudioMixing, field screenAudioCapture, method ScreenAudioCapture.start(), and WebRTCModuleOptions.screenAudioBytesProvider).
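The verify-before-resolve flow this nitpick asks for can be sketched in plain TypeScript. This is a hedged illustration only: `FakeScreenAudioCapture` and `startScreenShareAudioMixing` below are hypothetical stand-ins for the Kotlin `ScreenAudioCapture` and the native module method, assuming an `isActive`-style flag as suggested above.

```typescript
// Hypothetical sketch (not the SDK's Kotlin code) of "verify before resolve":
// only report success once the capture confirms it initialized.
class FakeScreenAudioCapture {
  private active = false;

  // start() reports whether the underlying recorder initialized
  start(initOk: boolean): boolean {
    this.active = initOk;
    return this.active;
  }

  get isActive(): boolean {
    return this.active;
  }
}

async function startScreenShareAudioMixing(
  capture: FakeScreenAudioCapture,
  initOk: boolean,
): Promise<string> {
  capture.start(initOk);
  if (!capture.isActive) {
    // Reject instead of resolving with a provider that only yields null
    throw new Error('Screen audio capture failed to initialize');
  }
  return 'mixing-started';
}
```

With this shape, a caller surfaces the failure to JS immediately instead of silently streaming no audio.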
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4af24a58-88d7-47a5-a055-a48049448684
📒 Files selected for processing (10)
- packages/react-native-sdk/android/src/main/java/com/streamvideo/reactnative/StreamVideoReactNativeModule.kt
- packages/react-native-sdk/android/src/main/java/com/streamvideo/reactnative/screenshare/ScreenAudioCapture.kt
- packages/react-native-sdk/ios/StreamVideoReactNative.m
- packages/react-native-sdk/src/components/Call/CallControls/ScreenShareToggleButton.tsx
- packages/react-native-sdk/src/hooks/index.ts
- packages/react-native-sdk/src/hooks/useScreenShareAudioMixing.ts
- packages/react-native-sdk/src/hooks/useScreenShareButton.ts
- packages/react-native-sdk/src/modules/ScreenShareAudioManager.ts
- packages/react-native-sdk/src/providers/StreamCall/index.tsx
- sample-apps/react-native/dogfood/src/components/CallControlls/BottomControls.tsx
```ts
  // Start/stop audio mixing based on screen share status and audio preference
  useEffect(() => {
    if (isScreenSharing && audioEnabled) {
      startMixing();
    } else {
      stopMixing();
    }
  }, [isScreenSharing, audioEnabled, startMixing, stopMixing]);

  useEffect(() => {
    return () => {
      if (isMixingActiveRef.current) {
        screenShareAudioMixingManager
          .stopScreenShareAudioMixing()
          .catch(() => {});
        isMixingActiveRef.current = false;
        if (ncWasEnabledRef.current) {
          restoreNoiseCancellation().catch(() => {});
          ncWasEnabledRef.current = false;
        }
      }
    };
  }, []);
```
**Potential race condition between effects.**

The effect on lines 107-114 and the cleanup effect on lines 116-129 both operate on `isMixingActiveRef`. If the component unmounts while `startMixing()` is in progress (awaiting the async call), the cleanup effect runs immediately, but `isMixingActiveRef.current` is only set to `true` after the `await` on line 81. This could result in the cleanup not calling `stopScreenShareAudioMixing()`.
💡 Suggested improvement using AbortController pattern
Consider setting isMixingActiveRef.current = true before the await, or using an AbortController to track component lifecycle:
```diff
 const startMixing = useCallback(async () => {
   if (isMixingActiveRef.current) return;
+  isMixingActiveRef.current = true; // Set early to ensure cleanup runs
   try {
     // Disable NC before starting mixing so screen audio is not filtered
     ncWasEnabledRef.current = await disableNoiseCancellation();
     logger.info('Starting screen share audio mixing');
     await screenShareAudioMixingManager.startScreenShareAudioMixing();
-    isMixingActiveRef.current = true;
   } catch (error) {
     logger.warn('Failed to start screen share audio mixing', error);
+    isMixingActiveRef.current = false; // Reset on failure
     if (ncWasEnabledRef.current) {
       restoreNoiseCancellation().catch(() => {});
       ncWasEnabledRef.current = false;
     }
   }
 }, []);
```

As per coding guidelines: "Use AbortController to cancel network requests and media operations on component unmount."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/react-native-sdk/src/hooks/useScreenShareAudioMixing.ts` around
lines 107 - 129, The cleanup effect can miss stopping in-progress mixing because
isMixingActiveRef.current is set after an await in startMixing; ensure the
component can cancel an in-flight startMixing on unmount by adding an
AbortController (or set isMixingActiveRef.current = true before awaiting) inside
startMixing and/or the start/stop useEffect, pass the controller/signal to any
async media calls, and in the cleanup use the controller.abort() then call
screenShareAudioMixingManager.stopScreenShareAudioMixing() and
restoreNoiseCancellation() if needed; update references to startMixing,
isMixingActiveRef, ncWasEnabledRef,
screenShareAudioMixingManager.stopScreenShareAudioMixing, and
restoreNoiseCancellation to use the abort signal and ensure deterministic
cleanup.
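The AbortController guideline cited in this comment can be illustrated with a small standalone sketch. Everything here is hypothetical: `startMixingWithSignal` and `startNativeMixing` are made-up names standing in for the hook's start path, not the SDK's actual API.

```typescript
// Illustrative sketch of the cited guideline: track the component lifecycle
// with an AbortController so an in-flight async start can be detected as
// cancelled once the awaited call resolves.
async function startMixingWithSignal(
  startNativeMixing: () => Promise<void>,
  signal: AbortSignal,
): Promise<'started' | 'aborted'> {
  await startNativeMixing();
  if (signal.aborted) {
    // The owner aborted (e.g. component unmounted) while we were awaiting:
    // report it so the caller can immediately stop/undo the native mixing.
    return 'aborted';
  }
  return 'started';
}
```

In a hook, the cleanup function would call `controller.abort()` on unmount, making the cancellation deterministic regardless of when the flag was set.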
💡 Overview
This PR contains the implementation for screen share audio capturing and mixing.
For Android, we intercept the WebRTC audio buffer and mix it with audio data captured by AudioRecord and converted to PCM 16-bit format. (Note: investigate the stereo output case.)
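As a rough illustration of that mixing step, here is a hedged TypeScript sketch of summing two mono PCM 16-bit frames with clamping; `mixPcm16` is a hypothetical helper for explanation only, not the SDK's native mixer.

```typescript
// Illustrative sketch: mix two aligned mono PCM 16-bit frames by summing
// samples and clamping to the signed 16-bit range to avoid wraparound.
function mixPcm16(mic: Int16Array, screen: Int16Array): Int16Array {
  const length = Math.min(mic.length, screen.length);
  const out = new Int16Array(length);
  for (let i = 0; i < length; i++) {
    const sum = mic[i] + screen[i];
    // Clamp instead of letting Int16Array silently wrap on overflow
    out[i] = Math.max(-32768, Math.min(32767, sum));
  }
  return out;
}
```

A real mixer would also need to handle resampling, channel count, and buffer alignment, which this sketch deliberately skips.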
For iOS, we introduced a new screen share mode: in-app screen sharing, which limits capture to the app's own content. For the in-app capture mode we modify the audio engine graph: we add a new mixer node that takes two nodes as inputs (mic and player) and uses the original mic output channel as the final destination. The player node is used to enqueue captured data from RPScreenRecorder.
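The graph rewiring described above can be pictured with a plain TypeScript sketch. This is a conceptual model only, not the AVAudioEngine API; `GraphNode` and the node names are made up for illustration.

```typescript
// Conceptual model of the rewired audio graph: mic and player both feed a
// mixer, and the mixer feeds the channel the mic originally fed directly.
interface GraphNode {
  name: string;
  outputs: GraphNode[];
}

function connect(from: GraphNode, to: GraphNode): void {
  from.outputs.push(to);
}

const mic: GraphNode = { name: 'mic', outputs: [] };
const player: GraphNode = { name: 'player', outputs: [] }; // enqueues screen-capture audio
const mixer: GraphNode = { name: 'mixer', outputs: [] };
const micOutputChannel: GraphNode = { name: 'micOutputChannel', outputs: [] };

// Before the change the graph was mic -> micOutputChannel; after:
connect(mic, mixer);
connect(player, mixer);
connect(mixer, micOutputChannel);
```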
📝 Implementation notes
Corresponding PR for webrtc package: GetStream/react-native-webrtc#28
🎫 Ticket: https://linear.app/stream/issue/RN-371/screen-share-audio-capture
📑 Docs: https://github.com/GetStream/docs-content/pull/