Ignore engine closed events for data track packets which are in flight / undelivered on engine close #1904
Conversation
🦋 Changeset detected. Latest commit: 6b35263. The changes in this PR will be included in the next version bump.
I have noticed, running the data track end-to-end test, that "engine closed" gets thrown fairly often. I think the move away from the while/sleep loop in `waitForBufferStatusLow` (which wasn't in my data track subscription updates PR) has newly resulted in more `waitForBufferStatusLow` calls being in flight at once, so there's a much higher likelihood that the data channel won't be drained upon engine close. To fix this, I've added a param to `waitForBufferStatusLow` that allows ignoring engine closed events. I'm not sure this is the best thing to do, but it fixes the tests, and I think it matches the ethos of "send packets as fast as possible" and doesn't introduce any sort of explicit locking.
Force-pushed from 6c03d0d to 0bb76f3
```diff
 switch (bufferStatusLowBehavior) {
   case 'wait':
-    await this.waitForBufferStatusLow(kind);
+    await this.waitForBufferStatusLow(kind, 'ignore');
```
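Roughly, the new param changes what happens to in-flight waiters when the engine closes. A minimal sketch, assuming a simplified stand-in `Engine` with hand-rolled waiter lists (the real SDK method also takes a `kind` argument and its internals differ):

```typescript
type EngineClosedBehavior = 'throw' | 'ignore';

// Illustrative stand-in for the engine's buffer wait logic,
// not the actual SDK implementation.
class Engine {
  private bufferLowWaiters: Array<() => void> = [];
  private closeListeners: Array<() => void> = [];

  waitForBufferStatusLow(behavior: EngineClosedBehavior = 'throw'): Promise<void> {
    return new Promise<void>((resolve, reject) => {
      this.bufferLowWaiters.push(resolve);
      if (behavior === 'throw') {
        // default: an engine close while waiting surfaces as an error
        this.closeListeners.push(() => reject(new Error('engine closed')));
      } else {
        // 'ignore': an engine close simply resolves the wait, so callers
        // queued behind an undrained buffer don't all reject at teardown
        this.closeListeners.push(resolve);
      }
    });
  }

  emitBufferLow(): void {
    this.bufferLowWaiters.splice(0).forEach((resolve) => resolve());
  }

  close(): void {
    this.closeListeners.splice(0).forEach((listener) => listener());
  }
}
```

With `'ignore'`, many concurrent waiters can safely be pending when the engine shuts down; with the default, each of them rejects.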
Instead of continuing here, wouldn't it make more sense to try/catch and early return on an error?
Yes, I actually had that in an earlier implementation before switching to the extra param on waitForBufferStatusLow, mostly because it was a little cumbersome to catch one variant of one error. But I don't have a strong opinion; I could go back to doing it that way.
I would still expect dc.send further down to throw if the engine is closed, right?
so either we check for engine closed further down as well for an early return, or we do it here with the try/catch
It's a good point, but I can say that the code as written here resulted in the data tracks test passing locally when I ran it. And on MDN at least, `RTCDataChannel.send` doesn't seem to mention throwing an error when the data channel is closed, only when `readyState` is `connecting`.
That being said, if we decide to go forward with this general approach, I'm fine with moving back over to the try/catch version.
hm, maybe that's an omission there; here it states that sending isn't possible anymore after transitioning to closing. I'm not 100% sure whether it would throw an error, though.
Cool, then let's do the try/catch version
I have noticed, running the data track test, that "engine closed" gets thrown fairly often at the end of the test, AFTER the actual test case passes. I had seemingly fixed this in #1892, but after merging this on top of main, it turns out #1877 changed the semantics of how this worked and caused it to break subtly in a different way. It looks like the same error, but it's now being triggered for a different reason.

What I think is happening now: the move away from the while/sleep loop in `waitForBufferStatusLow` in #1877 (which wasn't in my data track subscription updates PR) has newly resulted in more `waitForBufferStatusLow` calls being in flight at once, so there's now a much higher likelihood that the data channel won't be fully drained upon engine close. Before, the `sleep(10)` inadvertently throttled this.

To fix this, I've added a param to `waitForBufferStatusLow` to allow ignoring engine closed events. I'm not sure this is the best thing to do, but it fixes the tests. And I think it matches the ethos of "send packets as fast as possible" / doesn't introduce any sort of explicit locking: if packets aren't delivered when the engine closes, there isn't really a good reason to throw an error, given the data being sent is lossy anyway.

There's also an argument that the better thing to do here is to make `tryPush` return a promise that resolves once the whole pipeline completes (including all packets being enqueued into the data channel, and then a subsequent buffer low event occurring). But the downside is that you're then blocking extra time on each `tryPush` call, which lengthens the amount of time each frame takes to send, which is antithetical to the whole data track "send data as fast as possible" / lossy delivery notion.
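To illustrate why the #1877 change matters, here are minimal sketches of the two wait strategies described above. The names and the `BufferedChannel` shape are illustrative stand-ins modeled loosely on `RTCDataChannel`, not the SDK's actual internals.

```typescript
interface BufferedChannel {
  bufferedAmount: number;
  onbufferedamountlow: (() => void) | null;
}

// Old approach: poll with a sleep. The sleep(10) inadvertently throttled
// how quickly new waiters could queue up behind an undrained buffer.
async function waitPolling(dc: BufferedChannel, threshold: number): Promise<void> {
  while (dc.bufferedAmount > threshold) {
    await new Promise((r) => setTimeout(r, 10));
  }
}

// New approach: resolve on the buffered-amount-low event. Waiters resolve
// as fast as events fire, so many more can be in flight at once, and the
// buffer is more likely to still hold packets when the engine closes.
function waitOnEvent(dc: BufferedChannel, threshold: number): Promise<void> {
  if (dc.bufferedAmount <= threshold) return Promise.resolve();
  return new Promise<void>((resolve) => {
    dc.onbufferedamountlow = () => {
      dc.onbufferedamountlow = null;
      resolve();
    };
  });
}
```

The event-driven version is strictly faster, which is exactly what exposes the undrained-buffer race at engine close that this PR works around.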