The log throttling layer appears to receive all events regardless of their log level, even when the `EnvFilter` has been configured to filter them out.
This may be a bug in tracing-subscriber or in tracing-throttle; I'm not sure which. The effect is that the throttle summaries emit entries like the following even though only INFO-level logs should be visible:
2026-01-17T16:55:37.850862Z WARN tokio-runtime-worker tracing_throttle::infrastructure::layer: 546: Suppressed 1 times over 0.00s: [TRACE] log: sending server hello Message { version: TLSv1_2, payload: Handshake { parsed: HandshakeMessagePayload(ServerHello(ServerHelloPayload { legacy_version: TLSv1_2, random: c959ece5098af9a035c5bcadca591be5a2ea00c04723af8902eb35bc2737f6ca, session_id: d64fd447434940725ef00d1fb4cc411d6307d39dc433898d666943662ff9150b, cipher_suite: TLS13_AES_128_GCM_SHA256, compression_method: Null, extensions: ServerExtensions { key_share: KeyShareEntry { group: X25519, payload: 45edc3be8543a99e3df728c4c60d6e22398bccfa68bdafb25278408576883e13 }, preshared_key: 0, selected_version: TLSv1_3, unknown_extensions: {}, .. } })), encoded: 0200007c0303c959ece5098af9a035c5bcadca591be5a2ea00c04723af8902eb35bc2737f6ca20d64fd447434940725ef00d1fb4cc411d6307d39dc433898d666943662ff9150b130100003400330024001d002045edc3be8543a99e3df728c4c60d6e22398bccfa68bdafb25278408576883e13002900020000002b00020304 } } signature=bb1be88845b5d5fe count=1
2026-01-17T16:55:37.850871Z WARN tokio-runtime-worker tracing_throttle::infrastructure::layer: 546: Suppressed 1 times over 0.00s: [INFO] endpoint_libs::libs::ws::server: Accepted stream from 102.182.113.69:59614 signature=a413d0dd6602ae78 count=1
2026-01-17T16:55:37.850876Z WARN tokio-runtime-worker tracing_throttle::infrastructure::layer: 546: Suppressed 1 times over 0.00s: [INFO] endpoint_libs::libs::ws::server: Accepted stream from 102.182.113.69:59412 signature=23da750c553780d0 count=1
2026-01-17T16:55:37.850881Z WARN tokio-runtime-worker tracing_throttle::infrastructure::layer: 546: Suppressed 1 times over 0.00s: [TRACE] log: Sending frame: Frame { header: FrameHeader { is_final: true, rsv1: false, rsv2: false, rsv3: false, opcode: Data(Text), mask: None }, payload: b"{\"type\":\"Immediate\",\"method\":21205,\"seq\":24,\"params\":{\"data\":[{\"exchange\":\"DCB-perps\",\"balance\":0.0}]}}" } signature=9519f715c8255463 count=1
2026-01-17T16:55:37.850889Z WARN tokio-runtime-worker tracing_throttle::infrastructure::layer: 546: Suppressed 1 times over 0.00s: [DEBUG] endpoint_libs::libs::ws::server: New connection handshaken WsConnection { connection_id: 1313670054, user_id: 0, roles: RwLock { data: [] }, address: 102.182.113.69:59612, log_id: 1768668846162854 } signature=f83db98498152732 count=1
2026-01-17T16:55:37.850897Z WARN tokio-runtime-worker tracing_throttle::infrastructure::layer: 546: Suppressed 1 times over 0.00s: [TRACE] log: sending server hello Message { version: TLSv1_2, payload: Handshake { parsed: HandshakeMessagePayload(ServerHello(ServerHelloPayload { legacy_version: TLSv1_2, random: b1aca03baabacea7e414409e43106014c207598b78c6a40f882f06e803686e79, session_id: 5d329264940bb039cd4ddfea9983611c5d6c54bdced96b9e37ae71017b78cadb, cipher_suite: TLS13_AES_128_GCM_SHA256, compression_method: Null, extensions: ServerExtensions { key_share: KeyShareEntry { group: X25519, payload: a53fe86ef21fbecad2f6511645ea33ca2806dd77cf09734c00d02b2099997634 }, selected_version: TLSv1_3, unknown_extensions: {}, .. } })), encoded: 020000760303b1aca03baabacea7e414409e43106014c207598b78c6a40f882f06e803686e79205d329264940bb039cd4ddfea9983611c5d6c54bdced96b9e37ae71017b78cadb130100002e00330024001d0020a53fe86ef21fbecad2f6511645ea33ca2806dd77cf09734c00d02b2099997634002b00020304 } } signature=9a16fb4a8a12d0a5 count=1
2026-01-17T16:55:37.850909Z WARN tokio-runtime-worker tracing_throttle::infrastructure::layer: 546: Suppressed 1 times over 0.00s: [TRACE] log: sending server hello Message { version: TLSv1_2, payload: Handshake { parsed: HandshakeMessagePayload(ServerHello(ServerHelloPayload { legacy_version: TLSv1_2, random: 92cd8703fa3e34e15cb12bc0da5bf43035a4715cd5be22d78366c9d1a27d9d94, session_id: 4deb5bdea0022f0cd3d6dd485660433361facb18f7c9c7b9e9c6659577d55d80, cipher_suite: TLS13_AES_128_GCM_SHA256, compression_method: Null, extensions: ServerExtensions { key_share: KeyShareEntry { group: X25519, payload: 2431a1ac4544d06bdbe5c8f0cc72fbb9693df297595980a2b101fa455b933208 }, selected_version: TLSv1_3, unknown_extensions: {}, .. } })), encoded: 02000076030392cd8703fa3e34e15cb12bc0da5bf43035a4715cd5be22d78366c9d1a27d9d94204deb5bdea0022f0cd3d6dd485660433361facb18f7c9c7b9e9c6659577d55d80130100002e00330024001d00202431a1ac4544d06bdbe5c8f0cc72fbb9693df297595980a2b101fa455b933208002b00020304 } } signature=7bff288f5b371b14 count=1
2026-01-17T16:55:37.850919Z WARN tokio-runtime-worker tracing_throttle::infrastructure::layer: 546: Suppressed 1 times over 0.00s: [TRACE] log: sending new ticket HandshakeMessagePayload(NewSessionTicketTls13(NewSessionTicketPayloadTls13 { lifetime: 86400, age_add: 286900593, nonce: 3eb9aab4a382af3c0eb50e58589c698520b3f4c73bd1a215d1a4695261941478, ticket: a163d43b37eed3f27b8ed3a63cd5dbebd24bfbf58946fbfaa79a6f0984b9daa1, extensions: NewSessionTicketExtensions { .. } })) (stateless: false) signature=f89858806e747697 count=1
2026-01-17T16:55:37.850926Z WARN tokio-runtime-worker tracing_throttle::infrastructure::layer: 546: Suppressed 1 times over 0.00s: [DEBUG] log: decided upon suite TLS13_AES_128_GCM_SHA256 signature=d236d56542280e06 count=1
2026-01-17T16:55:37.850932Z WARN tokio-runtime-worker tracing_throttle::infrastructure::layer: 546: Suppressed 1 times over 0.00s:
I suggest disabling the summaries for now, or possibly disabling the throttling entirely, until this is fixed.
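As a possible alternative to disabling throttling outright, the filtering could be scoped to each layer explicitly. With tracing-subscriber 0.3, a layer only reliably sees pre-filtered events when a filter is attached to it directly via `Layer::with_filter`; relying on a sibling `EnvFilter` layer to filter globally is a known footgun. A minimal sketch (the `tracing_throttle::layer()` constructor below is a guess at its API, shown commented out):

```rust
// Sketch: per-layer filtering with tracing-subscriber 0.3.
// Assumption: the throttle layer currently sits in the stack without its own
// filter, so it observes every event. Attaching an EnvFilter to it directly
// with `.with_filter(...)` should restrict what it receives.
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt, EnvFilter, Layer};

fn main() {
    // EnvFilter is not Clone, so build one instance per layer.
    let fmt_filter = EnvFilter::try_from_default_env()
        .unwrap_or_else(|_| EnvFilter::new("info"));
    let _throttle_filter = EnvFilter::try_from_default_env()
        .unwrap_or_else(|_| EnvFilter::new("info"));

    tracing_subscriber::registry()
        .with(tracing_subscriber::fmt::layer().with_filter(fmt_filter))
        // Hypothetical: give the throttle layer its own copy of the filter.
        // .with(tracing_throttle::layer().with_filter(_throttle_filter))
        .init();
}
```

This is only a workaround sketch; if tracing-throttle wraps its own subscriber internally, the fix would have to land there instead.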