Conversation

@alltheseas (Owner) commented Jan 16, 2026

Summary

Implements Phase 2 of bugstr: Content Hash Key (CHK) chunking for large crash reports that exceed the 50KB direct transport limit.

Key Features

  • CHK Chunking: Large payloads are split into 48KB chunks, each encrypted with an HKDF-derived key (see the sketch after this list)
  • Cross-relay Aggregation: Receiver fetches chunks from multiple relays in parallel
  • Relay Rate Limiting: 7500ms delay between posts to the same relay, respecting relay policies
  • Publish Verification: Confirms each chunk's publication after a 500ms delay, retrying on another relay if verification fails
  • Relay Hints: Manifest includes relay hints for efficient chunk fetching
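
A minimal sketch of the per-chunk key derivation and root-hash binding, assuming SHA-256 content hashes expanded via HKDF-SHA256 (the exact salt/info parameters are defined by hashtree-core, not here):

```rust
use hkdf::Hkdf;
use sha2::{Digest, Sha256};

const CHUNK_SIZE: usize = 48 * 1024; // 48KB, leaving headroom under the 50KB direct limit

/// Split a payload, derive a per-chunk key from each chunk's content hash,
/// and bind the whole payload with a root hash over the ordered chunk keys.
fn sketch_chunk_keys(data: &[u8]) -> ([u8; 32], Vec<[u8; 32]>) {
    let mut keys: Vec<[u8; 32]> = Vec::new();
    for chunk in data.chunks(CHUNK_SIZE) {
        // Content hash of the plaintext chunk: identical content yields an
        // identical key (the "convergent" property of CHK)
        let hash: [u8; 32] = Sha256::digest(chunk).into();
        // HKDF-SHA256 expands the hash into the AES key; the `None` salt and
        // "chk" info string are assumptions for this sketch
        let hk = Hkdf::<Sha256>::new(None, &hash);
        let mut key = [0u8; 32];
        hk.expand(b"chk", &mut key).expect("32 bytes is a valid HKDF output");
        keys.push(key);
    }
    // Root hash binds the ordered set of chunk keys for whole-payload integrity
    let mut root = Sha256::new();
    for key in &keys {
        root.update(key);
    }
    (root.finalize().into(), keys)
}
```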

SDK Support

All SDKs updated with chunking support:

  • Rust (reference implementation)
  • Dart/Flutter
  • Android/Kotlin
  • React Native
  • Go
  • Python
  • Electron

Critical Fix: CHK Encryption Compatibility

Fixed a critical bug where all SDKs used incompatible encryption (AES-256-CBC) instead of hashtree-core's HKDF + AES-256-GCM. Added cross-SDK test vectors to prevent regression.
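
To make the compatible primitive concrete, here is a hedged sketch using the `hkdf` and `aes-gcm` crates. The all-zero nonce is safe only because a CHK key is derived from its plaintext and never reused with different content; the salt/info values below are placeholders, not hashtree-core's canonical parameters:

```rust
use aes_gcm::aead::{Aead, KeyInit};
use aes_gcm::{Aes256Gcm, Nonce};
use hkdf::Hkdf;
use sha2::{Digest, Sha256};

/// CHK encrypt: key = HKDF-SHA256(SHA-256(plaintext)), AES-256-GCM, zero nonce.
/// Returns the ciphertext plus the content hash the receiver re-derives the key from.
fn chk_encrypt_sketch(plaintext: &[u8]) -> Result<(Vec<u8>, [u8; 32]), aes_gcm::Error> {
    let content_hash: [u8; 32] = Sha256::digest(plaintext).into();

    // Derive the AES key from the content hash (salt/info are illustrative)
    let hk = Hkdf::<Sha256>::new(None, &content_hash);
    let mut key = [0u8; 32];
    hk.expand(b"chk", &mut key).expect("32 bytes is a valid HKDF output");

    let cipher = Aes256Gcm::new_from_slice(&key).expect("key is 32 bytes");
    // Deterministic zero nonce: the same content always produces the same ciphertext
    let zero_nonce = Nonce::from_slice(&[0u8; 12]);
    let ciphertext = cipher.encrypt(zero_nonce, plaintext)?;
    Ok((ciphertext, content_hash))
}
```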

Other Fixes

  • Prevent panic on ws:// relay URLs in subscription ID generation
  • Added .beads/ to gitignore (local issue tracking)

Test Plan

  • CHK encryption test vectors validate against Rust reference
  • CI runs npm test in test-vectors/ on push/PR
  • Manual end-to-end test: large crash report → chunk → reassemble (sketched below)
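
A sketch of that round trip as a unit test, assuming the `chunk_payload`/`reassemble_payload` API from `rust/src/chunking.rs` (crate name assumed):

```rust
// Crate name and re-exports assumed from rust/src/lib.rs
use bugstr::{chunk_payload, reassemble_payload};

#[test]
fn large_report_round_trip() {
    // Well over the 50KB direct limit, so this must take the chunked path
    let report: Vec<u8> = (0..200_000).map(|i| (i % 251) as u8).collect();

    let result = chunk_payload(&report).expect("chunking should succeed");
    assert!(result.chunks.len() > 1, "200KB must span multiple 48KB chunks");

    let rebuilt = reassemble_payload(&result.manifest, &result.chunks)
        .expect("reassembly should succeed");
    assert_eq!(rebuilt, report);
}
```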

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Chunked crash‑report transport across SDKs: automatic direct vs chunked selection, per‑relay chunk publishing with verification, manifests for reassembly, progress callbacks, and relay rate‑limit handling.
    • CHK convergent chunking implemented cross‑platform for large payload encryption/compatibility.
  • Documentation

    • Simplified size guidance (~64 KB) and compression advice; expanded symbolication guides, rate‑limiting, chunk distribution design, and contributing notes.
  • Tests

    • Added CHK encryption compatibility test vectors and validation tooling.



coderabbitai bot commented Jan 16, 2026

📝 Walkthrough

Adds a CHK-based chunked transport (kinds 10420/10421/10422) across SDKs and the Rust receiver: chunk generation/encryption, multi-relay chunk publish/verification, manifest-driven reassembly with relay hints, per-relay rate-limiting and progress APIs, expanded symbolication docs, and cross‑SDK CHK test vectors and validators.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Core Transport & Public API (Rust)**<br>`rust/src/transport.rs`, `rust/src/lib.rs`, `rust/Cargo.toml` | New transport module: event kind constants, size thresholds, Direct/Manifest/Chunk payload types, TransportKind helpers, hashtree-core dependency and re-exports. |
| **Chunking Implementations (multi-platform)**<br>`rust/src/chunking.rs`, `electron/src/chunking.ts`, `dart/lib/src/chunking.dart`, `react-native/src/chunking.ts`, `android/.../Chunking.kt` | New CHK chunking/reassembly: per-chunk SHA‑256, HKDF-SHA256 key derivation, AES‑256‑GCM zero-nonce encrypt/decrypt, root-hash computation, helpers and tests. |
| **Receiver & Orchestration (Rust binary)**<br>`rust/src/bin/main.rs` | Receiver updated to detect manifests, fetch chunks (hinted + broad relay fallbacks), reassemble via CHK, tighter gift-wrap/seal validation, and added fetch_chunks helpers. |
| **Transport Types & Helpers (SDKs)**<br>`electron/src/transport.ts`, `dart/lib/src/transport.dart`, `react-native/src/transport.ts`, `go/bugstr.go`, `python/bugstr/__init__.py`, `android/.../Transport.kt` | New transport constants/types, relay rate-limit maps, progress model types, transport selection and upload-estimate utilities across SDKs. |
| **SDK Send & Publish Logic (multi SDKs)**<br>`dart/lib/src/bugstr_client.dart`, `go/bugstr.go`, `python/bugstr/__init__.py`, `react-native/src/index.ts`, `electron/src/sdk.ts`, `android/.../Nip17CrashSender.kt` | Send paths split: direct vs chunked flows; chunk creation, round‑robin relay publish with rate-limits, verify/retry, manifest creation with relay hints, and progress callbacks (onProgress). |
| **Design, Docs & Symbolication**<br>`README.md`, `rust/README.md`, `rust/CHANGELOG.md`, `rust/docs/*`, `AGENTS.md` | Expanded README and rust README with symbolication guide, changelog entries for transport/chunking, CHK interoperability guidance, chunk distribution and rate-limiting design docs, and NIP audit doc. |
| **Test Vectors & Validation**<br>`test-vectors/chk-encryption.json`, `test-vectors/validate.js` | New CHK test vector file and JS validator exercising HKDF + AES‑GCM zero-nonce encryption/decryption and round-trip checks; test harness integration. |
| **Packaging & Exports**<br>`electron/src/index.ts`, `react-native/package.json`, `dart/pubspec.yaml`, `python/pyproject.toml`, `.gitignore` | Re-exports for transport/chunking, added crypto deps (pointycastle / @noble libs / cryptography), .beads/ ignored, small packaging updates. |
| **Receiver Audit & Timestamp Fixes**<br>`rust/docs/NIP17_NIP59_AUDIT.md`, `android/.../Nip17PayloadBuilder.kt` | New audit document and adjusted timestamp handling in gift-wrap/seal construction (timestamp parameter removal and unified timestamp generation). |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Sender
    participant Chunker as Chunking Module
    participant Relay
    participant Receiver
    participant Storage

    Sender->>Chunker: chunk_payload(data)
    Chunker->>Chunker: split → hash → HKDF derive keys → AES‑GCM encrypt
    Chunker-->>Sender: { manifest, chunks }

    Sender->>Relay: publish ChunkPayload(s) (kind 10422)
    Sender->>Relay: publish ManifestPayload (kind 10421)

    Relay-->>Receiver: deliver ManifestPayload
    Receiver->>Relay: fetch_chunks(chunk_ids [, relay_hints])
    Relay-->>Receiver: return ChunkPayload(s)
    Receiver->>Chunker: reassemble_payload(manifest, chunks)
    Chunker->>Chunker: verify root_hash → decrypt → concat plaintext
    Chunker-->>Receiver: original data
    Receiver->>Storage: decompress/store crash report
```

```mermaid
sequenceDiagram
    participant Client
    participant Transport
    participant Receiver
    participant Storage

    Client->>Receiver: incoming Nostr event
    Receiver->>Transport: inspect event kind
    alt kind == 10420 (Direct)
        Transport->>Receiver: parse DirectPayload
        Receiver->>Storage: store nested crash
    else kind == 10421 (Manifest)
        Transport->>Receiver: parse ManifestPayload
        Receiver->>Receiver: fetch_chunks(relay_hints, chunk_ids)
        Receiver->>Transport: reassemble_payload(manifest, chunks)
        Receiver->>Storage: store reconstructed crash
    else kind == 14 (Legacy)
        Receiver->>Storage: decompress & store crash
    end
```
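
In code form, the receiver dispatch above is a match on the rumor kind; this sketch uses placeholder comments for the real parsing, fetching, and storage calls:

```rust
/// Receiver dispatch sketched from the diagram; the constants match the PR's kinds.
fn handle_rumor_sketch(rumor_kind: u16) {
    match rumor_kind {
        10420 => {
            // Direct: parse DirectPayload, decompress, store nested crash
        }
        10421 => {
            // Manifest: parse ManifestPayload, fetch_chunks(relay_hints, chunk_ids),
            // reassemble_payload(manifest, chunks), then decompress and store
        }
        14 => {
            // Legacy: decompress and store the crash as before
        }
        _ => {
            // Ignore unrelated kinds
        }
    }
}
```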

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes


Poem

🐰 I split the bytes and hummed a tune,
Keys from hashes, sealed by moon,
I hopped through relays with tiny notes,
Gathered the pieces, stitched them like coats,
A cozy crash report — snug and immune.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'hashtree CHK chunking for large crash reports' directly summarizes the main change: implementing Phase 2 of the crash report transport system using CHK-based chunking for payloads exceeding 50KB. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%. |



@coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@README.md`:
- Around line 140-152: The fenced code block showing the directory tree starting
with "mappings/" needs a language tag for proper highlighting and accessibility;
update the triple-backtick fence that encloses the block (the block containing
lines like "mappings/", "android/", "com.example.app/") to include a language
identifier (e.g., add "text" after the opening ```), ensuring the block is
rendered with the specified language and preserved formatting.
🧹 Nitpick comments (1)
README.md (1)

105-113: Consider hyphenating compound adjective.

Static analysis suggests "full-line-range" (with hyphens) when used as a compound adjective modifying "support". However, this is a minor style preference and the current form is acceptable.

✍️ Optional style improvement
```diff
-| Android | `mapping.txt` | ProGuard/R8 with full line range support |
+| Android | `mapping.txt` | ProGuard/R8 with full-line-range support |
```
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e0754ca and a841807.

📒 Files selected for processing (1)
  • README.md
🧰 Additional context used
🧠 Learnings (5)
📚 Learning: 2026-01-16T18:29:38.035Z
Learnt from: CR
Repo: alltheseas/bugstr PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T18:29:38.035Z
Learning: Applies to **/CHANGELOG.md : All user-facing changes require a CHANGELOG.md entry following the format with Added, Changed, and Fixed sections

Applied to files:

  • README.md
📚 Learning: 2026-01-16T18:29:38.035Z
Learnt from: CR
Repo: alltheseas/bugstr PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T18:29:38.035Z
Learning: Applies to {android,electron,dart,rust,go,python,react-native}/**/*.{kt,ts,dart,rs,go,py,tsx,jsx} : Rumor events must include `sig: ""` (empty string, not omitted) when implementing NIP-17/59

Applied to files:

  • README.md
📚 Learning: 2026-01-16T18:29:38.036Z
Learnt from: CR
Repo: alltheseas/bugstr PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T18:29:38.036Z
Learning: Applies to {android,electron,dart,rust,go,python,react-native}/**/*.{kt,ts,dart,rs,go,py,tsx,jsx} : Event serialization for NIP-17/59 must use order: `[0, pubkey, created_at, kind, tags, content]`

Applied to files:

  • README.md
📚 Learning: 2026-01-16T18:29:38.036Z
Learnt from: CR
Repo: alltheseas/bugstr PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T18:29:38.036Z
Learning: NIP-17 messages should be tested against multiple compatible clients including 0xchat and Amethyst

Applied to files:

  • README.md
📚 Learning: 2026-01-16T18:29:38.035Z
Learnt from: CR
Repo: alltheseas/bugstr PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T18:29:38.035Z
Learning: Applies to {android,electron,dart,rust,go,python,react-native}/**/*.{kt,ts,dart,rs,go,py,tsx,jsx} : Rumor events must include computed `id` field (SHA256 of serialized event) when implementing NIP-17/59

Applied to files:

  • README.md
🪛 LanguageTool
README.md

[uncategorized] ~107-~107: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...roid | mapping.txt | ProGuard/R8 with full line range support | | Electron/JS | `*.js.m...

(EN_COMPOUND_ADJECTIVE_INTERNAL)

🪛 markdownlint-cli2 (0.18.1)
README.md

140-140: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🔇 Additional comments (5)
README.md (5)

37-43: Clear relay configuration documentation.

The relay table and size limit guidance are well-documented. The note correctly explains the buffer needed for gift-wrap envelope overhead when staying under the 64 KB limit.


49-80: Compression strategy is well-explained.

The compression format, efficiency metrics, and size limits are clearly documented. The payload behavior table effectively communicates when compression is applied, and the compression ratios (70-90%) are realistic for stack trace data.


156-161: Contributing section appropriately references AGENTS.md.

The updated contributing section provides a clear pointer to detailed contributor guidelines with a helpful preview of the main topics covered.


103-154: Symbolication implementation verified.

All documented features are fully implemented in the Rust receiver:

  • ✓ 7-platform support with dedicated symbolication modules (Android, Electron/JS, Flutter, Go, Python, React Native, Rust)
  • ✓ Web API endpoint /api/symbolicate at rust/src/web.rs:44
  • ✓ CLI command with --mappings option for mapping file organization
  • ✓ Expected directory structure: <root>/<platform>/<app_id>/<version>/<mapping_file>
  • ✓ Version parameter handling in both API and CLI

1-165: Add CHANGELOG.md entry for user-facing documentation changes.

Per AGENTS.md § 3 Changelog, all user-facing changes require a CHANGELOG.md entry with Added/Changed/Fixed sections. Create a root-level CHANGELOG.md with an [Unreleased] section documenting:

  • Added: Symbolication feature documentation
  • Changed: Relay size limits and default relay list updates


Comment on lines +140 to +152
```
mappings/
android/
com.example.app/
1.0.0/mapping.txt
1.1.0/mapping.txt
electron/
my-app/
1.0.0/main.js.map
flutter/
com.example.app/
1.0.0/app.android-arm64.symbols
```
⚠️ Potential issue | 🟡 Minor

Add language specification to fenced code block.

The directory structure example should specify a language for proper syntax highlighting and accessibility.

📝 Proposed fix

````diff
-```
+```text
 mappings/
   android/
     com.example.app/
````
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

140-140: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
In `@README.md` around lines 140 - 152, The fenced code block showing the
directory tree starting with "mappings/" needs a language tag for proper
highlighting and accessibility; update the triple-backtick fence that encloses
the block (the block containing lines like "mappings/", "android/",
"com.example.app/") to include a language identifier (e.g., add "text" after the
opening ```), ensuring the block is rendered with the specified language and
preserved formatting.

alltheseas added a commit that referenced this pull request Jan 16, 2026
Add comprehensive symbolication documentation to README including:
- Enable symbolication CLI usage examples
- Directory structure for mapping files with `text` language tag
- Supported platforms table
- API endpoint example

Addresses PR #12 feedback about fenced code block language tags.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @.beads/issues.jsonl:
- Line 5: Update the issue payload for id "bugstr-yla" so the description
matches the title/intent (>50KB) instead of the stale ">2MB" text: edit the
"description" field to remove or replace the "Multi-chunk hashtree transport for
crash reports >2MB." line with wording that states the threshold is >50KB (and
keep the implementation/manifest details unchanged), and ensure any summary or
comments (e.g., the SCOPE UPDATED comment) consistently reference the >50KB
threshold.
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a841807 and d80597b.

📒 Files selected for processing (2)
  • .beads/issues.jsonl
  • rust/README.md
✅ Files skipped from review due to trivial changes (1)
  • rust/README.md
🔇 Additional comments (1)
.beads/issues.jsonl (1)

1-5: Well-structured issue tracking data.

The JSONL format is correctly applied with consistent schema across entries. The issue lifecycle (open → in_progress → closed with reason) is properly tracked, and dependency relationships are clearly defined. The architecture decisions in bugstr-21z with resolved comments provide good traceability.


{"id":"bugstr-91q","title":"Phase 4: P2P WebRTC fallback (optional)","description":"## Overview\nOptional P2P fallback when relays are slow or rate-limited.\n\n## Implementation\n- WebRTC signaling via Nostr (hashtree-rs built-in)\n- Direct peer-to-peer chunk transfer\n- Relay as fallback\n\n## When to Implement\nOnly if relay rate limits become a problem in production.\n\n## Effort\n~200 lines Rust\n\n## Risks\n- Adds significant complexity\n- WebRTC NAT traversal issues\n- May never be needed for crash reports\n\n## Recommendation\nDefer indefinitely. Monitor relay performance first.\n\n## Ship: Only if needed","status":"closed","priority":3,"issue_type":"task","created_at":"2026-01-16T12:38:26.526365-06:00","created_by":"e","updated_at":"2026-01-16T12:40:29.34263-06:00","closed_at":"2026-01-16T12:40:29.34263-06:00","close_reason":"Not implementing P2P - out of scope"}
{"id":"bugstr-mhb","title":"Phase 2: Single-chunk hashtree (50KB-2MB)","description":"## Overview\nSingle-chunk hashtree transport for crash reports 50KB-2MB.\nCovers ~95% of crash reports (cumulative with Phase 1).\n\n## Implementation\n```\nkind: 10421 (manifest)\ncontent: \u003cmetadata-json\u003e\ntags:\n [\"x\", \"\u003csha256-of-full-report\u003e\"]\n [\"size\", \"\u003csize\u003e\"]\n [\"chunk\", \"\u003cchunk-event-id\u003e\", \"0\", \"wss://relay1\", \"wss://relay2\"]\n [\"app\", \"notedeck\", \"0.5.0\"]\n\nkind: 10422 (chunk)\ncontent: \u003cbase64-chunk-data\u003e\ntags:\n [\"d\", \"\u003creport-id\u003e-chunk-0\"]\n```\n\n## Critical Review Notes\n- **Question**: Is hashtree overhead justified for single chunk?\n - Merkle tree value is verification across MULTIPLE chunks\n - For single chunk, simple sha256 reference may suffice\n - Consider: skip Phase 2, go direct from Phase 1 to Phase 3?\n\n## Effort\n~100 lines Rust (integrate hashtree-rs)\n\n## Dependencies\nhashtree = { git = \"https://github.com/mmalmi/hashtree-rs\" }\n\n## Ship: Second","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-16T12:38:04.670657-06:00","created_by":"e","updated_at":"2026-01-16T12:43:33.226644-06:00","closed_at":"2026-01-16T12:43:33.226644-06:00","close_reason":"Merged with Phase 3 - same code path handles 1-N chunks","dependencies":[{"issue_id":"bugstr-mhb","depends_on_id":"bugstr-21z","type":"blocks","created_at":"2026-01-16T12:39:05.786075-06:00","created_by":"daemon"}]}
{"id":"bugstr-o7f","title":"Phase 1: Direct event transport (≤50KB)","description":"## Overview\nDirect event transport for compressed crash reports ≤50KB.\nCovers ~80% of crash reports.\n\n## Implementation\n```\nkind: 10420\ncontent: \u003cbase64-encoded-compressed-crash-report\u003e\ntags:\n [\"size\", \"\u003coriginal_size\u003e\"]\n [\"compressed\", \"\u003ccompressed_size\u003e\"]\n [\"hash\", \"\u003csha256\u003e\"]\n [\"app\", \"notedeck\", \"0.5.0\"]\n [\"platform\", \"windows\", \"11\"]\n```\n\n## Open Questions\n- [ ] Is kind 10420 reserved? Check NIPs or use parameterized replaceable\n- [ ] Integration with existing NIP-17 gift wrap flow\n- [ ] Should this replace current approach or be additive?\n\n## Effort\n~50 lines Rust\n\n## Dependencies\nsha2, base64 (flate2 already present)\n\n## Ship: First","status":"in_progress","priority":1,"issue_type":"task","created_at":"2026-01-16T12:37:54.44786-06:00","created_by":"e","updated_at":"2026-01-16T12:45:29.004901-06:00","dependencies":[{"issue_id":"bugstr-o7f","depends_on_id":"bugstr-21z","type":"blocks","created_at":"2026-01-16T12:39:05.758635-06:00","created_by":"daemon"}],"comments":[{"id":4,"issue_id":"bugstr-o7f","author":"e","text":"QUESTIONS RESOLVED (per design decisions):\n\n- Kind 10420: Approved for direct crash reports\n- Gift wrap: Yes, kind 10420 event is the rumor inside NIP-17 gift wrap\n- Replaces: Current kind 14 rumor approach for crash-specific events\n\nFlow:\n1. Build kind 10420 with crash payload + metadata tags\n2. Gift wrap (NIP-17) to developer pubkey\n3. Publish gift wrap to relays\n\nReady for implementation.","created_at":"2026-01-16T18:45:22Z"}]}
{"id":"bugstr-yla","title":"Phase 2: Hashtree chunked transport (\u003e50KB)","description":"## Overview\nMulti-chunk hashtree transport for crash reports \u003e2MB.\nCovers ~99% of crash reports (cumulative).\nUse case: Unity crash dumps, large memory snapshots.\n\n## Implementation\n```\nkind: 10421 (manifest)\ncontent: \u003cmetadata-json\u003e\ntags:\n [\"x\", \"\u003chashtree-root-hash\u003e\"]\n [\"size\", \"\u003ctotal_size\u003e\"]\n [\"chunks\", \"\u003cnum_chunks\u003e\"]\n [\"chunk\", \"\u003cevent-id-1\u003e\", \"0\", \"wss://relay1\"]\n [\"chunk\", \"\u003cevent-id-2\u003e\", \"1\", \"wss://relay2\"]\n [\"chunk\", \"\u003cevent-id-3\u003e\", \"2\", \"wss://relay1\"]\n [\"encrypted\", \"chk\"] // if using CHK encryption\n```\n\n## Receiver Flow\n1. Receive kind:10421 manifest\n2. Parse chunk tags\n3. Fetch chunks from hinted relays\n4. Verify hashtree root\n5. Reassemble and decompress\n6. If chunks missing: try other relays\n\n## Effort\n~150 lines Rust (full hashtree-rs integration)\n\n## Ship: When Unity/large dump support needed","status":"open","priority":2,"issue_type":"task","created_at":"2026-01-16T12:38:17.809373-06:00","created_by":"e","updated_at":"2026-01-16T12:43:39.982559-06:00","dependencies":[{"issue_id":"bugstr-yla","depends_on_id":"bugstr-21z","type":"blocks","created_at":"2026-01-16T12:39:05.812864-06:00","created_by":"daemon"}],"comments":[{"id":2,"issue_id":"bugstr-yla","author":"e","text":"SCOPE UPDATED: Merged with original Phase 2\n\nNow handles all chunked transport (\u003e50KB), whether 1 chunk or N chunks.\nSame code path via hashtree-rs.\n\nThresholds:\n- ≤50KB: Phase 1 direct event (kind 10420)\n- \u003e50KB: Phase 2 hashtree (kind 10421 manifest + 10422 chunks)\n\nImplementation handles 1-N chunks uniformly.","created_at":"2026-01-16T18:43:47Z"}]}

⚠️ Potential issue | 🟡 Minor

Inconsistent size threshold: title says ">50KB" but description says ">2MB".

The title states "Hashtree chunked transport (>50KB)" but the description's first line says "Multi-chunk hashtree transport for crash reports >2MB." The comment at the end clarifies the intended threshold is ">50KB", suggesting the description text is stale after the Phase 2/3 merge.

Consider updating the description to match:

Suggested fix

```diff
-"Multi-chunk hashtree transport for crash reports >2MB.
+"Multi-chunk hashtree transport for crash reports >50KB.
```
🤖 Prompt for AI Agents
In @.beads/issues.jsonl at line 5, Update the issue payload for id "bugstr-yla"
so the description matches the title/intent (>50KB) instead of the stale ">2MB"
text: edit the "description" field to remove or replace the "Multi-chunk
hashtree transport for crash reports >2MB." line with wording that states the
threshold is >50KB (and keep the implementation/manifest details unchanged), and
ensure any summary or comments (e.g., the SCOPE UPDATED comment) consistently
reference the >50KB threshold.

alltheseas and others added 4 commits January 16, 2026 13:15
Replace placeholder symbolication section with actual implementation
details including:
- Supported platforms table (7 platforms)
- CLI usage examples
- Web API endpoint
- Mapping file directory structure
- Version fallback behavior

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Simplify relay limits to 64KB (remove 128KB/512KB references)
- Remove 0xchat client compatibility note (we have our own receiver)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add comprehensive symbolication documentation to README including:
- Enable symbolication CLI usage examples
- Directory structure for mapping files with `text` language tag
- Supported platforms table
- API endpoint example

Addresses PR #12 feedback about fenced code block language tags.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implements Content Hash Key (CHK) chunking for crash reports exceeding
50KB, enabling support for large payloads via Nostr relays.

Transport module:
- Kind 10420: Direct crash report transport (≤50KB)
- Kind 10421: Manifest containing root hash and chunk metadata
- Kind 10422: CHK-encrypted chunk data
- DirectPayload, ManifestPayload, ChunkPayload types
- TransportKind enum for automatic selection

Chunking module:
- chunk_payload() splits and encrypts using hashtree-core CHK
- reassemble_payload() decrypts and reconstructs original data
- Root hash ensures integrity (hash of all chunk keys)
- 5 tests covering small/large payloads and hash verification

Receiver updates:
- Supports kind 10420 with DirectPayload wrapper extraction
- Detects kind 10421 manifests (chunk fetching pending)
- Maintains backwards compatibility with legacy kind 14

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
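
The transport selection this commit describes reduces to a size check; a sketch (constant value and enum shape follow the PR description, not the exact source):

```rust
const DIRECT_SIZE_LIMIT: usize = 50 * 1024; // 50KB direct transport limit

enum TransportKind {
    Direct,  // kind 10420
    Chunked, // kind 10421 manifest + kind 10422 chunks
}

/// Pick the transport for a serialized crash payload by size alone.
fn select_transport(payload_len: usize) -> TransportKind {
    if payload_len <= DIRECT_SIZE_LIMIT {
        TransportKind::Direct
    } else {
        TransportKind::Chunked
    }
}
```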
@coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@rust/src/bin/main.rs`:
- Around line 643-687: Decompress the rumor payload before any parsing and reuse
that decompressed data for both manifest and direct-payload handling: call
decompress_payload(&rumor.content) once (assign to a variable like decompressed)
before the is_chunked_kind(rumor_kind) branch, then pass that decompressed data
into ManifestPayload::from_json(...) and DirectPayload::from_json(...) (instead
of rumor.content) so compressed manifests or direct payloads are handled
correctly; ensure the existing fallback behavior (using original or decompressed
content on errors) is preserved.

In `@rust/src/transport.rs`:
- Around line 67-170: Expand the rustdoc for the public payload APIs by adding
explicit parameter/return/error sections for each public method and top-level
struct docs: document DirectPayload::new (params: crash, return: DirectPayload),
DirectPayload::to_json/from_json (return: Result<String, serde_json::Error>,
describe possible serde errors), ManifestPayload::to_json/from_json (same Result
details and describe fields like root_hash/total_size), and
ChunkPayload::to_json/from_json (same Result details and describe
data/hash/index). Ensure each docblock follows the project's public API style
with a short description, a # Parameters section listing param names and types,
a # Returns section describing success type, and a # Errors (or # Panics if
applicable) section describing serde_json::Error cases.
🧹 Nitpick comments (3)
rust/src/chunking.rs (3)

44-46: PayloadTooSmall variant is never used.

The chunk_payload function doesn't enforce a minimum payload size, so this error variant is dead code. Either add validation in chunk_payload to return this error for payloads ≤50KB (since the module docs say chunking is for payloads exceeding 50KB), or remove the variant.

Option 1: Add validation to chunk_payload
```diff
 pub fn chunk_payload(data: &[u8]) -> Result<ChunkingResult, ChunkingError> {
     use base64::Engine;
 
+    // Chunking is for payloads exceeding direct transport limit
+    const MIN_CHUNK_PAYLOAD: usize = 50 * 1024; // 50KB
+    if data.len() <= MIN_CHUNK_PAYLOAD {
+        return Err(ChunkingError::PayloadTooSmall);
+    }
+
     let total_size = data.len() as u64;
```
Option 2: Remove unused variant
```diff
 pub enum ChunkingError {
-    #[error("Payload too small for chunking (use direct transport)")]
-    PayloadTooSmall,
-
     #[error("Encryption failed: {0}")]
     EncryptionError(String),
```

177-179: Consider avoiding full chunk clone for memory efficiency.

chunks.to_vec() clones all chunk data to sort them. For large payloads with many chunks, this temporarily doubles memory usage. Consider sorting indices first, then iterating chunks in order:

Memory-efficient alternative
```diff
-    // Sort chunks by index
-    let mut sorted_chunks = chunks.to_vec();
-    sorted_chunks.sort_by_key(|c| c.index);
+    // Build index map for ordered access without cloning
+    let mut index_map: Vec<usize> = (0..chunks.len()).collect();
+    index_map.sort_by_key(|&i| chunks[i].index);
 
     // Verify all indices are present
-    for (i, chunk) in sorted_chunks.iter().enumerate() {
-        if chunk.index != i as u32 {
+    for (i, &chunk_idx) in index_map.iter().enumerate() {
+        if chunks[chunk_idx].index != i as u32 {
             return Err(ChunkingError::MissingChunk(i as u32));
         }
     }
 
     // Decrypt and reassemble
     let mut result = Vec::with_capacity(manifest.total_size as usize);
     let mut chunk_keys: Vec<EncryptionKey> = Vec::new();
 
-    for chunk in &sorted_chunks {
+    for &chunk_idx in &index_map {
+        let chunk = &chunks[chunk_idx];
```

253-314: Good test coverage for core functionality.

Tests cover the main happy paths (small/large payloads), determinism, and corruption detection. Consider adding tests for edge cases:

  • Out-of-order chunks (verifying sort works)
  • Missing chunk detection
  • Root hash mismatch (corrupted during transmission)
  • Empty payload behavior
Example: Test out-of-order chunk reassembly
```rust
#[test]
fn test_reassemble_out_of_order_chunks() {
    let data: Vec<u8> = (0..100_000).map(|i| (i % 256) as u8).collect();
    let result = chunk_payload(&data).unwrap();

    // Reverse chunk order
    let mut shuffled_chunks = result.chunks.clone();
    shuffled_chunks.reverse();

    let reassembled = reassemble_payload(&result.manifest, &shuffled_chunks).unwrap();
    assert_eq!(reassembled, data);
}

#[test]
fn test_missing_chunk_detection() {
    let data: Vec<u8> = (0..100_000).map(|i| (i % 256) as u8).collect();
    let mut result = chunk_payload(&data).unwrap();

    // Remove a chunk
    result.chunks.pop();

    let err = reassemble_payload(&result.manifest, &result.chunks).unwrap_err();
    assert!(matches!(err, ChunkingError::InvalidManifest(_)));
}
```
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d80597b and 14a8c7c.

📒 Files selected for processing (6)
  • rust/CHANGELOG.md
  • rust/Cargo.toml
  • rust/src/bin/main.rs
  • rust/src/chunking.rs
  • rust/src/lib.rs
  • rust/src/transport.rs
🧰 Additional context used
📓 Path-based instructions (3)
rust/**/*.rs

📄 CodeRabbit inference engine (AGENTS.md)

Use rustdoc format (/// ... or //!) for docstrings in Rust code

Files:

  • rust/src/lib.rs
  • rust/src/bin/main.rs
  • rust/src/chunking.rs
  • rust/src/transport.rs
{android,electron,dart,rust,go,python,react-native}/**/*.{kt,ts,dart,rs,go,py,tsx,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

{android,electron,dart,rust,go,python,react-native}/**/*.{kt,ts,dart,rs,go,py,tsx,jsx}: All public classes, methods, and non-trivial functions must have documentation explaining purpose, behavior, parameters, return values, exceptions, and usage examples for complex APIs
Rumor events must include sig: "" (empty string, not omitted) when implementing NIP-17/59
Rumor events must include computed id field (SHA256 of serialized event) when implementing NIP-17/59
Event serialization for NIP-17/59 must use order: [0, pubkey, created_at, kind, tags, content]
Event ID computation must be lowercase hex string (64 characters) using SHA256 of serialized event: id = sha256(json([0, pubkey, created_at, kind, tags, content]))

Files:

  • rust/src/lib.rs
  • rust/src/bin/main.rs
  • rust/src/chunking.rs
  • rust/src/transport.rs
**/CHANGELOG.md

📄 CodeRabbit inference engine (AGENTS.md)

All user-facing changes require a CHANGELOG.md entry following the format with Added, Changed, and Fixed sections

Files:

  • rust/CHANGELOG.md
🧬 Code graph analysis (2)
rust/src/bin/main.rs (3)
rust/src/transport.rs (5)
  • is_crash_report_kind (206-208)
  • is_chunked_kind (211-213)
  • from_json (94-96)
  • from_json (134-136)
  • from_json (167-169)
rust/src/compression.rs (1)
  • decompress_payload (83-103)
rust/src/storage.rs (1)
  • serde_json (233-233)
rust/src/chunking.rs (2)
rust/src/lib.rs (1)
  • new (107-111)
rust/src/transport.rs (1)
  • new (84-86)
🔇 Additional comments (14)
rust/CHANGELOG.md (1)

10-24: Changelog entry captures the transport additions clearly.

No issues spotted with the new “Added” entries.

rust/src/transport.rs (4)

1-65: Module docs and constants are clear and consistent with the transport model.


172-198: TransportKind selection and kind mapping look solid.


200-213: Kind predicates align with the transport behavior.


215-270: Tests cover selection logic and serialization paths well.

rust/src/bin/main.rs (1)

6-10: Transport helpers are wired in cleanly.

rust/Cargo.toml (1)

50-52: Chunking dependency addition looks appropriate.

rust/src/lib.rs (1)

26-49: Public re-exports for chunking/transport are well organized.

rust/src/chunking.rs (6)

1-35: Well-documented module with clear security and sizing rationale.

The module documentation follows rustdoc format and covers the security model, chunk sizing constraints, and usage patterns comprehensively.


36-40: Imports are appropriate and minimal.


67-75: LGTM!

The result struct is well-documented and appropriately simple for its purpose.


93-141: Chunking logic is correct and well-documented.

The CHK encryption approach is sound—using the content hash as the key ensures integrity and allows verification during reassembly. The root hash computation via concatenation of chunk keys is deterministic and verifiable.

Minor note: the index as u32 cast on line 116 is safe in practice (would require >192TB payload to overflow), but if you want belt-and-suspenders safety, you could add a bounds check.


188-228: Reassembly logic is correct with proper integrity verification.

The function correctly decrypts chunks using their content hash as the key (CHK property) and verifies the root hash at the end. The dual verification (per-chunk via CHK + overall via root hash) provides strong integrity guarantees.


230-251: Utility functions are correct.

The ceiling division in expected_chunk_count is properly implemented. The overhead estimates in estimate_overhead are reasonable approximations as documented.
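
For reference, the ceiling-division pattern being verified here (a generic sketch, not the exact source):

```rust
/// Chunks needed to cover `total_size` bytes, rounding up:
/// e.g. 100_000 bytes at 48KB per chunk -> 3 chunks, not 2.
fn expected_chunk_count(total_size: u64, chunk_size: u64) -> u64 {
    (total_size + chunk_size - 1) / chunk_size
}
```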


alltheseas and others added 3 commits January 16, 2026 13:21
Completes Phase 2 hashtree support by adding relay-based chunk fetching.

Changes:
- Add fetch_chunks() async function to retrieve chunks by event ID
- Make handle_message_for_storage() async to support chunk fetching
- Update subscribe_relay_with_storage() to pass relay URLs
- Reassemble chunked payloads using reassemble_payload()
- Decompress reassembled data before storage

The receiver now fully supports:
- Kind 10420: Direct crash reports (≤50KB)
- Kind 10421: Manifest with chunk references (>50KB)
- Kind 10422: CHK-encrypted chunk data

Chunk fetching:
- Tries each relay until all chunks found
- 30s timeout per relay, 10s connect timeout
- Parallel chunk collection with deduplication
- Ordered reassembly verified against manifest

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add comprehensive documentation per AGENTS.md requirements:
- Arguments section with parameter descriptions
- Returns section describing output format
- Errors section listing failure conditions

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Refactor fetch_chunks() to query all relays in parallel and aggregate
chunks across them. This allows crash reports to be reconstructed even
when chunks are distributed across multiple relays.

- New parallel implementation using Arc<TokioMutex<HashMap>>
- fetch_chunks_from_relay() helper queries single relay
- Early termination when all chunks found
- Each relay only fetches chunks not yet retrieved

Closes: rust-1uf

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
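
A sketch of that aggregation pattern, with a hypothetical `fetch_chunk_ids_from_relay` standing in for the real single-relay query:

```rust
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::Mutex;

/// Query all relays in parallel; each task asks its relay only for chunks
/// not yet retrieved and merges results into a shared map.
async fn fetch_all_chunks(
    relays: Vec<String>,
    chunk_ids: Vec<String>,
) -> HashMap<String, Vec<u8>> {
    let found: Arc<Mutex<HashMap<String, Vec<u8>>>> = Arc::new(Mutex::new(HashMap::new()));

    let mut tasks = Vec::new();
    for relay in relays {
        let found = Arc::clone(&found);
        let ids = chunk_ids.clone();
        tasks.push(tokio::spawn(async move {
            // Early termination: only request chunks still missing
            let missing: Vec<String> = {
                let map = found.lock().await;
                ids.into_iter().filter(|id| !map.contains_key(id)).collect()
            };
            if missing.is_empty() {
                return;
            }
            for (id, data) in fetch_chunk_ids_from_relay(&relay, &missing).await {
                found.lock().await.entry(id).or_insert(data);
            }
        }));
    }
    for task in tasks {
        let _ = task.await; // one failed relay task must not sink the others
    }
    let result = found.lock().await.clone();
    result
}

// Hypothetical single-relay query; the real receiver issues a Nostr REQ here.
async fn fetch_chunk_ids_from_relay(_relay: &str, ids: &[String]) -> Vec<(String, Vec<u8>)> {
    ids.iter().map(|id| (id.clone(), Vec::new())).collect()
}
```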
@coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@README.md`:
- Around line 105-108: Update the Android row in the README table: change the
Notes text for the "Android" / "mapping.txt" entry from "ProGuard/R8 with full
line range support" to "ProGuard/R8 with full line-range support" (hyphenate
"line-range") to make the compound adjective clearer.

In `@rust/src/chunking.rs`:
- Around line 77-99: The function chunk_payload documents that payloads must be
>50KB but never returns ChunkingError::PayloadTooSmall; add an explicit guard at
the start of chunk_payload that checks if data.len() as u64 <= MAX_CHUNK_SIZE
(or the intended 50*1024 threshold) and return
Err(ChunkingError::PayloadTooSmall) to enforce the documented behavior, making
sure the check uses the MAX_CHUNK_SIZE constant; alternatively, if you prefer
not to enforce this, remove the PayloadTooSmall variant and update the function
docs to reflect no minimum size.
♻️ Duplicate comments (3)
README.md (1)

140-152: Add a language tag to the mapping tree code block.

The fenced block still lacks a language identifier (markdownlint MD040).

📝 Proposed fix
````diff
-```
+```text
 mappings/
   android/
     com.example.app/
````
rust/src/transport.rs (1)

82-170: Expand rustdoc for public payload APIs and helpers.

Public methods/constructors are still missing explicit # Parameters, # Returns, and # Errors sections required for public APIs. Please extend rustdoc for DirectPayload, ManifestPayload, ChunkPayload, and the public helper methods to include those sections. As per coding guidelines, please add full rustdoc sections for public APIs.

rust/src/bin/main.rs (1)

792-848: Decompress before manifest parsing to handle compressed manifests.
ManifestPayload::from_json currently uses rumor.content directly; compressed manifests will fail to parse. Decompress once up front and reuse that buffer for both manifest and direct payload parsing.

🐛 Proposed fix
```diff
-    let rumor_kind = rumor.kind as u16;
-
-    // Handle different transport kinds
-    if is_chunked_kind(rumor_kind) {
-        // Kind 10421: Manifest for chunked crash report
-        match ManifestPayload::from_json(&rumor.content) {
+    let rumor_kind = rumor.kind as u16;
+    let decompressed = decompress_payload(&rumor.content).unwrap_or_else(|_| rumor.content.clone());
+
+    // Handle different transport kinds
+    if is_chunked_kind(rumor_kind) {
+        // Kind 10421: Manifest for chunked crash report
+        match ManifestPayload::from_json(&decompressed) {
             Ok(manifest) => {
                 println!(
                     "{} Received manifest: {} chunks, {} bytes total",
                     "📦".cyan(),
                     manifest.chunk_count,
                     manifest.total_size
                 );
                 // ...
                 return Some(ReceivedCrash {
                     event_id: event.id.to_hex(),
                     sender_pubkey: rumor.pubkey.clone(),
                     created_at: rumor.created_at as i64,
                     content: decompressed,
                 });
             }
             Err(e) => {
                 eprintln!("{} Failed to parse manifest: {}", "✗".red(), e);
                 return None;
             }
         }
     }
-
-    // Decompress if needed
-    let decompressed = decompress_payload(&rumor.content).unwrap_or_else(|_| rumor.content.clone());
```
🧹 Nitpick comments (1)
rust/src/bin/main.rs (1)

608-615: Expand rustdoc for fetch_chunks parameters and errors.
The docstring should describe parameters, return value, and error cases per the non‑trivial function documentation rule. As per coding guidelines, ...

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 14a8c7c and 8c39004.

📒 Files selected for processing (8)
  • README.md
  • rust/CHANGELOG.md
  • rust/Cargo.toml
  • rust/README.md
  • rust/src/bin/main.rs
  • rust/src/chunking.rs
  • rust/src/lib.rs
  • rust/src/transport.rs
🚧 Files skipped from review as they are similar to previous changes (2)
  • rust/src/lib.rs
  • rust/Cargo.toml
🧰 Additional context used
📓 Path-based instructions (3)
rust/**/*.rs

📄 CodeRabbit inference engine (AGENTS.md)

Use rustdoc format (/// ... or //!) for docstrings in Rust code

Files:

  • rust/src/transport.rs
  • rust/src/bin/main.rs
  • rust/src/chunking.rs
{android,electron,dart,rust,go,python,react-native}/**/*.{kt,ts,dart,rs,go,py,tsx,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

{android,electron,dart,rust,go,python,react-native}/**/*.{kt,ts,dart,rs,go,py,tsx,jsx}: All public classes, methods, and non-trivial functions must have documentation explaining purpose, behavior, parameters, return values, exceptions, and usage examples for complex APIs
Rumor events must include sig: "" (empty string, not omitted) when implementing NIP-17/59
Rumor events must include computed id field (SHA256 of serialized event) when implementing NIP-17/59
Event serialization for NIP-17/59 must use order: [0, pubkey, created_at, kind, tags, content]
Event ID computation must be lowercase hex string (64 characters) using SHA256 of serialized event: id = sha256(json([0, pubkey, created_at, kind, tags, content]))

Files:

  • rust/src/transport.rs
  • rust/src/bin/main.rs
  • rust/src/chunking.rs
**/CHANGELOG.md

📄 CodeRabbit inference engine (AGENTS.md)

All user-facing changes require a CHANGELOG.md entry following the format with Added, Changed, and Fixed sections

Files:

  • rust/CHANGELOG.md
🧠 Learnings (6)
📚 Learning: 2026-01-16T18:29:38.035Z
Learnt from: CR
Repo: alltheseas/bugstr PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T18:29:38.035Z
Learning: Applies to {android,electron,dart,rust,go,python,react-native}/**/*.{kt,ts,dart,rs,go,py,tsx,jsx} : Rumor events must include `sig: ""` (empty string, not omitted) when implementing NIP-17/59

Applied to files:

  • README.md
  • rust/README.md
📚 Learning: 2026-01-16T18:29:38.036Z
Learnt from: CR
Repo: alltheseas/bugstr PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T18:29:38.036Z
Learning: Applies to {android,electron,dart,rust,go,python,react-native}/**/*.{kt,ts,dart,rs,go,py,tsx,jsx} : Event serialization for NIP-17/59 must use order: `[0, pubkey, created_at, kind, tags, content]`

Applied to files:

  • README.md
📚 Learning: 2026-01-16T18:29:38.036Z
Learnt from: CR
Repo: alltheseas/bugstr PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T18:29:38.036Z
Learning: NIP-17 messages should be tested against multiple compatible clients including 0xchat and Amethyst

Applied to files:

  • README.md
📚 Learning: 2026-01-16T18:29:38.035Z
Learnt from: CR
Repo: alltheseas/bugstr PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T18:29:38.035Z
Learning: Applies to {android,electron,dart,rust,go,python,react-native}/**/*.{kt,ts,dart,rs,go,py,tsx,jsx} : Rumor events must include computed `id` field (SHA256 of serialized event) when implementing NIP-17/59

Applied to files:

  • README.md
📚 Learning: 2026-01-16T18:29:38.035Z
Learnt from: CR
Repo: alltheseas/bugstr PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T18:29:38.035Z
Learning: Applies to rust/**/*.rs : Use rustdoc format (`/// ...` or `//!`) for docstrings in Rust code

Applied to files:

  • rust/src/transport.rs
📚 Learning: 2026-01-16T18:29:38.035Z
Learnt from: CR
Repo: alltheseas/bugstr PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T18:29:38.035Z
Learning: Applies to {android,electron,dart,rust,go,python,react-native}/**/*.{kt,ts,dart,rs,go,py,tsx,jsx} : All public classes, methods, and non-trivial functions must have documentation explaining purpose, behavior, parameters, return values, exceptions, and usage examples for complex APIs

Applied to files:

  • rust/src/transport.rs
🧬 Code graph analysis (1)
rust/src/bin/main.rs (4)
rust/src/transport.rs (3)
  • is_crash_report_kind (206-208)
  • is_chunked_kind (211-213)
  • new (84-86)
rust/src/chunking.rs (1)
  • reassemble_payload (162-228)
rust/src/event.rs (1)
  • new (40-56)
rust/src/compression.rs (1)
  • decompress_payload (83-103)
🪛 LanguageTool
README.md

[uncategorized] ~107-~107: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...roid | mapping.txt | ProGuard/R8 with full line range support | | Electron/JS | `*.js.m...

(EN_COMPOUND_ADJECTIVE_INTERNAL)

🔇 Additional comments (5)
rust/src/chunking.rs (1)

192-215: Verify and implement hash verification for decrypted chunks.

CHK (Content Hash Key) integrity requires that decrypted plaintext be verified against the advertised hash. The current implementation decrypts using the key but never computes SHA-256 of the decrypted bytes to compare against chunk.hash. Add hash verification immediately after successful decryption and return ChunkHashMismatch if the computed hash does not match the stored value.

Proposed implementation
```diff
         let decrypted = decrypt_chk(&ciphertext, &key)
             .map_err(|e| ChunkingError::DecryptionError(e.to_string()))?;
 
+        let computed_hash = hex::encode(Sha256::digest(&decrypted));
+        if computed_hash != chunk.hash {
+            return Err(ChunkingError::ChunkHashMismatch(chunk.index));
+        }
+
         chunk_keys.push(key);
         result.extend_from_slice(&decrypted);
```
README.md (1)

117-124: Verify symbolicate CLI flags and align with other docs.

This example uses --input and --app-id, while rust/README.md shows --app and stdin. The actual CLI argument names in the code need to be confirmed, and both documentation files should be made consistent.

rust/src/bin/main.rs (3)

9-10: Import additions look correct.
No concerns with the new transport/chunking symbols.


479-491: Relay list propagation in serve looks good.
Cloning the full relay list for chunk fetching is clear and intentional.


558-588: Handler wiring for relay URLs is clean.
Signature and call sites are consistent.


alltheseas and others added 16 commits January 16, 2026 13:33
Implement sender-side chunking for crash reports >50KB:
- transport.ts: Event kind constants and payload types
- chunking.ts: CHK encryption using AES-256-CBC with content hash as key
- sdk.ts: Automatic transport selection based on payload size

For large payloads:
- Chunks published to all relays for redundancy (kind 10422)
- Manifest gift-wrapped to recipient (kind 10421)
- Only recipient can decrypt chunks using root hash from manifest

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implement sender-side chunking for crash reports >50KB:
- transport.ts: Event kind constants and payload types
- chunking.ts: CHK encryption using @noble/hashes and @noble/ciphers
- index.ts: Automatic transport selection based on payload size

Uses pure JS crypto libraries for React Native compatibility:
- @noble/hashes for SHA-256
- @noble/ciphers for AES-256-CBC

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implement sender-side chunking for crash reports >50KB:
- Transport constants (KindDirect, KindManifest, KindChunk)
- CHK encryption using crypto/aes and crypto/cipher
- Automatic transport selection based on payload size
- Parallel chunk publishing to all relays for redundancy

Uses standard library crypto for zero additional dependencies.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implement sender-side chunking for crash reports >50KB:
- Transport constants (KIND_DIRECT, KIND_MANIFEST, KIND_CHUNK)
- CHK encryption using cryptography library (AES-256-CBC)
- Automatic transport selection based on payload size
- Parallel chunk publishing to all relays

Added cryptography dependency for AES encryption.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add transport constants and chunking modules for Android and Dart SDKs:

Android:
- Transport.kt: Event kinds and payload types
- Chunking.kt: CHK encryption using javax.crypto (AES-256-CBC)

Dart:
- transport.dart: Event kinds and payload types
- chunking.dart: CHK encryption using encrypt package (AES-256-CBC)

Integration with send flow pending - modules provide the building blocks
for chunked crash report delivery.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Complete the chunking integration for Android and Dart SDKs:

Android:
- Nip17CrashSender now checks payload size and routes to sendDirect or sendChunked
- Added publishChunk method to NostrEventPublisher interface
- Chunks published to all relays for redundancy

Dart:
- bugstr_client.dart updated with full chunking integration
- _sendToNostr now uses transport kind selection
- Helper methods for building chunk events and publishing

Both SDKs now have full send flow integration matching
Electron, React Native, Go, and Python.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Mitigate relay rate limiting (e.g., strfry/noteguard's posts_per_minute)
by adding a small delay between chunk event publications.

Affects: Android, Dart, Electron, Go, Python, React Native SDKs.
Also adds rust/docs/RATE_LIMITING.md documenting the research.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
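
A sketch of per-relay rate limiting in this shape, using the 7500ms window adopted later in this PR (names illustrative, not the SDKs' actual APIs):

```rust
use std::collections::HashMap;
use tokio::time::{sleep, Duration, Instant};

const CHUNK_PUBLISH_DELAY: Duration = Duration::from_millis(7500);

/// Remember the last post time per relay and sleep out the remainder of the
/// window before posting to that relay again.
struct RelayRateLimiter {
    last_post: HashMap<String, Instant>,
}

impl RelayRateLimiter {
    fn new() -> Self {
        Self { last_post: HashMap::new() }
    }

    async fn wait_for(&mut self, relay: &str) {
        if let Some(last) = self.last_post.get(relay) {
            let elapsed = last.elapsed();
            if elapsed < CHUNK_PUBLISH_DELAY {
                sleep(CHUNK_PUBLISH_DELAY - elapsed).await;
            }
        }
        self.last_post.insert(relay.to_string(), Instant::now());
    }
}
```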
Lock file updates from CHK chunking crypto dependencies.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implement round-robin chunk distribution across relays to maximize
throughput while respecting strfry+noteguard rate limits (8 posts/min).

Changes:
- Add per-relay rate limits (7500ms between posts to same relay)
- Round-robin distribution: chunks distributed across relays
- HIG-compliant progress callbacks with determinate progress
- Relay hints in manifest for optimized fetching
- Non-blocking async sending (existing behavior preserved)

With 3 relays, effective throughput increases 3x:
- 1MB report: 3 min → 1 min
- 500KB report: 90 sec → 30 sec

Also adds rust/docs/CHUNK_DISTRIBUTION_DESIGN.md with full design.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
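
The assignment itself is one line per chunk; a sketch (assumes a non-empty relay list):

```rust
/// Round-robin: chunk i goes to relay i % N, so the per-relay 7500ms delays
/// overlap across relays and effective throughput scales with relay count.
fn assign_relays(chunk_count: usize, relays: &[String]) -> Vec<&String> {
    (0..chunk_count).map(|i| &relays[i % relays.len()]).collect()
}
```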
Implement round-robin chunk distribution across relays to maximize
throughput while respecting strfry+noteguard rate limits (8 posts/min).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implement round-robin chunk distribution across relays to maximize
throughput while respecting strfry+noteguard rate limits (8 posts/min).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Receiver now uses chunk_relays hints from manifests to optimize
chunk retrieval. For each chunk, tries the hinted relay first
before falling back to all relays, reducing unnecessary network
requests and improving fetch speed.

- Added chunk_relays field to ManifestPayload
- Updated fetch_chunks to accept and use relay hints
- Two-phase fetching: hinted relays first, then fallback

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Each SDK now verifies chunks exist after publishing and retries on
different relays if verification fails. This ensures chunk delivery
reliability without requiring bidirectional communication.

Changes across all SDKs (Go, Python, Dart, Android, React Native):
- Added verifyChunkExists() to query relay for event
- Added publishChunkWithVerify() that publishes, verifies, and retries
- Updated chunk loops to use verified publish with round-robin fallback
- 100ms delay between publish and verify to allow relay processing

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
100ms was too quick for reliable relay response verification.
Increased to 500ms across all SDKs.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
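
A sketch of the verified publish loop these two commits describe, with hypothetical `publish_chunk`/`chunk_exists` helpers standing in for the real relay calls:

```rust
use tokio::time::{sleep, Duration};

/// Publish a chunk, wait briefly, verify it is queryable, and fall back to the
/// next relay on failure. Returns the relay that confirmed the chunk, which
/// becomes its relay hint in the manifest.
async fn publish_chunk_with_verify(chunk_id: &str, relays: &[String]) -> Option<String> {
    for relay in relays {
        publish_chunk(relay, chunk_id).await;
        sleep(Duration::from_millis(500)).await; // give the relay time to index
        if chunk_exists(relay, chunk_id).await {
            return Some(relay.clone());
        }
        // Verification failed: retry on the next relay in round-robin order
    }
    None
}

// Hypothetical stand-ins for the SDKs' actual publish and query calls.
async fn publish_chunk(_relay: &str, _chunk_id: &str) {}
async fn chunk_exists(_relay: &str, _chunk_id: &str) -> bool {
    true
}
```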
Added probability table showing 30-chunk report success rates
at various relay reliability levels (80%-98%). With 4-relay
retry, even 80% relay reliability yields 95%+ report success.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

docs: add reliability enhancement options comparison table

Compares current approach vs redundant publishing, erasure coding,
bidirectional requests, and hybrid approaches across reliability,
upload time, bandwidth, complexity, and crash-tolerance dimensions.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
All SDKs were using incompatible CHK encryption (AES-256-CBC with
random IV), causing decryption failures when Rust receiver used
hashtree-core's HKDF + AES-256-GCM with zero nonce.

Fixed implementations:
- Dart: pointycastle HKDF + GCMBlockCipher
- Android/Kotlin: manual HKDF + javax.crypto AES/GCM/NoPadding
- React Native: @noble/hashes hkdf + @noble/ciphers gcm
- Go: golang.org/x/crypto/hkdf + cipher.NewGCM
- Python: cryptography HKDF + AESGCM
- Electron: Node.js hkdfSync + aes-256-gcm

Added "Lessons Learned" section to AGENTS.md documenting the correct
algorithm and verification checklist for future implementations.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
alltheseas and others added 3 commits January 16, 2026 15:00
The subscription ID generation used a fixed offset [6..] assuming
wss:// prefix, which would panic for ws:// URLs (5 chars) or
invalid/short URLs.

Now safely handles both schemes using strip_prefix with fallback.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
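
The fix pattern, as a sketch:

```rust
/// Scheme-safe host extraction: the old code indexed url[6..], which panics
/// for ws:// (5-char prefix) or short strings; strip_prefix handles both.
fn relay_host(url: &str) -> &str {
    url.strip_prefix("wss://")
        .or_else(|| url.strip_prefix("ws://"))
        .unwrap_or(url)
}
```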
Addresses residual risk of cross-SDK byte-for-byte interop not being
verified in tests.

- Add chk-encryption.json with test vector generated from Rust
  hashtree-core reference implementation
- Update validate.js to verify Node.js CHK implementation produces
  byte-identical ciphertext to Rust
- Add generate_chk_test_vector test in Rust for regenerating vectors

The test verifies:
1. SHA256 content hash matches
2. HKDF key derivation + AES-256-GCM encryption is byte-identical
3. Decryption round-trip succeeds

CI automatically runs these tests on push/PR via schema-validation.yml.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Beads is a local issue tracking tool that should not be committed.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@alltheseas changed the title from "docs: update README with symbolication feature" to "feat: Phase 2 - CHK chunking for large crash reports" on Jan 16, 2026
@coderabbitai bot left a comment

Actionable comments posted: 8

🤖 Fix all issues with AI agents
In
`@android/bugstr-nostr-crypto/src/main/kotlin/com/bugstr/nostr/crypto/Nip17CrashSender.kt`:
- Around line 49-60: In sendDirect (function sendDirect and the construction of
directPayload where JSONObject(request.plaintext) is used) guard against
malformed JSON by wrapping the JSONObject(request.plaintext) creation in a
try/catch for JSONException (or generic Exception), and on failure log the error
and return a failed Result (e.g., Result.failure(exception)) or convert
plaintext to a safe JSON string (e.g., put a raw string under "crash") depending
on desired behavior; ensure you update directRequest only after successful
parsing and keep messageKind assignment as before.

In `@dart/lib/src/chunking.dart`:
- Around line 119-126: The return is doing an unnecessary copy and the comment
is wrong: since cipher.getOutputSize(data.length) already returns the plaintext
length for decrypt (it excludes the GCM tag), remove the plaintext.sublist(0,
data.length - _tagSize) and return the allocated plaintext buffer directly
(i.e., return plaintext), and update the comment near the Uint8List allocation
to state that getOutputSize excludes the tag on decrypt rather than includes it;
reference symbols: Uint8List, cipher.getOutputSize, _tagSize, plaintext.sublist,
chkDecrypt.

In `@go/bugstr.go`:
- Around line 624-644: Compute the sleep duration while holding the mutex to
avoid the race: in waitForRateLimit, keep lastPostTimeMu locked, read
lastTime/existence from lastPostTime, call GetRelayRateLimit(relayURL) and
compute deadline := lastTime.Add(rateLimit) (or sleep := time.Until(deadline)),
then unlock lastPostTimeMu and call time.Sleep(sleep) only if sleep > 0;
reference symbols: waitForRateLimit, lastPostTime, lastPostTimeMu,
GetRelayRateLimit.

In `@python/bugstr/__init__.py`:
- Around line 649-653: The code computes a compressed "content" via
_maybe_compress(plaintext) but then sends an uncompressed payload via
payload.to_dict() in the DIRECT path; fix by using the previously computed
compressed content when building the direct gift: replace payload.to_dict() with
the compressed envelope (the variable produced by _maybe_compress, e.g. content
or its JSON form) when constructing direct_payload passed to
_build_gift_wrap(KIND_DIRECT), and ensure the receiver code that handles
KIND_DIRECT understands and unwraps the compression envelope; alternatively, if
direct gifts must remain uncompressed, remove the _maybe_compress call and
related compression logic before the DIRECT_SIZE_THRESHOLD check so size
decisions and sent data remain consistent (refer to symbols: _maybe_compress,
payload_size, DIRECT_SIZE_THRESHOLD, payload.to_dict(), _build_gift_wrap,
KIND_DIRECT, _publish_to_relays).
- Around line 582-597: The _verify_chunk_exists function is calling
Client.get_events_of with a literal timeout parameter which is invalid; change
the call to pass a timedelta-based EventSource via
EventSource.relays(timedelta(seconds=5)). Specifically, import
datetime.timedelta and EventSource from nostr_sdk and replace the call to
client.get_events_of([filter], timeout=5) with client.get_events_of([filter],
EventSource.relays(timedelta(seconds=5))). Keep the surrounding uses of Filter,
Client(Keys.generate()), Kind(KIND_CHUNK), and client.disconnect() intact.

In `@react-native/src/index.ts`:
- Around line 420-426: In the loop that calls publishChunkWithVerify (where
chunkEvent is published and chunkRelays is set), detect when
publishChunkWithVerify returns null and handle it explicitly: either throw an
error to fail-fast, or push chunkEvent.id into a manifest-level failedChunkIds
array (and include that array in the manifest metadata before publishing), and
at minimum log a warning including the chunk index/ID; update the logic around
the publishChunkWithVerify call and the manifest creation so the manifest
carries failedChunkIds (or the function throws) instead of silently skipping the
chunk.

In `@rust/docs/CHUNK_DISTRIBUTION_DESIGN.md`:
- Around line 246-254: The onProgress call is using an invalid phase string
'chunks' that doesn't match the BugstrProgress union ('preparing' | 'uploading'
| 'finalizing'); update the phase value in the onProgress invocation to
'uploading' so it aligns with the BugstrProgress type (the call inside the loop
that reports current/i+1/total/percent/estimatedSecondsRemaining), and run type
checking to ensure onProgress and any consumers accept the updated 'uploading'
phase.

In `@rust/docs/RATE_LIMITING.md`:
- Around line 115-121: Update the inconsistent default chunk publish delay by
changing the constant DEFAULT_CHUNK_PUBLISH_DELAY_MS in RATE_LIMITING.md from
100 to 7500 to match the recommendation in CHUNK_DISTRIBUTION_DESIGN.md (and/or
explicitly note the rationale) so both documents present the same default for
strfry + noteguard relays; ensure the README entry for
DEFAULT_CHUNK_PUBLISH_DELAY_MS and any nearby explanatory text reflect the
7500ms value and its intended use-case.
♻️ Duplicate comments (3)
rust/src/transport.rs (1)

82-97: Rustdoc for public payload methods still needs expansion.

The public methods new, to_json, and from_json on DirectPayload, ManifestPayload, and ChunkPayload have brief one-line docs but are missing the explicit # Parameters, # Returns, and # Errors sections requested in a previous review. This applies to lines 82-97, 134-143, and 167-177.
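For concreteness, the requested shape might look like the following; the struct fields and error type here are illustrative stand-ins, not the real definitions in rust/src/transport.rs (assumes serde with the derive feature plus serde_json):

```rust
use serde::Deserialize;

/// Illustrative stand-in for the real struct in rust/src/transport.rs.
#[derive(Deserialize)]
pub struct ManifestPayload {
    pub root_hash: String,
    pub chunk_ids: Vec<String>,
}

impl ManifestPayload {
    /// Parses a manifest from its JSON wire form.
    ///
    /// # Parameters
    /// * `json` - The rumor content of a kind 10421 event.
    ///
    /// # Returns
    /// The parsed manifest on success.
    ///
    /// # Errors
    /// Returns a deserialization error if `json` is not a valid manifest.
    pub fn from_json(json: &str) -> Result<Self, serde_json::Error> {
        serde_json::from_str(json)
    }
}
```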

rust/src/bin/main.rs (1)

909-964: Manifest parsing still happens before decompression—compressed manifests will fail.

This repeats the concern from the past review: if a manifest is compressed before being gift-wrapped, ManifestPayload::from_json(&rumor.content) will fail on the raw compressed content. The decompression call at line 967 only happens after the chunked branch returns.

Consider decompressing once at the start and using the result for both branches:

Suggested fix
     let rumor_kind = rumor.kind as u16;

+    // Decompress once for all transport kinds (safe no-op if not compressed)
+    let decompressed = decompress_payload(&rumor.content).unwrap_or_else(|_| rumor.content.clone());
+
     // Handle different transport kinds
     if is_chunked_kind(rumor_kind) {
         // Kind 10421: Manifest for chunked crash report
-        match ManifestPayload::from_json(&rumor.content) {
+        match ManifestPayload::from_json(&decompressed) {
             Ok(manifest) => {
                 // ... rest unchanged
             }
         }
     }

-    // Decompress if needed
-    let decompressed = decompress_payload(&rumor.content).unwrap_or_else(|_| rumor.content.clone());
rust/src/chunking.rs (1)

44-46: ChunkingError::PayloadTooSmall is defined but never returned.

The error variant exists and the docs state payloads "should be >50KB", but chunk_payload accepts any size without checking. This inconsistency was flagged in a previous review. Either add the guard or remove the variant.

Option 1: Add the guard
+use crate::transport::{ChunkPayload, ManifestPayload, DIRECT_SIZE_THRESHOLD, MAX_CHUNK_SIZE};
-use crate::transport::{ChunkPayload, ManifestPayload, MAX_CHUNK_SIZE};

 pub fn chunk_payload(data: &[u8]) -> Result<ChunkingResult, ChunkingError> {
     use base64::Engine;

+    if data.len() <= DIRECT_SIZE_THRESHOLD {
+        return Err(ChunkingError::PayloadTooSmall);
+    }
+
     let total_size = data.len() as u64;
Option 2: Remove the variant if enforcement is intentionally deferred
 pub enum ChunkingError {
-    #[error("Payload too small for chunking (use direct transport)")]
-    PayloadTooSmall,
-
     #[error("Encryption failed: {0}")]
     EncryptionError(String),

Also applies to: 93-98

🧹 Nitpick comments (16)
rust/docs/RATE_LIMITING.md (1)

47-56: Add language identifier to fenced code block.

The code block lacks a language identifier, which helps with syntax highlighting and accessibility.

Suggested fix
 ### 1. Staggered Publishing (Recommended)
 Add delay between chunk publications:
-```
+```text
 delay_ms = 60_000 / posts_per_minute_limit
 # For 8/min limit: 7.5 seconds between chunks
react-native/src/chunking.ts (1)

`176-179`: **Type inconsistency between ChunkData and reassemblePayload parameter.**

The `ChunkData` type (line 27-31) uses `encrypted: Uint8Array`, but `reassemblePayload` expects `{ index: number; hash: string; data: string }` with base64-encoded `data`. This is likely intentional (pre-encoding vs wire format), but the discrepancy could be confusing.

Consider either:
1. Documenting the distinction in the JSDoc, or
2. Creating a separate type like `ChunkWireFormat` for the reassemble input (sketched below).
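A sketch of option 2, rendered in Rust for illustration (field names mirror the TS shapes described above):

```rust
/// Pre-encoding form produced by the chunker (raw bytes, pre-base64).
struct ChunkData {
    index: u32,
    hash: [u8; 32],
    encrypted: Vec<u8>,
}

/// Wire form carried inside kind 10422 events (base64 strings).
struct ChunkWireFormat {
    index: u32,
    hash: String,
    data: String,
}
```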

electron/src/sdk.ts (1)

`270-281`: **Consider extracting the chunk publish delay constant.**

`CHUNK_PUBLISH_DELAY_MS` is defined inline within the function. For consistency with other SDKs (which use configurable rate limits) and easier tuning, consider moving this to module-level or to `transport.ts` alongside other transport constants.


Suggested refactor
+/** Delay between chunk publishes to avoid relay rate limits (ms). */
+const CHUNK_PUBLISH_DELAY_MS = 100;
+
 async function sendToNostr(payload: BugstrPayload): Promise<void> {
   // ...
-    const CHUNK_PUBLISH_DELAY_MS = 100; // Delay between chunks to avoid relay rate limits
     for (let i = 0; i < chunkEvents.length; i++) {
android/bugstr-nostr-crypto/src/main/kotlin/com/bugstr/nostr/crypto/Chunking.kt (1)

206-210: Consider using ByteArrayOutputStream for hash concatenation.

The fold with byte array concatenation creates intermediate arrays for each chunk. For large payloads with many chunks, using ByteArrayOutputStream would be more efficient.

More efficient hash concatenation
-        // Compute root hash from all chunk hashes
-        val rootHashInput = chunkHashes.fold(ByteArray(0)) { acc, h -> acc + h }
-        val rootHash = sha256(rootHashInput).toHex()
+        // Compute root hash from all chunk hashes
+        val rootHashBytes = java.io.ByteArrayOutputStream(chunkHashes.size * 32)
+        chunkHashes.forEach { rootHashBytes.write(it) }
+        val rootHash = sha256(rootHashBytes.toByteArray()).toHex()
dart/lib/src/bugstr_client.dart (1)

333-357: Consider connection reuse for chunk verification.

Each verification call creates a new Ndk instance and connection. For uploads with many chunks, this adds overhead. Consider passing the relay connection or reusing the NDK instance.

android/bugstr-nostr-crypto/src/main/kotlin/com/bugstr/nostr/crypto/Transport.kt (1)

125-133: Minor: completed() uses Finalizing phase instead of a distinct Completed phase.

The completed() factory sets phase to Finalizing but with fractionCompleted = 1.0. While this works (consumers can check fractionCompleted == 1f), consider adding a Completed phase to the enum for clearer semantics, or document this pattern.

dart/lib/src/chunking.dart (1)

35-59: Add class-level dartdoc to public data classes.

ChunkData and ChunkingResult are public classes but lack class-level documentation. As per coding guidelines, all public classes should have documentation explaining their purpose.

📝 Suggested documentation
+/// Encrypted chunk data ready for publishing.
+///
+/// Each chunk contains its index, content hash (for verification),
+/// and the CHK-encrypted payload.
 class ChunkData {
   final int index;
   final Uint8List hash;
   final Uint8List encrypted;
   ...
 }

+/// Result of chunking a payload into CHK-encrypted chunks.
+///
+/// Contains the root hash (needed for decryption), original size,
+/// and all encrypted chunks ready for transport.
 class ChunkingResult {
   final String rootHash;
   final int totalSize;
   final List<ChunkData> chunks;
   ...
 }
react-native/src/index.ts (2)

340-370: Consider connection reuse for chunk publishing.

Each chunk iteration opens a relay connection for publishing (line 353) and another for verification (line 359). For large payloads with many chunks, this creates significant connection overhead. Consider keeping a single connection open per relay during the upload phase.

This is a nice-to-have optimization and can be deferred if the current approach works acceptably.


266-282: Remove or implement unused publishToAllRelays function.

This function is defined at lines 266-282 but never called in the codebase. If chunk redundancy across all relays is a planned feature, wire it into the chunked publishing path; otherwise, remove the dead code.

dart/lib/src/transport.dart (1)

108-118: Consider adding a completed phase to the enum.

BugstrProgress.completed factory sets phase: BugstrProgressPhase.finalizing even though it represents completion. Adding a completed value to BugstrProgressPhase would improve semantic clarity. This is a minor nitpick and the current implementation works correctly.

rust/src/bin/main.rs (2)

629-650: Silently dropping invalid event IDs loses diagnostic context.

When EventId::from_hex fails, the code silently filters out the invalid ID via filter_map. This makes debugging harder when manifests contain malformed chunk IDs. Consider logging a warning or returning an error that identifies which chunk ID was invalid.

     // Parse event IDs
     let event_ids: Vec<EventId> = chunk_ids
         .iter()
-        .filter_map(|id| EventId::from_hex(id).ok())
+        .filter_map(|id| {
+            match EventId::from_hex(id) {
+                Ok(eid) => Some(eid),
+                Err(e) => {
+                    eprintln!("    {} Invalid chunk ID {}: {}", "⚠".yellow(), &id[..16.min(id.len())], e);
+                    None
+                }
+            }
+        })
         .collect();

690-692: Silently discarding task join results hides spawn panics.

If a spawned fetch task panics, the JoinError is discarded. Consider logging when tasks fail unexpectedly.

     // Wait for hinted relay fetches
     for handle in handles {
-        let _ = handle.await;
+        if let Err(e) = handle.await {
+            eprintln!("    {} Chunk fetch task panicked: {:?}", "⚠".yellow(), e);
+        }
     }
android/bugstr-nostr-crypto/src/main/kotlin/com/bugstr/nostr/crypto/Nip17CrashSender.kt (1)

174-179: Lost chunks are silently ignored—receiver will detect but sender has no feedback.

If publishChunkWithVerify returns null (all relays failed), the chunk is lost but sending continues. The comment acknowledges this, but the sender never learns which chunks failed. Consider tracking failed chunks for diagnostic logging.

             val successRelay = publishChunkWithVerify(signedChunk, relays, index % relays.size)
             if (successRelay != null) {
                 chunkRelays[signedChunk.id] = listOf(successRelay)
+            } else {
+                // Log warning - chunk publication failed on all relays
+                android.util.Log.w("Nip17CrashSender", "Chunk $index failed to publish to any relay")
             }
-            // If all relays failed, chunk is lost - receiver will report missing chunk
python/bugstr/__init__.py (3)

33-33: Remove unused import.

The secrets module is imported but never used anywhere in this file.

Proposed fix
-import secrets

516-523: Consider logging exceptions for debuggability.

Silent except: pass makes it difficult to diagnose relay connection failures. Even in a "don't crash" scenario, consider logging at debug level.

Proposed fix
+import logging
+
+_logger = logging.getLogger(__name__)
+
 def _publish_to_relays(event: Event) -> None:
     """Publish an event to the first successful relay."""
     client = Client(Keys.generate())
     for relay_url in _config.relays:
         try:
             client.add_relay(relay_url)
-        except Exception:
-            pass
+        except Exception as e:
+            _logger.debug("Failed to add relay %s: %s", relay_url, e)

735-737: Silent exception swallowing hides all errors.

The bare except Exception: pass at the top level of _send_to_nostr silently swallows all errors, including unexpected ones (KeyboardInterrupt still escapes, since it inherits from BaseException rather than Exception). Consider logging exceptions at debug level to aid troubleshooting while preserving the "don't crash" behavior.

Proposed fix
     except Exception:
-        # Silent failure - don't crash the app
-        pass
+        # Silent failure - don't crash the app, but log for debugging
+        _logger.debug("Failed to send crash report", exc_info=True)

@alltheseas alltheseas changed the title feat: Phase 2 - CHK chunking for large crash reports hashtree CHK chunking for large crash reports Jan 16, 2026
Rust changes:
- Decompress rumor payload once before branching (handles compressed manifests)
- Expand rustdoc for DirectPayload, ManifestPayload, ChunkPayload with
  proper parameter/return/error sections
- Add PayloadTooSmall guard in chunk_payload (enforces >50KB requirement)
- Fix test to expect 2 chunks for minimum payload (50KB exceeds the 48KB MAX_CHUNK_SIZE)

SDK fixes:
- Android: Guard against malformed JSON in sendDirect with try-catch
- Dart: Remove unnecessary sublist copy in chkDecrypt, fix comment
- Go: Fix race condition in waitForRateLimit by computing sleep under lock
- Python: Fix compression logic (only compress for chunked, not direct)
- Python: Fix _verify_chunk_exists to use EventSource.relays(timedelta)
- React Native: Fail fast when publishChunkWithVerify returns null

Documentation:
- README: Hyphenate "line-range" in Android symbolication table
- CHUNK_DISTRIBUTION_DESIGN: Fix onProgress phase 'chunks' → 'uploading'
- RATE_LIMITING: Update default delay from 100ms to 7500ms

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
README.md (1)

37-80: Document the CHK chunking mechanism for large crash reports.

The PR implements CHK-based chunking (kinds 10420/10421/10422) with 48KB chunks for payloads exceeding 50KB, but the README doesn't document this critical transport mechanism. Lines 47–80 describe compression and truncation as the primary approach for large payloads, leaving readers unclear about:

  • How reports exceeding 64 KB are actually handled (chunking, not truncation)
  • When CHK chunking is triggered (>50KB threshold)
  • How chunks are encrypted, published, and reassembled
  • The relationship between compression (up to 200KB truncation) and chunking (>50KB payloads)

Add a section explaining CHK chunking transport, when it's used, and how it complements compression.
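A sketch of the decision logic such a section could anchor on, using the thresholds this PR names (50KB direct limit, 48KB chunks); `div_ceil` needs Rust 1.73+:

```rust
/// Thresholds as named in this PR: direct transport up to 50KB,
/// 48KB CHK chunks above that.
const DIRECT_SIZE_THRESHOLD: usize = 50 * 1024;
const MAX_CHUNK_SIZE: usize = 48 * 1024;

/// True when a serialized payload must take the chunked path
/// (kind 10421 manifest + kind 10422 chunks) instead of direct kind 10420.
fn needs_chunking(payload_len: usize) -> bool {
    payload_len > DIRECT_SIZE_THRESHOLD
}

/// Number of 48KB chunks a payload will produce.
fn chunk_count(payload_len: usize) -> usize {
    payload_len.div_ceil(MAX_CHUNK_SIZE)
}
```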

🤖 Fix all issues with AI agents
In
`@android/bugstr-nostr-crypto/src/main/kotlin/com/bugstr/nostr/crypto/Nip17CrashSender.kt`:
- Around line 182-186: The code silently drops a chunk when
publishChunkWithVerify(...) returns null causing unrecoverable uploads; update
the upload flow in Nip17CrashSender so that if
publishChunkWithVerify(signedChunk, relays, ...) returns null you abort the
entire upload (e.g., throw or return an error) and do not proceed to publish the
manifest or mark chunkRelays for that chunk; ensure the calling method that
publishes the manifest checks for and propagates this failure instead of
continuing on null results.
- Around line 51-56: The catch block that wraps non-JSON plaintext in
Nip17CrashSender (the crashJson creation where JSONObject(request.plaintext) is
in a try/catch) swallows the JSONException; update the catch to log the
exception before falling back so you retain failure context — use the existing
logging approach in this class (e.g., Android Log or the class logger) to log
the caught org.json.JSONException and include request.plaintext/context in the
log message, then continue to create the fallback JSONObject().apply {
put("raw", request.plaintext) }.
- Around line 263-278: buildChunkEvent currently sets pubKey from
RandomSource().randomPrivateKeyHex(), which mismatches the private key used to
sign the event (request.senderPrivateKeyHex) and causes signature verification
to fail; modify buildChunkEvent so the pubKey is derived from the actual private
key used for signing (i.e., derive pubKeyHex from request.senderPrivateKeyHex)
or accept a privateKeyHex parameter and derive the pubKey there (update call
sites accordingly); reference buildChunkEvent,
RandomSource().randomPrivateKeyHex(), QuartzPubKeyDeriver().derivePubKeyHex(),
and request.senderPrivateKeyHex when making the change.

In `@go/bugstr.go`:
- Around line 776-783: When publishChunkWithVerify(ctx, relays, i%numRelays,
chunkEvent) returns an error, record chunkEvent.ID as failed and stop the
upload (or at minimum log and track failed IDs) instead of silently continuing;
modify the block handling err so it appends chunkEvent.ID to a failedChunks
slice (declare/initialize failedChunks earlier in the same function), ensure
chunkRelays is only populated for successful uploads, and call the logger (or
return an error) to abort the overall upload if failedChunks is non-empty so the
manifest does not claim chunks that were never uploaded.
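This abort-on-first-failure shape recurs in several prompts in this review; a minimal Rust sketch of the bookkeeping (all names illustrative):

```rust
/// Publish every chunk, aborting the whole upload on the first failure so a
/// manifest is never published that claims chunks no relay actually stored.
fn publish_all_chunks(
    chunks: &[Vec<u8>],
    mut publish_with_verify: impl FnMut(usize, &[u8]) -> Option<String>, // returns relay URL
) -> Result<Vec<String>, String> {
    let mut chunk_relays = Vec::with_capacity(chunks.len());
    for (i, chunk) in chunks.iter().enumerate() {
        match publish_with_verify(i, chunk) {
            Some(relay_url) => chunk_relays.push(relay_url),
            None => return Err(format!("chunk {i} failed on all relays; aborting upload")),
        }
    }
    Ok(chunk_relays)
}
```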

In `@python/bugstr/__init__.py`:
- Around line 717-726: The manifest's total_size is currently set to the
uncompressed payload_size but the chunks are built from compressed
payload_bytes; update the manifest creation so total_size reflects the
compressed data length (e.g., len(payload_bytes) or the computed sum of chunk
sizes) before calling _build_gift_wrap(KIND_MANIFEST, ...) and
_publish_to_relays; locate where manifest is constructed and replace
payload_size with the compressed size variable or computed compressed size to
ensure total_size matches the reassembled compressed data.
♻️ Duplicate comments (5)
README.md (1)

140-152: Add language specification to fenced code block.

The directory structure example should specify a language for proper syntax highlighting and accessibility.

📝 Proposed fix
-```
+```text
 mappings/
   android/
     com.example.app/
rust/docs/RATE_LIMITING.md (1)

104-107: Inconsistent delay value in SDK recommendations.

Line 104 recommends "100ms delay between chunks" but line 118-119 defines DEFAULT_CHUNK_PUBLISH_DELAY_MS: 7500. This contradicts the analysis in CHUNK_DISTRIBUTION_DESIGN.md which states 100ms is "way too fast." Update the recommendation to align with the 7500ms default.

📝 Suggested fix
-1. **Default behavior**: Publish chunks to all relays with 100ms delay between chunks
+1. **Default behavior**: Publish chunks to all relays with 7500ms delay between chunks (strfry+noteguard: 8 posts/min)
python/bugstr/__init__.py (2)

582-598: Past review issue addressed: get_events_of API usage is now correct.

Line 593 now uses EventSource.relays(timedelta(seconds=5)) instead of a direct timeout parameter, matching the nostr-sdk Python API.


649-653: Direct payload path intentionally skips compression.

The past review flagged that compression was computed but not used for direct payloads. Looking at the current code, _maybe_compress is only called in the large payload branch (line 656), not for direct payloads (lines 649-653). This is intentional - direct payloads ≤50KB don't need compression. The logic is now consistent: small payloads go uncompressed, large payloads get compressed before chunking.

go/bugstr.go (1)

624-641: Past review issue addressed: race condition in waitForRateLimit is fixed.

The sleep duration is now computed while holding the lock (lines 627-635), then sleep occurs after releasing the lock (lines 638-640). This prevents the TOCTOU race where another goroutine could update the timestamp between read and sleep calculation.

🧹 Nitpick comments (7)
rust/docs/RATE_LIMITING.md (1)

49-52: Missing language specifier on fenced code block.

Add a language identifier for better rendering and accessibility.

📝 Suggested fix
-```
+```text
 delay_ms = 60_000 / posts_per_minute_limit
 # For 8/min limit: 7.5 seconds between chunks
dart/lib/src/chunking.dart (1)

`196-199`: **Consider edge case for zero payload size.**

`estimateChunkCount(0)` returns 0, which is correct, but callers should be aware that an empty payload produces no chunks. This is a minor consideration for defensive coding.

rust/docs/CHUNK_DISTRIBUTION_DESIGN.md (1)

`35-41`: **Missing language specifier on fenced code block.**

Add a language identifier for consistent rendering.


📝 Suggested fix
-```
+```text
 Chunk 0 → relay.damus.io
 Chunk 1 → nos.lol
 Chunk 2 → relay.primal.net
 Chunk 3 → relay.damus.io (cycle)
 ...
react-native/src/index.ts (1)

`310-319`: **Add timeout protection to relay connections.**

`Relay.connect()` has no built-in timeout. If a relay is unresponsive, this will block indefinitely. Wrap the call with a timeout mechanism (e.g., Promise.race with setTimeout) to prevent hanging. This applies to `publishChunkToRelay`, `verifyChunkExists`, `publishToRelays`, and `publishToAllRelays`.

rust/src/bin/main.rs (2)

`643-650`: **Silent filtering of invalid event IDs loses debugging context.**

Using `filter_map` silently drops invalid hex event IDs. When the length check fails on line 648, the error message is generic. Consider logging which specific IDs failed to parse for easier debugging.


💡 Suggested improvement
     // Parse event IDs
-    let event_ids: Vec<EventId> = chunk_ids
-        .iter()
-        .filter_map(|id| EventId::from_hex(id).ok())
-        .collect();
-
-    if event_ids.len() != chunk_ids.len() {
-        return Err("Invalid chunk event IDs in manifest".into());
+    let mut event_ids: Vec<EventId> = Vec::with_capacity(chunk_ids.len());
+    for id in chunk_ids {
+        match EventId::from_hex(id) {
+            Ok(eid) => event_ids.push(eid),
+            Err(_) => return Err(format!("Invalid chunk event ID: {}", id).into()),
+        }
     }

761-767: Missing rustdoc for fetch_chunks_from_relay helper.

Per coding guidelines, all public classes, methods, and non-trivial functions must have documentation. While this is a private helper, it's complex enough to warrant a brief rustdoc explaining its purpose and the meaning of the expected_count parameter.

📝 Suggested documentation
+/// Fetches chunks from a single relay into the shared chunks map.
+///
+/// # Arguments
+/// * `relay_url` - WebSocket URL of the relay to query
+/// * `event_ids` - Event IDs to fetch
+/// * `chunks` - Shared map to store fetched chunks by index
+/// * `expected_count` - Total chunks expected (for early exit when complete)
 async fn fetch_chunks_from_relay(
     relay_url: &str,
     event_ids: &[EventId],
     chunks: Arc<tokio::sync::Mutex<std::collections::HashMap<u32, ChunkPayload>>>,
     expected_count: usize,
 ) {
python/bugstr/__init__.py (1)

512-523: Silent exception swallowing reduces debuggability.

The try-except-pass blocks at lines 518-519 and 538-539 silently swallow all exceptions. While this prevents crashes during error reporting, it makes debugging relay connectivity issues impossible. Consider at minimum logging these errors at debug level. Based on static analysis hints from Ruff (S110, BLE001).

💡 Suggested improvement
+import logging
+
+logger = logging.getLogger(__name__)
+
 def _publish_to_relays(event: Event) -> None:
     """Publish an event to the first successful relay."""
     client = Client(Keys.generate())
     for relay_url in _config.relays:
         try:
             client.add_relay(relay_url)
-        except Exception:
-            pass
+        except Exception as e:
+            logger.debug("Failed to add relay %s: %s", relay_url, e)

Also applies to: 526-546

…ation

Android:
- Abort upload when publishChunkWithVerify returns null (fail fast)
- Add TAG constant and log JSONException before fallback to raw string
- Fix buildChunkEvent pubKey derivation to use actual signing key instead
  of random key (was causing signature verification failures)

Go:
- Abort upload on first chunk publish failure with error logging
- Return wrapped error with chunk index for debugging

Python:
- Fix manifest total_size to use compressed payload_bytes length
  instead of uncompressed payload_size (must match reassembled data)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
go/bugstr.go (1)

21-44: Missing log import causes compilation failure.

Line 780 calls log.Printf(...) but the log package is not imported. This will result in a compilation error.

🐛 Proposed fix
 import (
 	"bytes"
 	"compress/gzip"
 	"context"
 	"crypto/aes"
 	"crypto/cipher"
 	"crypto/sha256"
 	"encoding/base64"
 	"encoding/hex"
 	"encoding/json"
 	"fmt"
 	"io"
+	"log"
 	mathrand "math/rand"
 	"regexp"
 	"runtime"
 	"strings"
 	"sync"
 	"time"
🤖 Fix all issues with AI agents
In
`@android/bugstr-nostr-crypto/src/main/kotlin/com/bugstr/nostr/crypto/Nip17CrashSender.kt`:
- Around line 68-71: The code creates directRequest by calling request.copy and
unconditionally sets messageKind = Nip17MessageKind.Chat, which overwrites any
caller-provided kind; update the construction in Nip17CrashSender (the
directRequest creation) to preserve the original request.messageKind unless a
deliberate override is required—use the existing request.messageKind when
copying (or add a clear doc/parameter if Chat must be forced) so you do not
discard non-default kinds set by callers.
- Around line 44-66: The size decision is using request.plaintext but the actual
sent payload is the wrapped DirectPayload; instead, construct the DirectPayload
JSON (as done in sendDirect) first, convert it to a UTF-8 byte array and measure
its size against Transport.DIRECT_SIZE_THRESHOLD, then choose between sendDirect
and sendChunked based on that measured wrapped size; modify the caller that
currently computes payloadSize from request.plaintext to build/serialize the
same JSONObject structure (or expose/sendDirect to accept the prebuilt payload)
so the exact serialized bytes are reused for sending and for the size check
(refer to sendDirect, sendChunked, and the
payloadSize/Transport.DIRECT_SIZE_THRESHOLD logic).
- Around line 231-239: manifestRequest is being built manually and omits request
fields (subject, replyToEventId, replyRelayHint, messageKind); instead, create
manifestRequest by copying the original Nip17Request (the incoming request
variable) and only overriding plaintext and expirationSeconds so all other
fields propagate—locate the Nip17Request construction of manifestRequest and
replace it with a copy call that preserves subject, replyToEventId,
replyRelayHint, messageKind and other fields while setting plaintext =
manifestJson.toString() and expirationSeconds = request.expirationSeconds,
following the same pattern used by sendDirect.

In `@python/bugstr/__init__.py`:
- Around line 686-703: The loop currently continues when
_publish_chunk_with_verify(chunk_event, relays, i % num_relays) returns
success=False, leaving missing chunk IDs and unreassemblable manifests; change
the logic so that if success is False you abort the upload immediately (e.g.,
raise an exception or return a failure result from the enclosing upload
function) instead of continuing, so chunk_relays is only populated when a chunk
actually succeeded; ensure any cleanup or final progress callback via
_config.on_progress/Progress (and use of estimate_upload_seconds) happens for
the aborted path as well so callers get a clear failure rather than a partial
manifest.
- Around line 549-558: The function _wait_for_rate_limit has a race because it
releases _last_post_time_lock before computing elapsed; hold the lock while
reading last_time and computing elapsed to prevent another thread from updating
_last_post_time between those operations. Modify _wait_for_rate_limit to call
get_relay_rate_limit(relay_url) if needed but ensure elapsed = time.time() -
last_time is computed inside the with _last_post_time_lock block (using the same
_last_post_time and _last_post_time_lock symbols) so the subsequent time.sleep
uses a correct, stable elapsed value.
🧹 Nitpick comments (3)
go/bugstr.go (1)

77-81: Potential division by zero if numRelays is 0.

If numRelays is 0, this will cause integer division by zero. While current callers always pass at least the default relays, consider adding a guard for robustness.

♻️ Defensive fix
 func EstimateUploadSeconds(totalChunks, numRelays int) int {
+	if numRelays <= 0 {
+		numRelays = 1
+	}
 	msPerChunk := int(DefaultRelayRateLimit.Milliseconds()) / numRelays
 	return (totalChunks * msPerChunk) / 1000
 }
android/bugstr-nostr-crypto/src/main/kotlin/com/bugstr/nostr/crypto/Nip17CrashSender.kt (2)

28-29: Consider protecting lastPostTime if send can run concurrently.

lastPostTime is a mutable map accessed from suspend functions; concurrent sends could race and violate per‑relay limits. Consider synchronizing access (or documenting single‑threaded usage) to avoid races.

Also applies to: 97-113


292-303: Add KDoc for new public API surface (relays + publisher).

The new public relays field and NostrEventPublisher interface additions should have class‑level KDoc and @param/@return/@throws details per repo standards. As per coding guidelines, please add KDoc for the public data class and interface.

Also applies to: 338-371

Critical (Rust receiver):
- Add gift_wrap.verify() signature verification
- Add seal.verify() signature verification
- Add seal.kind == 13 validation

Medium (spec violations):
- Fix rumor timestamp: use actual time, only randomize seal/gift-wrap
  (Android, Go, Python, React Native, Dart, Electron)
- Fix Android seal tags: remove expiration from seal (must be empty per NIP-59)

Android Nip17CrashSender fixes:
- Preserve caller's messageKind instead of overwriting to Chat
- Size check now uses wrapped DirectPayload size
- manifestRequest now copies all fields from original request

Python fixes:
- Fix race condition in _wait_for_rate_limit (compute sleep inside lock)
- Abort upload on chunk failure instead of publishing partial manifest

Documentation:
- Add comprehensive NIP17_NIP59_AUDIT.md

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
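A minimal sketch of those three receiver checks; `Event` here is a stand-in for the SDK's event type, and `verify()` is assumed to validate the event's Schnorr signature:

```rust
/// Stand-in for the SDK's event type (fields reduced to what the check needs).
struct Event {
    kind: u16,
    // id, pubkey, sig, ...
}

impl Event {
    /// Assumed to verify the event's id/signature; placeholder body.
    fn verify(&self) -> Result<(), ()> {
        Ok(())
    }
}

/// The three validations this commit adds: gift-wrap signature, seal
/// signature, and seal kind == 13 per NIP-59.
fn validate_layers(gift_wrap: &Event, seal: &Event) -> Result<(), &'static str> {
    gift_wrap.verify().map_err(|_| "invalid gift-wrap signature")?;
    seal.verify().map_err(|_| "invalid seal signature")?;
    if seal.kind != 13 {
        return Err("seal must be kind 13 per NIP-59");
    }
    Ok(())
}
```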
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
android/bugstr-nostr-crypto/src/main/kotlin/com/bugstr/nostr/crypto/Nip17PayloadBuilder.kt (1)

345-351: Incomplete JSON escape handling.

The JSON specification requires escaping all control characters (U+0000 through U+001F). This implementation is missing \b (backspace) and \f (form feed), and doesn't handle other control characters. For crash report content that may contain arbitrary data, this could produce invalid JSON that fails parsing on the receiver side.

🔧 Suggested fix
 private fun String.escapeJson(): String =
     this
         .replace("\\", "\\\\")
         .replace("\"", "\\\"")
         .replace("\n", "\\n")
         .replace("\r", "\\r")
         .replace("\t", "\\t")
+        .replace("\b", "\\b")
+        .replace("\u000C", "\\f")
+        .replace(Regex("[\\x00-\\x1F]")) { match ->
+            "\\u${match.value[0].code.toString(16).padStart(4, '0')}"
+        }
react-native/src/index.ts (1)

246-262: Add explicit timeouts to relay connections to prevent indefinite blocking.

The JavaScript nostr-tools library's Relay.connect() has no built-in timeout and will wait indefinitely for unresponsive relays. This affects publishToRelays() (line 246), publishToAllRelays() (line 267), publishChunkToRelay() (line 311), and verifyChunkExists() (line 325). Wrap Relay.connect() with a timeout promise (e.g., Promise.race() with a configurable timeout) to ensure crash reports don't hang during relay failures.
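The guard itself is simple; sketched here in Rust with tokio for illustration (the TypeScript equivalent races the connect promise against a timer), with `connect_relay` as a hypothetical stand-in:

```rust
use std::time::Duration;
use tokio::time::timeout;

/// Hypothetical connect call standing in for a relay connection.
async fn connect_relay(url: &str) -> Result<(), String> {
    let _ = url;
    Ok(())
}

/// Bound any relay connection attempt so an unresponsive relay cannot
/// stall a crash-report upload indefinitely.
async fn connect_with_timeout(url: &str, limit: Duration) -> Result<(), String> {
    match timeout(limit, connect_relay(url)).await {
        Ok(result) => result, // finished (ok or err) within the limit
        Err(_) => Err(format!("timed out connecting to {url}")),
    }
}
```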

go/bugstr.go (1)

467-485: Gzip errors are silently ignored.

gz.Write and gz.Close() can return errors that indicate compression failures. Ignoring these could result in corrupted or incomplete payloads being sent.

Suggested fix
 func maybeCompress(plaintext string) string {
 	if len(plaintext) < 1024 {
 		return plaintext
 	}

 	var buf bytes.Buffer
 	gz := gzip.NewWriter(&buf)
-	gz.Write([]byte(plaintext))
-	gz.Close()
+	if _, err := gz.Write([]byte(plaintext)); err != nil {
+		return plaintext // Fall back to uncompressed
+	}
+	if err := gz.Close(); err != nil {
+		return plaintext // Fall back to uncompressed
+	}

 	envelope := CompressedEnvelope{
🤖 Fix all issues with AI agents
In `@dart/lib/src/bugstr_client.dart`:
- Around line 438-457: The loop in bugstr_client.dart that iterates over chunks
must abort the entire upload when _publishChunkWithVerify returns null
(indicating all relays failed); modify the for-loop that builds chunkEvent via
_buildChunkEvent and calls _publishChunkWithVerify so that on null it reports
failure via _onProgress?.call(BugstrProgress.failed(...)) (or an equivalent
failure progress), avoids adding the chunk id to chunkIds/chunkRelays, and
immediately terminates the upload flow (return early or throw) so the manifest
is not sent with missing chunks. Ensure references to chunkIds, chunkRelays,
_onProgress and totalChunks are handled consistently when aborting.

In `@electron/src/sdk.ts`:
- Around line 244-251: The transport decision uses compressed size (payloadBytes
via maybeCompressPayload) but the direct send calls createDirectPayload(payload)
with uncompressed content, causing inconsistency; fix by either (A) making the
size check use the uncompressed JSON length (call getTransportKind with
rawJson.length instead of payloadBytes.length) so direct sending of the original
payload is valid, or (B) keep the compressed-based decision and change the
direct path to send the compressed form (use the compressed string/bytes when
constructing the direct payload instead of the original payload) — update the
call sites around maybeCompressPayload, getTransportKind, and
createDirectPayload accordingly.
- Line 273: CHUNK_PUBLISH_DELAY_MS is currently set to 100ms which violates
strfry+noteguard relay limits; update the constant CHUNK_PUBLISH_DELAY_MS in
sdk.ts to 7500 (ms) and adjust its inline comment to mention the 8 posts/minute
relay rate limit so chunk publishing waits ~7.5s between posts.
- Around line 285-302: Update the manifest to include an optional chunk_relays
field and populate it with the relay URLs used for each published chunk: extend
the ManifestPayload type to add chunk_relays?: string[][], then when
constructing the manifest (the manifest constant in electron/src/sdk.ts) build a
chunk_relays array that aligns with chunkIds by collecting the relay URLs
returned/used when publishing each chunk (the same relays used with
publishToRelays for chunks), and include chunk_relays in the object passed to
buildGiftWrap (keep using buildGiftWrap and publishToRelays as before). Ensure
chunk_relays is an array of arrays of relay URLs matching chunks and is omitted
if empty.

In `@go/bugstr.go`:
- Around line 779-783: The code calls log.Printf when handling a failed chunk
publish (the snippet using chunkEvent, i, totalChunks and fmt.Errorf) but the
log package isn't imported; either add the standard "log" import to the file's
import block or replace the log.Printf call with an existing imported option
(e.g., fmt.Printf or fmt.Fprintf(os.Stderr,...)) and remove the log usage;
update the import list accordingly if you add "log" so the build succeeds.
- Around line 594-623: Remove the unused function publishToAllRelays from the
codebase: delete the entire publishToAllRelays(ctx context.Context, relays
[]string, event nostr.Event) implementation since it is never referenced and the
codebase uses publishToRelays instead; ensure no remaining references to
publishToAllRelays exist and run `go build`/tests to verify no regressions.

In `@python/bugstr/__init__.py`:
- Around line 527-547: Remove the unused dead function _publish_to_all_relays
from the module: delete the entire function definition (including its inner
publish_to_relay closure, thread creation and join logic) and remove any
now-unreferenced symbols/imports that were only used by it (e.g., threading,
Client, Keys) to avoid unused-import warnings; ensure no other code references
_publish_to_all_relays before deleting.

In `@rust/src/bin/main.rs`:
- Around line 867-873: Add a comprehensive rustdoc comment for the async
function handle_message_for_storage explaining its purpose (parses/handles an
incoming message and returns an Option<ReceivedCrash> for storage), describe all
parameters (text: &str input message, keys: &Keys credentials/context used
during processing, seen: &mut HashSet<EventId> deduplication/mutation of seen
events, relay_urls: &[String] list of relay endpoints used/queried and how it
affects behavior), explicitly note that the function is async and may perform
I/O/network calls (and therefore must be awaited), describe the return value
semantics (when Some(ReceivedCrash) is produced vs None), list any side effects
(mutates seen, uses Keys), and call out error handling/propagation behavior if
relevant so callers know how to use and await this function.
- Around line 761-767: Add a complete rustdoc comment for the async helper
function fetch_chunks_from_relay describing its purpose, parameters, behavior
and side-effects: document that it fetches chunks from a single relay URL
(relay_url), what event_ids (EventId) represents, the shared mutable chunks map
(Arc<Mutex<HashMap<u32, ChunkPayload>>>) and how it is updated, the meaning of
expected_count and how it affects early termination, any timeout/retry semantics
and error handling, what filtering is applied to incoming data, concurrency
considerations (locking behavior and potential deadlocks), and what callers can
expect on success/failure (return behavior, panics, logs). Include examples or
note invariants if relevant and document any assumptions about chunk indexing
keys and ordering.
- Around line 642-707: The manifest.chunk_ids are assumed to map position i ->
chunk index i but aren’t validated; update the code around chunk_ids/event_ids
handling to either (1) validate that each entry’s chunk index matches its
enumerate position (ensure chunk_ids are ordered sequentially 0..n-1) or (2)
stop relying on enumerate and propagate explicit (index, EventId) pairs into
relay_to_chunks and into fetch_chunks_from_relay so the fetch tasks receive the
original chunk index. Modify the loop that builds relay_to_chunks and the
spawned tasks to use and pass chunk indices (u32/usize) with EventId (and update
fetch_chunks_from_relay call signature/usage if needed) or add a check after
parsing event_ids that verifies indices are sequential and return an Err if not.
🧹 Nitpick comments (13)
android/bugstr-nostr-crypto/src/main/kotlin/com/bugstr/nostr/crypto/Nip17PayloadBuilder.kt (1)

90-96: Stale comment on line 92.

The comment states the timestamp is "overwritten per gift wrap with randomized timestamp" but the actual code at line 34 sets it to now (the actual current time). Per NIP-59, this is the correct behavior—rumors use actual timestamps while only seal/gift-wrap timestamps are randomized. Update the comment to reflect reality.

📝 Suggested fix
         return UnsignedNostrEvent(
             pubKey = request.senderPubKey,
-            createdAt = 0, // overwritten per gift wrap with randomized timestamp
+            createdAt = 0, // placeholder, overwritten with actual timestamp in buildGiftWraps
             kind = kind,
             tags = tags,
             content = request.plaintext,
         )
android/bugstr-nostr-crypto/src/main/kotlin/com/bugstr/nostr/crypto/Nip17CrashSender.kt (1)

3-3: Unused import: android.util.Base64.

The Base64 import doesn't appear to be used in this file. The chunking encoding is handled by Chunking.encodeChunkHash() and Chunking.encodeChunkData().

♻️ Suggested fix
 package com.bugstr.nostr.crypto
 
-import android.util.Base64
 import org.json.JSONArray
 import org.json.JSONObject
dart/lib/src/bugstr_client.dart (2)

334-358: Consider reusing NDK instance for verification.

A new Ndk instance is created for each chunk verification. For large uploads with many chunks, this could be inefficient. Consider caching/reusing the NDK instance for the duration of the upload operation.

This is a low-priority optimization - the current implementation is correct and follows the same pattern as _publishToRelay.


295-305: Remove unused method _publishToAllRelays.

This method is defined but never called anywhere in the codebase. The chunking logic uses _publishChunkWithVerify with round-robin single-relay distribution instead. Delete this dead code.

python/bugstr/__init__.py (3)

138-139: Consider exposing on_progress in the init() function.

The on_progress callback is added to Config but isn't exposed in the init() function signature (lines 174-182). Users would need to manually create a Config object to use progress reporting, breaking the convenience pattern established by init().

♻️ Proposed fix to expose on_progress in init()
 def init(
     developer_pubkey: str,
     relays: Optional[list[str]] = None,
     environment: Optional[str] = None,
     release: Optional[str] = None,
     redact_patterns: Optional[list[Pattern]] = None,
     before_send: Optional[Callable[[Payload], Optional[Payload]]] = None,
     confirm_send: Optional[Callable[[str, str], bool]] = None,
+    on_progress: Optional[Callable[["Progress"], None]] = None,
 ) -> None:

And in the Config construction around line 211:

         _config = Config(
             developer_pubkey=developer_pubkey,
             relays=relays or DEFAULT_RELAYS.copy(),
             environment=environment,
             release=release,
             redact_patterns=redact_patterns or DEFAULT_REDACTIONS.copy(),
             before_send=before_send,
             confirm_send=confirm_send,
+            on_progress=on_progress,
         )

570-582: Consider logging exceptions at DEBUG level for troubleshooting.

The silent exception handling is acceptable for the retry mechanism, but DEBUG-level logging would help diagnose relay connectivity issues during development. The static analysis flag (BLE001) is noted but the broad catch is appropriate here since any relay error should trigger retry.

♻️ Optional: Add debug logging
+import logging
+
+logger = logging.getLogger(__name__)
+
 def _publish_chunk_to_relay(event: Event, relay_url: str) -> bool:
     """Publish a chunk to a single relay with rate limiting."""
     _wait_for_rate_limit(relay_url)
     try:
         client = Client(Keys.generate())
         client.add_relay(relay_url)
         client.connect()
         client.send_event(event)
         client.disconnect()
         _record_post_time(relay_url)
         return True
-    except Exception:
+    except Exception as e:
+        logger.debug("Failed to publish chunk to %s: %s", relay_url, e)
         return False

756-758: Silent failure is appropriate, but consider optional debug logging.

The design choice to never crash the host app is correct for a crash reporting SDK. However, completely silent failures make integration debugging difficult. Consider logging at DEBUG level (disabled by default) so developers can opt-in to see issues.

♻️ Optional: Add debug logging for troubleshooting
+import logging
+
+logger = logging.getLogger(__name__)
+
     except Exception:
-        # Silent failure - don't crash the app
-        pass
+        # Silent failure - don't crash the app (enable DEBUG logging to see errors)
+        logger.debug("Bugstr: failed to send crash report", exc_info=True)
electron/src/sdk.ts (4)

161-212: Add JSDoc documentation for buildGiftWrap.

Per coding guidelines, public functions should have documentation explaining purpose, parameters, and return values. The brief comment on line 161-163 should be expanded to full JSDoc format.

📝 Suggested JSDoc
-/**
- * Build a NIP-17 gift-wrapped event for a rumor.
- */
-function buildGiftWrap(
+/**
+ * Build a NIP-17 gift-wrapped event for a rumor.
+ *
+ * Creates a three-layer encrypted event:
+ * 1. Rumor (unsigned inner event with empty sig)
+ * 2. Seal (kind 13, encrypted rumor)
+ * 3. Gift wrap (kind 1059, encrypted seal)
+ *
+ * `@param` rumorKind - The kind number for the inner rumor event
+ * `@param` content - The content string for the rumor
+ * `@param` senderPrivkey - The sender's private key
+ * `@param` recipientPubkey - The recipient's hex pubkey
+ * `@returns` The finalized gift-wrapped event
+ */
+function buildGiftWrap(
   rumorKind: number,
   content: string,
   senderPrivkey: Uint8Array,
   recipientPubkey: string
 ): ReturnType<typeof finalizeEvent> {

The NIP-17/59 implementation looks correct: rumor includes computed id via getEventHash and sig: "" as required. Based on learnings.


214-234: LGTM with minor suggestion.

Good privacy design using ephemeral keys per chunk and randomized timestamps. Consider adding JSDoc documentation per coding guidelines.

📝 Optional JSDoc
-/**
- * Build a public chunk event (kind 10422).
- */
-function buildChunkEvent(chunk: ChunkData): ReturnType<typeof finalizeEvent> {
+/**
+ * Build a public chunk event (kind 10422).
+ *
+ * Each chunk is published with an ephemeral keypair for unlinkability.
+ *
+ * `@param` chunk - The chunk data containing index, hash, and encrypted content
+ * `@returns` The finalized chunk event
+ */
+function buildChunkEvent(chunk: ChunkData): ReturnType<typeof finalizeEvent> {

306-325: Add JSDoc documentation.

The function logic is correct. Per coding guidelines, add JSDoc explaining the first-success publish behavior.

📝 Suggested JSDoc
-/**
- * Publish an event to the first successful relay.
- */
-async function publishToRelays(
+/**
+ * Publish an event to the first successful relay.
+ *
+ * Attempts each relay in order, returning after the first successful publish.
+ * Throws if no relay accepts the event.
+ *
+ * `@param` relays - Array of relay URLs to try
+ * `@param` event - The finalized event to publish
+ * `@throws` Error if all relays reject the event
+ */
+async function publishToRelays(

327-346: LGTM with documentation suggestion.

Good use of Promise.allSettled for resilient multi-relay publishing with proper error threshold (fails only if zero relays succeed). Consider adding JSDoc per coding guidelines.

📝 Suggested JSDoc
-/**
- * Publish an event to all relays (for chunk redundancy).
- */
-async function publishToAllRelays(
+/**
+ * Publish an event to all relays for redundancy.
+ *
+ * Publishes in parallel to all relays. Succeeds if at least one relay
+ * accepts the event.
+ *
+ * `@param` relays - Array of relay URLs to publish to
+ * `@param` event - The finalized event to publish
+ * `@throws` Error if no relay accepts the event
+ */
+async function publishToAllRelays(
rust/docs/NIP17_NIP59_AUDIT.md (2)

274-278: Remove redundant documentation recommendation.

Line 278 suggests documenting the protocol inconsistency between Android (kind 14 wrapper) and other SDKs (direct kind 10420/10421), but this is already documented at line 48:

"Note: Android wraps DirectPayload inside a kind 14 rumor (per NIP-17), while other SDKs use kind 10420/10421 directly."

Consider removing this item or marking it as completed.

📝 Proposed change
 ### Priority 4 (Low)
 - [ ] Log verified sender pubkey prominently in receiver
 - [ ] ~~Consider kind:10050 relay discovery for better deliverability~~ - Using hardcoded relay list
 - [ ] Add protocol version field to allow future evolution
-- [ ] Document the protocol inconsistency between Android (kind 14 wrapper) and other SDKs (direct kind 10420/10421)
+- [x] Document the protocol inconsistency between Android (kind 14 wrapper) and other SDKs (direct kind 10420/10421) - Already documented in line 48

282-286: Format URLs as proper markdown links.

The bare URLs should be formatted as proper markdown links for better readability.

Based on static analysis hints.

🔗 Proposed fix
 ## Specification References
 
-- **NIP-17:** https://github.com/nostr-protocol/nips/blob/master/17.md
-- **NIP-59:** https://github.com/nostr-protocol/nips/blob/master/59.md
-- **NIP-44:** https://github.com/nostr-protocol/nips/blob/master/44.md
+- **NIP-17:** [Private Direct Messages](https://github.com/nostr-protocol/nips/blob/master/17.md)
+- **NIP-59:** [Gift Wrap](https://github.com/nostr-protocol/nips/blob/master/59.md)
+- **NIP-44:** [Encrypted Payloads (Versioned)](https://github.com/nostr-protocol/nips/blob/master/44.md)

Comment on lines +438 to 457
for (var i = 0; i < totalChunks; i++) {
  final chunk = result.chunks[i];
  final chunkEvent = _buildChunkEvent(chunk);
  chunkIds.add(chunkEvent.id);

  // Publish with verification and retry (starts at round-robin relay)
  final successRelay =
      await _publishChunkWithVerify(chunkEvent, relays, i % relays.length);
  if (successRelay != null) {
    chunkRelays[chunkEvent.id] = [successRelay];
  }
  // If all relays failed, chunk is lost - receiver will report missing chunk

  // Report progress
  final remainingChunks = totalChunks - i - 1;
  final remainingSeconds =
      _estimateUploadSeconds(remainingChunks, relays.length);
  _onProgress
      ?.call(BugstrProgress.uploading(i + 1, totalChunks, remainingSeconds));
}

⚠️ Potential issue | 🟠 Major

Upload should abort when chunk publish fails.

If _publishChunkWithVerify returns null (all relays failed), the loop continues publishing subsequent chunks and ultimately sends a manifest with missing chunk data. This wastes bandwidth and guarantees the receiver cannot reassemble the report.

Per PR commit messages, the Python SDK was updated to "abort upload on chunk failure" - the Dart implementation should match this behavior.

🐛 Proposed fix to abort on chunk failure
           final successRelay =
               await _publishChunkWithVerify(chunkEvent, relays, i % relays.length);
           if (successRelay != null) {
             chunkRelays[chunkEvent.id] = [successRelay];
+          } else {
+            debugPrint('Bugstr: chunk $i failed on all relays, aborting upload');
+            _onProgress?.call(BugstrProgress.failed('Chunk upload failed'));
+            return;
           }
-          // If all relays failed, chunk is lost - receiver will report missing chunk

           // Report progress

Note: This assumes BugstrProgress.failed() exists or can be added to the progress types. If not, adjust the progress reporting accordingly.

🤖 Prompt for AI Agents
In `@dart/lib/src/bugstr_client.dart` around lines 438 - 457, The loop in
bugstr_client.dart that iterates over chunks must abort the entire upload when
_publishChunkWithVerify returns null (indicating all relays failed); modify the
for-loop that builds chunkEvent via _buildChunkEvent and calls
_publishChunkWithVerify so that on null it reports failure via
_onProgress?.call(BugstrProgress.failed(...)) (or an equivalent failure
progress), avoids adding the chunk id to chunkIds/chunkRelays, and immediately
terminates the upload flow (return early or throw) so the manifest is not sent
with missing chunks. Ensure references to chunkIds, chunkRelays, _onProgress and
totalChunks are handled consistently when aborting.

Comment on lines +244 to +251
const rawJson = JSON.stringify(payload);
const compressed = maybeCompressPayload(rawJson);
const payloadBytes = Buffer.from(compressed, "utf-8");
const transportKind = getTransportKind(payloadBytes.length);

if (transportKind === "direct") {
  // Small payload: direct gift-wrapped delivery
  const directPayload = createDirectPayload(payload as Record<string, unknown>);

⚠️ Potential issue | 🟠 Major

Size check uses compressed data but direct path sends uncompressed payload.

The transport kind decision is based on compressed payload size (line 246-247), but the direct path (line 251) uses the original uncompressed payload object via createDirectPayload. This creates inconsistency:

  • A payload that compresses to <50KB but is larger uncompressed will be sent as "direct" but the actual event content will exceed the threshold.

For consistency, either check the uncompressed size or send the compressed payload in the direct path.

🐛 Suggested fix: Use compressed payload consistently
   if (transportKind === "direct") {
     // Small payload: direct gift-wrapped delivery
-    const directPayload = createDirectPayload(payload as Record<string, unknown>);
+    const directPayload = { v: 1, payload: compressed };
     const giftWrap = buildGiftWrap(
       KIND_DIRECT,
       JSON.stringify(directPayload),
       senderPrivkey,
       developerPubkeyHex
     );

Alternatively, check rawJson.length instead of payloadBytes.length for the transport decision if you want to keep the direct path sending uncompressed data.

🤖 Prompt for AI Agents
In `@electron/src/sdk.ts` around lines 244 - 251, The transport decision uses
compressed size (payloadBytes via maybeCompressPayload) but the direct send
calls createDirectPayload(payload) with uncompressed content, causing
inconsistency; fix by either (A) making the size check use the uncompressed JSON
length (call getTransportKind with rawJson.length instead of
payloadBytes.length) so direct sending of the original payload is valid, or (B)
keep the compressed-based decision and change the direct path to send the
compressed form (use the compressed string/bytes when constructing the direct
payload instead of the original payload) — update the call sites around
maybeCompressPayload, getTransportKind, and createDirectPayload accordingly.


// Publish chunks to all relays with delay to avoid rate limiting
const chunkIds: string[] = [];
const CHUNK_PUBLISH_DELAY_MS = 100; // Delay between chunks to avoid relay rate limits

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's look at the specific lines in question
cd /tmp && cat -n electron/src/sdk.ts | sed -n '240,310p'

Repository: alltheseas/bugstr

Length of output: 112


🏁 Script executed:

# Search for delay-related constants and rate limit mentions across SDKs
rg -n "CHUNK.*DELAY|chunk.*delay|RATE.*LIMIT|rate.*limit|7500|100" electron/src/ -A2 -B2

Repository: alltheseas/bugstr

Length of output: 1470


🏁 Script executed:

# Look for any PR description, README, or documentation files mentioning rate limits
rg -n "rate limit|relay.*delay|7500|chunk.*delay" --type md --type txt -C3

Repository: alltheseas/bugstr

Length of output: 7669


🏁 Script executed:

# Check if there are other references to timing or delay strategies in the sendToNostr function
rg -n "sendToNostr|publish.*delay" electron/src/sdk.ts -A50

Repository: alltheseas/bugstr

Length of output: 3788


Increase CHUNK_PUBLISH_DELAY_MS to 7500ms to respect relay rate limits.

The current 100ms delay is insufficient for relays using strfry+noteguard rate limiting (8 posts/minute = 7500ms between posts). Documentation confirms the default relays (relay.damus.io, nos.lol, relay.primal.net) enforce this limit. Publishing chunks at 100ms intervals will exceed the 8 posts/minute threshold and cause publish failures.

🤖 Prompt for AI Agents
In `@electron/src/sdk.ts` at line 273, CHUNK_PUBLISH_DELAY_MS is currently set to
100ms which violates strfry+noteguard relay limits; update the constant
CHUNK_PUBLISH_DELAY_MS in sdk.ts to 7500 (ms) and adjust its inline comment to
mention the 8 posts/minute relay rate limit so chunk publishing waits ~7.5s
between posts.
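
A sketch of the paced publish loop, deriving the delay from the posts-per-minute cap rather than hard-coding it (`publishToRelays` and the event list are assumed from the surrounding SDK code):

```ts
declare function publishToRelays(relays: string[], event: { id: string }): Promise<void>;
declare const relays: string[];
declare const chunkEvents: { id: string }[];

// strfry+noteguard limit cited above: 8 posts/minute means 7500ms spacing.
const POSTS_PER_MINUTE = 8;
const CHUNK_PUBLISH_DELAY_MS = Math.ceil(60_000 / POSTS_PER_MINUTE); // 7500

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

for (const [i, event] of chunkEvents.entries()) {
  if (i > 0) await sleep(CHUNK_PUBLISH_DELAY_MS); // no wait before the first post
  await publishToRelays(relays, event);
}
```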

Comment on lines +285 to +302
// Build and publish manifest
const manifest: ManifestPayload = {
  v: 1,
  root_hash: rootHash,
  total_size: totalSize,
  chunk_count: chunks.length,
  chunk_ids: chunkIds,
};

const manifestGiftWrap = buildGiftWrap(
  KIND_MANIFEST,
  JSON.stringify(manifest),
  senderPrivkey,
  developerPubkeyHex
);

await publishToRelays(relays, manifestGiftWrap);
console.info("Bugstr: sent chunked crash report manifest");

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check ManifestPayload structure and if relay hints are used elsewhere
rg -n "relay_hints|relayHints|ManifestPayload" --type ts --type rust -C3

Repository: alltheseas/bugstr

Length of output: 8692


🏁 Script executed:

#!/bin/bash
# Check full ManifestPayload type definition in electron and react-native
echo "=== Electron ManifestPayload ===" && sed -n '30,40p' electron/src/transport.ts
echo ""
echo "=== React-native ManifestPayload ===" && sed -n '139,150p' react-native/src/transport.ts
echo ""
echo "=== React-native manifest construction ===" && sed -n '441,455p' react-native/src/index.ts

Repository: alltheseas/bugstr

Length of output: 1108


🏁 Script executed:

#!/bin/bash
# Verify if PR objectives mention relay hints
git log --all --oneline -n 20 | head -20
echo ""
echo "=== Check for PR description or manifest relay hints references ==="
rg -i "manifest.*relay|relay.*hint|efficient.*fetch" --type ts --type rust -B2 -A2 | head -50

Repository: alltheseas/bugstr

Length of output: 3877


🏁 Script executed:

#!/bin/bash
# Check if electron/src/sdk.ts builds chunkRelays before the manifest construction
sed -n '250,290p' electron/src/sdk.ts

Repository: alltheseas/bugstr

Length of output: 1610


Add relay hint support to the Electron manifest to match other SDKs.

The Rust and React Native implementations include chunk_relays in the ManifestPayload to optimize chunk fetching, but the Electron implementation omits it. The ManifestPayload type needs an optional chunk_relays field, and the manifest construction should collect and record the relay URLs for each published chunk to enable efficient retrieval (see the sketch after the prompt below).

🤖 Prompt for AI Agents
In `@electron/src/sdk.ts` around lines 285 - 302, Update the manifest to include
an optional chunk_relays field and populate it with the relay URLs used for each
published chunk: extend the ManifestPayload type to add chunk_relays?:
string[][], then when constructing the manifest (the manifest constant in
electron/src/sdk.ts) build a chunk_relays array that aligns with chunkIds by
collecting the relay URLs returned/used when publishing each chunk (the same
relays used with publishToRelays for chunks), and include chunk_relays in the
object passed to buildGiftWrap (keep using buildGiftWrap and publishToRelays as
before). Ensure chunk_relays is an array of arrays of relay URLs matching chunks
and is omitted if empty.
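
A sketch of the shape this could take (the `chunk_relays` field mirrors the Rust/React Native manifests; `publishChunk` returning the accepting relay URLs is an assumption, since the current `publishToRelays` does not report them):

```ts
type ManifestPayload = {
  v: number;
  root_hash: string;
  total_size: number;
  chunk_count: number;
  chunk_ids: string[];
  chunk_relays?: string[][]; // chunk_relays[i] = relays that accepted chunk i
};

// Assumed helper: publishes one chunk and reports which relays accepted it.
declare function publishChunk(relays: string[], event: { id: string }): Promise<string[]>;
declare const relays: string[];
declare const chunkEvents: { id: string }[];
declare const rootHash: string;
declare const totalSize: number;

const chunkIds: string[] = [];
const chunkRelays: string[][] = [];
for (const event of chunkEvents) {
  chunkRelays.push(await publishChunk(relays, event));
  chunkIds.push(event.id);
}

const manifest: ManifestPayload = {
  v: 1,
  root_hash: rootHash,
  total_size: totalSize,
  chunk_count: chunkIds.length,
  chunk_ids: chunkIds,
  // Omit the hints entirely if nothing useful was collected.
  ...(chunkRelays.some((r) => r.length > 0) ? { chunk_relays: chunkRelays } : {}),
};
```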

Comment on lines +594 to +623
// publishToAllRelays publishes an event to all relays for redundancy.
func publishToAllRelays(ctx context.Context, relays []string, event nostr.Event) error {
    var wg sync.WaitGroup
    successCount := 0
    var mu sync.Mutex

    for _, relayURL := range relays {
        wg.Add(1)
        go func(url string) {
            defer wg.Done()
            relay, err := nostr.RelayConnect(ctx, url)
            if err != nil {
                return
            }
            err = relay.Publish(ctx, event)
            relay.Close()
            if err == nil {
                mu.Lock()
                successCount++
                mu.Unlock()
            }
        }(relayURL)
    }

    wg.Wait()
    if successCount == 0 {
        return fmt.Errorf("failed to publish chunk to any relay")
    }
    return nil
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if publishToAllRelays is used anywhere in the codebase
rg -n "publishToAllRelays" go/

Repository: alltheseas/bugstr

Length of output: 254


🏁 Script executed:

#!/bin/bash
# Double-check by searching for variations and the function body
rg -c "publishToAllRelays|publishToRelays" go/bugstr.go

Repository: alltheseas/bugstr

Length of output: 62


🏁 Script executed:

#!/bin/bash
# Check the distinction between publishToAllRelays and publishToRelays
rg -n "publishToRelays|publishToAllRelays" go/bugstr.go

Repository: alltheseas/bugstr

Length of output: 501


Remove unused function publishToAllRelays.

This function is not called anywhere in the codebase. The active code path uses publishToRelays (which publishes to the first successful relay) instead.

🤖 Prompt for AI Agents
In `@go/bugstr.go` around lines 594 - 623, Remove the unused function
publishToAllRelays from the codebase: delete the entire publishToAllRelays(ctx
context.Context, relays []string, event nostr.Event) implementation since it is
never referenced and the codebase uses publishToRelays instead; ensure no
remaining references to publishToAllRelays exist and run `go build`/tests to
verify no regressions.

Comment on lines +779 to +783
if err != nil {
    // All relays failed for this chunk - abort to avoid partial/broken manifest
    log.Printf("Failed to publish chunk %d/%d (id: %s): %v", i, totalChunks, chunkEvent.ID, err)
    return fmt.Errorf("failed to publish chunk %d to any relay after retries: %w", i, err)
}

⚠️ Potential issue | 🔴 Critical

Compilation error: log package not imported.

log.Printf is used at line 781, but the log package is not in the imports. This will cause a build failure.

Fix: Add log import or use fmt

Option 1 - Add log import:

 import (
 	"bytes"
 	"compress/gzip"
 	"context"
 	"crypto/aes"
 	"crypto/cipher"
 	"crypto/sha256"
 	"encoding/base64"
 	"encoding/hex"
 	"encoding/json"
 	"fmt"
 	"io"
+	"log"
 	mathrand "math/rand"

Option 2 - Remove the log call (error is already returned):

 		if err != nil {
 			// All relays failed for this chunk - abort to avoid partial/broken manifest
-			log.Printf("Failed to publish chunk %d/%d (id: %s): %v", i, totalChunks, chunkEvent.ID, err)
 			return fmt.Errorf("failed to publish chunk %d to any relay after retries: %w", i, err)
 		}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
		if err != nil {
			// All relays failed for this chunk - abort to avoid partial/broken manifest
			log.Printf("Failed to publish chunk %d/%d (id: %s): %v", i, totalChunks, chunkEvent.ID, err)
			return fmt.Errorf("failed to publish chunk %d to any relay after retries: %w", i, err)
		}

import (
	"bytes"
	"compress/gzip"
	"context"
	"crypto/aes"
	"crypto/cipher"
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"io"
	"log"
	mathrand "math/rand"
🤖 Prompt for AI Agents
In `@go/bugstr.go` around lines 779 - 783, The code calls log.Printf when handling
a failed chunk publish (the snippet using chunkEvent, i, totalChunks and
fmt.Errorf) but the log package isn't imported; either add the standard "log"
import to the file's import block or replace the log.Printf call with an
existing imported option (e.g., fmt.Printf or fmt.Fprintf(os.Stderr,...)) and
remove the log usage; update the import list accordingly if you add "log" so the
build succeeds.

Comment on lines +527 to +547
def _publish_to_all_relays(event: Event) -> None:
    """Publish an event to all relays for redundancy."""
    # Use threads to publish in parallel
    threads = []
    for relay_url in _config.relays:
        def publish_to_relay(url):
            try:
                client.add_relay(relay_url)
                client = Client(Keys.generate())
                client.add_relay(url)
                client.connect()
                client.send_event(event)
                client.disconnect()
            except Exception:
                pass

        t = threading.Thread(target=publish_to_relay, args=(relay_url,))
        threads.append(t)
        t.start()

    for t in threads:
        t.join(timeout=10)

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

rg -n "_publish_to_all_relays" --type py

Repository: alltheseas/bugstr

Length of output: 140


🏁 Script executed:

# Double-check by searching for any indirect calls or references
rg -n "publish_to_all" --type py

Repository: alltheseas/bugstr

Length of output: 140


Remove unused _publish_to_all_relays function.

This function is defined at line 527 but never called anywhere in the codebase. It's dead code and should be removed. (As written it is also broken: the closure references client before it is assigned, so the first client.add_relay(relay_url) call would raise UnboundLocalError if the function were ever invoked.)

🧰 Tools
🪛 Ruff (0.14.13)

539-540: try-except-pass detected, consider logging the exception

(S110)


539-539: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
In `@python/bugstr/__init__.py` around lines 527 - 547, Remove the unused dead
function _publish_to_all_relays from the module: delete the entire function
definition (including its inner publish_to_relay closure, thread creation and
join logic) and remove any now-unreferenced symbols/imports that were only used
by it (e.g., threading, Client, Keys) to avoid unused-import warnings; ensure no
other code references _publish_to_all_relays before deleting.

Comment on lines +642 to +707
// Parse event IDs
let event_ids: Vec<EventId> = chunk_ids
    .iter()
    .filter_map(|id| EventId::from_hex(id).ok())
    .collect();

if event_ids.len() != chunk_ids.len() {
    return Err("Invalid chunk event IDs in manifest".into());
}

let expected_count = chunk_ids.len();
let chunks: Arc<TokioMutex<HashMap<u32, ChunkPayload>>> = Arc::new(TokioMutex::new(HashMap::new()));

// Determine if we have relay hints
let has_hints = chunk_relays.map(|h| !h.is_empty()).unwrap_or(false);

if has_hints {
    println!(" {} Fetching {} chunks using relay hints", "↓".blue(), expected_count);

    // Phase 1: Try hinted relays first (grouped by relay for efficiency)
    let mut relay_to_chunks: HashMap<String, Vec<(usize, EventId)>> = HashMap::new();

    for (i, chunk_id) in chunk_ids.iter().enumerate() {
        if let Some(hints) = chunk_relays.and_then(|h| h.get(chunk_id)) {
            if let Some(relay) = hints.first() {
                relay_to_chunks
                    .entry(relay.clone())
                    .or_default()
                    .push((i, event_ids[i]));
            }
        }
    }

    // Spawn parallel fetch tasks for hinted relays
    let mut handles = Vec::new();
    for (relay_url, chunk_indices) in relay_to_chunks {
        let relay = relay_url.clone();
        let ids: Vec<EventId> = chunk_indices.iter().map(|(_, id)| *id).collect();
        let chunks_clone = Arc::clone(&chunks);
        let expected = expected_count;

        let handle = tokio::spawn(async move {
            fetch_chunks_from_relay(&relay, &ids, chunks_clone, expected).await
        });
        handles.push(handle);
    }

    // Wait for hinted relay fetches
    for handle in handles {
        let _ = handle.await;
    }

    // Check if we got all chunks from hinted relays
    let current_count = chunks.lock().await.len();
    if current_count == expected_count {
        println!(" {} All {} chunks retrieved from hinted relays", "✓".green(), expected_count);
        let final_chunks = chunks.lock().await;
        let mut ordered: Vec<ChunkPayload> = Vec::with_capacity(expected_count);
        for i in 0..expected_count {
            match final_chunks.get(&(i as u32)) {
                Some(chunk) => ordered.push(chunk.clone()),
                None => return Err(format!("Missing chunk at index {}", i).into()),
            }
        }
        return Ok(ordered);
    }

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Inspect manifest creation to confirm chunk_ids ordering by chunk index.
rg -n "chunk_ids" rust/src/chunking.rs rust/src/transport.rs -C3
rg -n "ManifestPayload" rust/src/chunking.rs -C3

Repository: alltheseas/bugstr

Length of output: 4167


🏁 Script executed:

#!/bin/bash
# Find where chunk_ids is populated after publishing
rg -n "chunk_ids.*=" rust/src/bin/main.rs rust/src/lib.rs -C5
rg -n "manifest.*chunk_ids" rust/src/bin/main.rs -C3

Repository: alltheseas/bugstr

Length of output: 397


🏁 Script executed:

#!/bin/bash
# Search for where manifest is modified or chunk_ids is set
rg -n "manifest\.chunk_ids\s*=" rust/src/ -C5
rg -n "chunk_ids.*push\|chunk_ids.*collect" rust/src/ -C3

Repository: alltheseas/bugstr

Length of output: 43


🏁 Script executed:

#!/bin/bash
# Search for publish or posting chunks, and how the manifest is updated
rg -n "publish_chunks\|publish_chunk\|post_chunk" rust/src/ -C5
rg -n "ManifestPayload\s*{" rust/src/bin/main.rs -C8

Repository: alltheseas/bugstr

Length of output: 162


🏁 Script executed:

#!/bin/bash
# Fix regex and search for publish_chunks and manifest population
rg -n "publish_chunks" rust/src/ -C10
# Also look for event_ids and how they're collected in main.rs
rg -n "event_ids\|chunks_payload" rust/src/bin/main.rs -B3 -A3

Repository: alltheseas/bugstr

Length of output: 43


🏁 Script executed:

#!/bin/bash
# Read main.rs to find chunk publishing logic
# First check file size and then read relevant sections
wc -l rust/src/bin/main.rs
# Look around line 100-300 for chunk/event creation
sed -n '100,300p' rust/src/bin/main.rs | cat -n

Repository: alltheseas/bugstr

Length of output: 7710


🏁 Script executed:

#!/bin/bash
# Search for chunk creation and publishing in serve function
rg -n "fn serve\|fn listen" rust/src/bin/main.rs -C2
# Then check the area where chunks might be created and event_ids collected
sed -n '700,900p' rust/src/bin/main.rs | cat -n

Repository: alltheseas/bugstr

Length of output: 8013


🏁 Script executed:

#!/bin/bash
# Find ChunkPayload structure definition
rg -n "struct ChunkPayload\|pub index" rust/src/transport.rs -C5
# Also check how chunks are created in chunking.rs
sed -n '1,150p' rust/src/chunking.rs | cat -n

Repository: alltheseas/bugstr

Length of output: 5963


🏁 Script executed:

#!/bin/bash
# Search for where chunk_ids is populated after chunks are published
# Look for code that collects event IDs from published chunks
rg -n "chunk_ids\|event_id\|publish" rust/src/bin/main.rs -B2 -A2 | head -100
# Also check what happens in serve function around chunk publishing
grep -n "fn serve\|fn listen" rust/src/bin/main.rs

Repository: alltheseas/bugstr

Length of output: 101


🏁 Script executed:

#!/bin/bash
# Read the serve function to see chunk publishing logic
sed -n '416,600p' rust/src/bin/main.rs | cat -n

Repository: alltheseas/bugstr

Length of output: 7753


🏁 Script executed:

#!/bin/bash
# Search for code that publishes chunks and collects event IDs
rg -n "publish\|post\|Event\|EventBuilder" rust/src/ --type rs | grep -i "chunk\|manifest" -C2
# Also search for where manifest gets its chunk_ids populated
rg -n "manifest\s*=" rust/src/ --type rs -A5 | head -100

Repository: alltheseas/bugstr

Length of output: 114


🏁 Script executed:

#!/bin/bash
# Search for code that publishes chunks and collects event IDs
rg -n "publish\|post\|Event" rust/src/ -t rust | grep -i "chunk" | head -50
# Search in main.rs specifically for chunk publishing
rg -n "ChunkPayload\|event_ids" rust/src/bin/main.rs -t rust -B2 -A2

Repository: alltheseas/bugstr

Length of output: 43


🏁 Script executed:

#!/bin/bash
# Let's look at the full context around lines 642-707 in main.rs
sed -n '630,720p' rust/src/bin/main.rs | cat -n

Repository: alltheseas/bugstr

Length of output: 4273


🏁 Script executed:

#!/bin/bash
# Check transport.rs documentation for chunk_ids ordering
rg -n "chunk_ids\|chunk_count\|ManifestPayload" rust/src/transport.rs -B3 -A3 | head -80
# Check if there's any validation of ordering
rg -n "assert\|validate\|verify.*chunk\|index" rust/src/ -t rust | grep -i "chunk\|index\|order" | head -40

Repository: alltheseas/bugstr

Length of output: 43


Add validation that manifest.chunk_ids are ordered sequentially by chunk index.

The code assumes chunk_ids[i] contains the event ID for the chunk with index=i (lines 677-681 enumerate positions directly). The transport documentation states "Ordered list" but doesn't enforce sequential index order. If manifest creation ever populates chunk_ids differently (e.g., [event_2, event_0, event_1]), the Phase 2 fallback could skip chunks. Either validate that chunk_ids matches chunk indices before fetching, or pass explicit (index, event_id) tuples to fetch_chunks_from_relay instead of relying on enumerate position. A sketch of the parse-time check follows the prompt below.

🤖 Prompt for AI Agents
In `@rust/src/bin/main.rs` around lines 642 - 707, The manifest.chunk_ids are
assumed to map position i -> chunk index i but aren’t validated; update the code
around chunk_ids/event_ids handling to either (1) validate that each entry’s
chunk index matches its enumerate position (ensure chunk_ids are ordered
sequentially 0..n-1) or (2) stop relying on enumerate and propagate explicit
(index, EventId) pairs into relay_to_chunks and into fetch_chunks_from_relay so
the fetch tasks receive the original chunk index. Modify the loop that builds
relay_to_chunks and the spawned tasks to use and pass chunk indices (u32/usize)
with EventId (and update fetch_chunks_from_relay call signature/usage if needed)
or add a check after parsing event_ids that verifies indices are sequential and
return an Err if not.
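
A sketch of the parse-time check, option (1) (written in TypeScript for consistency with the other examples in this review; the receiver itself is Rust, and the chunk payload's self-declared index field is assumed to be authoritative):

```ts
type ChunkPayload = { index: number; data: string };

// Reject any chunk whose self-declared index disagrees with its position in
// manifest.chunk_ids, instead of silently reassembling out of order.
function verifyChunkOrder(chunkIds: string[], fetched: Map<string, ChunkPayload>): void {
  chunkIds.forEach((eventId, position) => {
    const chunk = fetched.get(eventId);
    if (chunk !== undefined && chunk.index !== position) {
      throw new Error(
        `chunk ${eventId} declares index ${chunk.index} but sits at position ${position}`
      );
    }
  });
}
```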

Comment on lines +761 to +767
/// Fetch chunks from a single relay into the shared chunks map.
async fn fetch_chunks_from_relay(
    relay_url: &str,
    event_ids: &[EventId],
    chunks: Arc<tokio::sync::Mutex<std::collections::HashMap<u32, ChunkPayload>>>,
    expected_count: usize,
) {

🛠️ Refactor suggestion | 🟠 Major

Add full rustdoc for fetch_chunks_from_relay.

This is a non-trivial helper (timeouts, filtering, shared state) and needs parameter/behavior documentation.

♻️ Suggested rustdoc expansion
-/// Fetch chunks from a single relay into the shared chunks map.
+/// Fetch chunks from a single relay into the shared chunks map.
+///
+/// # Parameters
+/// * `relay_url` - Relay WebSocket URL to query.
+/// * `event_ids` - Chunk event IDs to request from this relay.
+/// * `chunks` - Shared map populated by chunk index.
+/// * `expected_count` - Total chunk count used for early exit.
+///
+/// # Behavior
+/// - Connects with a 10s timeout and subscribes for `KIND_CHUNK`.
+/// - Inserts each unique chunk by index and logs progress.
+/// - Stops on EOSE, close, errors, or overall timeout.

As per coding guidelines, non-trivial functions require full rustdoc.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
/// Fetch chunks from a single relay into the shared chunks map.
async fn fetch_chunks_from_relay(
    relay_url: &str,
    event_ids: &[EventId],
    chunks: Arc<tokio::sync::Mutex<std::collections::HashMap<u32, ChunkPayload>>>,
    expected_count: usize,
) {
/// Fetch chunks from a single relay into the shared chunks map.
///
/// # Parameters
/// * `relay_url` - Relay WebSocket URL to query.
/// * `event_ids` - Chunk event IDs to request from this relay.
/// * `chunks` - Shared map populated by chunk index.
/// * `expected_count` - Total chunk count used for early exit.
///
/// # Behavior
/// - Connects with a 10s timeout and subscribes for `KIND_CHUNK`.
/// - Inserts each unique chunk by index and logs progress.
/// - Stops on EOSE, close, errors, or overall timeout.
async fn fetch_chunks_from_relay(
    relay_url: &str,
    event_ids: &[EventId],
    chunks: Arc<tokio::sync::Mutex<std::collections::HashMap<u32, ChunkPayload>>>,
    expected_count: usize,
) {
🤖 Prompt for AI Agents
In `@rust/src/bin/main.rs` around lines 761 - 767, Add a complete rustdoc comment
for the async helper function fetch_chunks_from_relay describing its purpose,
parameters, behavior and side-effects: document that it fetches chunks from a
single relay URL (relay_url), what event_ids (EventId) represents, the shared
mutable chunks map (Arc<Mutex<HashMap<u32, ChunkPayload>>>) and how it is
updated, the meaning of expected_count and how it affects early termination, any
timeout/retry semantics and error handling, what filtering is applied to
incoming data, concurrency considerations (locking behavior and potential
deadlocks), and what callers can expect on success/failure (return behavior,
panics, logs). Include examples or note invariants if relevant and document any
assumptions about chunk indexing keys and ordering.

Comment on lines 867 to 873
 /// Handle incoming message and return crash for storage.
-fn handle_message_for_storage(
+async fn handle_message_for_storage(
     text: &str,
     keys: &Keys,
     seen: &mut HashSet<EventId>,
     relay_urls: &[String],
 ) -> Option<ReceivedCrash> {

🛠️ Refactor suggestion | 🟠 Major

Expand rustdoc for handle_message_for_storage.

The new relay_urls parameter and async behavior should be documented for callers.

♻️ Suggested rustdoc expansion
-/// Handle incoming message and return crash for storage.
+/// Handle incoming relay message and return a crash ready for storage.
+///
+/// # Parameters
+/// * `text` - Raw relay message payload.
+/// * `keys` - Receiver keys for decrypting gift wraps.
+/// * `seen` - Dedup set updated in-place.
+/// * `relay_urls` - All relays available for chunk fetch fallback.
+///
+/// # Returns
+/// * `Some(ReceivedCrash)` when decoding succeeds.
+/// * `None` for non-event messages or failures (logged).

As per coding guidelines, non-trivial functions require full rustdoc.

🤖 Prompt for AI Agents
In `@rust/src/bin/main.rs` around lines 867 - 873, Add a comprehensive rustdoc
comment for the async function handle_message_for_storage explaining its purpose
(parses/handles an incoming message and returns an Option<ReceivedCrash> for
storage), describe all parameters (text: &str input message, keys: &Keys
credentials/context used during processing, seen: &mut HashSet<EventId>
deduplication/mutation of seen events, relay_urls: &[String] list of relay
endpoints used/queried and how it affects behavior), explicitly note that the
function is async and may perform I/O/network calls (and therefore must be
awaited), describe the return value semantics (when Some(ReceivedCrash) is
produced vs None), list any side effects (mutates seen, uses Keys), and call out
error handling/propagation behavior if relevant so callers know how to use and
await this function.

- Validate seal.tags is empty (NIP-59 MUST)
- Validate rumor.sig is empty (NIP-59: rumors are unsigned)
- Validate seal.pubkey == rumor.pubkey (NIP-17: prevent sender spoofing)

Document intentional deviation: sender gift-wrap skipped for crash reporting
(sender doesn't need to read back crash reports).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@rust/docs/NIP17_NIP59_AUDIT.md`:
- Around line 54-71: Update the doc to reflect that the Android seal-tags issue
has been remediated: change the Android row status in the compatibility table
from "⚠️ NO (VIOLATION)" to a passing indicator (e.g., "✅"), remove or update
the “⚠️ SPEC VIOLATION (Android)” header, and relocate the Kotlin snippet that
shows the old behavior (the buildList block using sealTags and expirationSeconds
from Nip17PayloadBuilder.kt) into a clearly labeled "previous implementation /
fixed" section noting that expiration tags should only be applied to the gift
wrap (not the seal).
- Around line 29-38: The table showing "⚠️ ±2 days (VIOLATION)" for SDK rumor
timestamps is ambiguous about whether it reflects the state before or after the
"Fix rumor timestamp" change; add a clarifying note immediately above the table
(the block starting with the header row "| SDK | Kind Used | ID Computed | sig |
Timestamp Randomized |") stating that the table reflects the audit findings as
of 2026-01-16 (pre-fix) and point readers to the "Fix rumor timestamp" entry
(the checklist item marked complete on 2026-01-16) for the post-fix status.

In `@rust/src/bin/main.rs`:
- Around line 948-951: The code uses String::from_utf8_lossy(&reassembled) which
can silently replace invalid bytes; change to use
String::from_utf8(reassembled.clone()) and explicitly handle the Err case: try
to convert reassembled into a UTF-8 String with String::from_utf8 and on Ok pass
that to decompress_payload(&payload_str); on Err do not silently lossy-convert —
instead add explicit handling such as calling a new
decompress_payload_bytes(&reassembled) (or change decompress_payload to accept
&[u8]) to handle binary data, and/or log the conversion error before any
fallback; update references to payload_str, decompressed, and decompress_payload
accordingly.
♻️ Duplicate comments (3)
rust/src/bin/main.rs (3)

786-794: Chunk index mismatch in needed-chunks filter.

The filter at line 791 uses the enumerate position *i as u32 to check if a chunk is missing, but ChunkPayload.index (which may differ from enumerate position) is what gets inserted into the map at line 845. If chunk_ids are not ordered by chunk index, this filter may incorrectly skip chunks that are actually needed.

Either validate that chunk_ids are ordered by index at parse time, or use the actual chunk event ID to track fetched state instead of relying on position-to-index equivalence.


761-767: Expand rustdoc for fetch_chunks_from_relay.

The current single-line doc doesn't cover the async behavior, timeouts, shared state semantics, or early-exit conditions. As per coding guidelines, non-trivial functions require full documentation.


867-873: Expand rustdoc for handle_message_for_storage.

The doc comment needs to document the relay_urls parameter, async behavior, and the manifest/chunk reassembly path.

🧹 Nitpick comments (2)
rust/src/bin/main.rs (1)

689-692: Consider logging task failures instead of silently ignoring.

let _ = handle.await discards JoinError which could hide panics in spawned tasks. While the final chunk count check will catch missing data, logging task failures would aid debugging.

♻️ Suggested improvement
         // Wait for hinted relay fetches
         for handle in handles {
-            let _ = handle.await;
+            if let Err(e) = handle.await {
+                eprintln!("    {} Chunk fetch task failed: {:?}", "⚠".yellow(), e);
+            }
         }
rust/docs/NIP17_NIP59_AUDIT.md (1)

287-289: Optional: Format URLs using markdown link syntax.

While bare URLs are functional, markdown link syntax improves readability and is flagged by linters.

♻️ Suggested formatting
-- **NIP-17:** https://github.com/nostr-protocol/nips/blob/master/17.md
-- **NIP-59:** https://github.com/nostr-protocol/nips/blob/master/59.md
-- **NIP-44:** https://github.com/nostr-protocol/nips/blob/master/44.md
+- **NIP-17:** [https://github.com/nostr-protocol/nips/blob/master/17.md](https://github.com/nostr-protocol/nips/blob/master/17.md)
+- **NIP-59:** [https://github.com/nostr-protocol/nips/blob/master/59.md](https://github.com/nostr-protocol/nips/blob/master/59.md)
+- **NIP-44:** [https://github.com/nostr-protocol/nips/blob/master/44.md](https://github.com/nostr-protocol/nips/blob/master/44.md)

Or more concisely:

-- **NIP-17:** https://github.com/nostr-protocol/nips/blob/master/17.md
-- **NIP-59:** https://github.com/nostr-protocol/nips/blob/master/59.md
-- **NIP-44:** https://github.com/nostr-protocol/nips/blob/master/44.md
+- [NIP-17](https://github.com/nostr-protocol/nips/blob/master/17.md)
+- [NIP-59](https://github.com/nostr-protocol/nips/blob/master/59.md)
+- [NIP-44](https://github.com/nostr-protocol/nips/blob/master/44.md)

Comment on lines +29 to +38
| SDK | Kind Used | ID Computed | sig="" | Timestamp Randomized |
|-----|-----------|-------------|--------|----------------------|
| Android | 14 (chat) | ✅ SHA256 | ✅ | ⚠️ ±2 days (VIOLATION) |
| Go | 10420/10421 | ✅ SHA256 | ✅ | ⚠️ ±2 days (VIOLATION) |
| Python | 10420/10421 | ✅ SHA256 | ✅ | ⚠️ ±2 days (VIOLATION) |
| React Native | 10420/10421 | ✅ getEventHash | ✅ | ⚠️ ±2 days (VIOLATION) |
| Dart | 10420/10421 | ✅ SHA256 | ✅ | ⚠️ ±2 days (VIOLATION) |
| Electron | 10420/10421 | ✅ getEventHash | ✅ | ⚠️ ±2 days (VIOLATION) |

**⚠️ SPEC VIOLATION:** All SDKs randomize the rumor's `created_at`. NIP-59 specifies that the rumor should contain the canonical message timestamp, while only the seal and gift-wrap timestamps should be randomized for privacy. This breaks timing semantics.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

find . -name "NIP17_NIP59_AUDIT.md" -type f

Repository: alltheseas/bugstr

Length of output: 93


🏁 Script executed:

cat -n ./rust/docs/NIP17_NIP59_AUDIT.md | head -80

Repository: alltheseas/bugstr

Length of output: 4774


🏁 Script executed:

wc -l ./rust/docs/NIP17_NIP59_AUDIT.md

Repository: alltheseas/bugstr

Length of output: 97


🏁 Script executed:

cat -n ./rust/docs/NIP17_NIP59_AUDIT.md | sed -n '260,289p'

Repository: alltheseas/bugstr

Length of output: 1952


🏁 Script executed:

# Verify the exact table context and any surrounding explanatory text
cat -n ./rust/docs/NIP17_NIP59_AUDIT.md | sed -n '25,50p'

Repository: alltheseas/bugstr

Length of output: 1847


Table timestamps need clarification: Add note indicating whether table shows pre-fix (2026-01-16) or post-fix state.

The rumor timestamp violation table (lines 29-36) displays all SDKs with "⚠️ ±2 days (VIOLATION)", but line 268 marks "Fix rumor timestamp" as complete with [x] on 2026-01-16. Without an explicit header note, the table's temporal perspective is ambiguous—readers cannot determine if it documents the pre-fix audit or should reflect post-fix compliance. Consider adding a note above the table clarifying: "Violations shown below were present at audit date (2026-01-16); see line 268 for fixes."

🤖 Prompt for AI Agents
In `@rust/docs/NIP17_NIP59_AUDIT.md` around lines 29 - 38, The table showing "⚠️
±2 days (VIOLATION)" for SDK rumor timestamps is ambiguous about whether it
reflects the state before or after the "Fix rumor timestamp" change; add a
clarifying note immediately above the table (the block starting with the header
row "| SDK | Kind Used | ID Computed | sig | Timestamp Randomized |") stating
that the table reflects the audit findings as of 2026-01-16 (pre-fix) and point
readers to the "Fix rumor timestamp" entry (the checklist item marked complete
on 2026-01-16) for the post-fix status.
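
For concreteness, a sketch of the timestamp split NIP-59 expects (builder names are assumed stand-ins, not any SDK's real signatures): only the seal and gift wrap get randomized created_at values, while the rumor keeps the canonical message time.

```ts
declare function buildRumor(content: string, createdAt: number): unknown;
declare function buildSeal(rumor: unknown, createdAt: number): unknown;
declare function buildGiftWrap(seal: unknown, createdAt: number): unknown;
declare const content: string;

const now = Math.floor(Date.now() / 1000);
const TWO_DAYS = 2 * 24 * 60 * 60;
const randomizedPast = () => now - Math.floor(Math.random() * TWO_DAYS);

const rumor = buildRumor(content, now);              // canonical message time
const seal = buildSeal(rumor, randomizedPast());     // randomized for privacy
const wrap = buildGiftWrap(seal, randomizedPast());  // randomized for privacy
```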

Comment on lines +54 to +71
| Android | 13 ✅ | Sender privkey ✅ | NIP-44 ✅ | Randomized ✅ | ⚠️ NO (VIOLATION) |
| Go | 13 ✅ | Sender privkey ✅ | NIP-44 ✅ | Randomized ✅ | ✅ |
| Python | 13 ✅ | Sender keys ✅ | NIP-44 ✅ | Randomized ✅ | ✅ |
| React Native | 13 ✅ | Sender privkey ✅ | NIP-44 ✅ | Randomized ✅ | ✅ |
| Dart | 13 ✅ | Sender privkey ✅ | NIP-44 ✅ | Randomized ✅ | ✅ |
| Electron | 13 ✅ | Sender privkey ✅ | NIP-44 ✅ | Randomized ✅ | ✅ |

**⚠️ SPEC VIOLATION (Android):** NIP-59 says seal "tags MUST be empty". Android adds expiration tags to the seal when `expirationSeconds` is set.

**Location:** `android/.../Nip17PayloadBuilder.kt:173-175`
```kotlin
val sealTags = buildList {
    expirationSeconds?.let { add(listOf("expiration", it.toString())) }
}
```

Expiration tags should only be on the gift wrap, not the seal.


⚠️ Potential issue | 🟡 Minor

Same documentation timing concern as rumor section.

Line 269 marks Android seal tags issue as fixed, but the table (line 54) still shows "⚠️ NO (VIOLATION)". The code snippet on lines 64-68 is helpful for understanding the issue, but should be marked as "previous implementation" if already remediated.

🤖 Prompt for AI Agents
In `@rust/docs/NIP17_NIP59_AUDIT.md` around lines 54 - 71, Update the doc to
reflect that the Android seal-tags issue has been remediated: change the Android
row status in the compatibility table from "⚠️ NO (VIOLATION)" to a passing
indicator (e.g., "✅"), remove or update the “⚠️ SPEC VIOLATION (Android)”
header, and relocate the Kotlin snippet that shows the old behavior (the
buildList block using sealTags and expirationSeconds from
Nip17PayloadBuilder.kt) into a clearly labeled "previous implementation / fixed"
section noting that expiration tags should only be applied to the gift wrap (not
the seal).
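
A sketch of the corrected tag placement (TypeScript here for consistency; the cited code is Kotlin, and the builder signatures are assumptions): the seal carries no tags, and the expiration tag rides only on the gift wrap.

```ts
declare function buildSeal(rumor: unknown, tags: string[][]): unknown;
declare function buildGiftWrap(seal: unknown, tags: string[][]): unknown;
declare const rumor: unknown;
declare const recipientPubkey: string;
declare const expirationSeconds: number | undefined;

const now = Math.floor(Date.now() / 1000);

const seal = buildSeal(rumor, []); // NIP-59: seal tags MUST be empty

const wrapTags: string[][] = [["p", recipientPubkey]];
if (expirationSeconds !== undefined) {
  // Expiration belongs on the outermost (gift-wrap) event only.
  wrapTags.push(["expiration", String(now + expirationSeconds)]);
}
const wrap = buildGiftWrap(seal, wrapTags);
```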

Comment on lines +948 to +951
// Decompress reassembled data
let payload_str = String::from_utf8_lossy(&reassembled);
let decompressed = decompress_payload(&payload_str)
    .unwrap_or_else(|_| payload_str.to_string());

⚠️ Potential issue | 🟡 Minor

Lossy UTF-8 conversion may silently corrupt crash data.

String::from_utf8_lossy replaces invalid UTF-8 sequences with replacement characters (�). If the reassembled payload contains binary data or encoding issues, this silently corrupts the content.

Consider using String::from_utf8 with explicit error handling:

🐛 Proposed fix
-                let payload_str = String::from_utf8_lossy(&reassembled);
-                let decompressed = decompress_payload(&payload_str)
-                    .unwrap_or_else(|_| payload_str.to_string());
+                let payload_str = match String::from_utf8(reassembled.clone()) {
+                    Ok(s) => s,
+                    Err(e) => {
+                        eprintln!("{} Invalid UTF-8 in reassembled payload: {}", "⚠".yellow(), e);
+                        String::from_utf8_lossy(&reassembled).to_string()
+                    }
+                };
+                let decompressed = decompress_payload(&payload_str)
+                    .unwrap_or_else(|_| payload_str);
🤖 Prompt for AI Agents
In `@rust/src/bin/main.rs` around lines 948 - 951, The code uses
String::from_utf8_lossy(&reassembled) which can silently replace invalid bytes;
change to use String::from_utf8(reassembled.clone()) and explicitly handle the
Err case: try to convert reassembled into a UTF-8 String with String::from_utf8
and on Ok pass that to decompress_payload(&payload_str); on Err do not silently
lossy-convert — instead add explicit handling such as calling a new
decompress_payload_bytes(&reassembled) (or change decompress_payload to accept
&[u8]) to handle binary data, and/or log the conversion error before any
fallback; update references to payload_str, decompressed, and decompress_payload
accordingly.

@alltheseas
Owner Author

@mmalmi I applied your hashtree-rs work to splitting larger crash reports into small enough (compressed) chunks via NIP-17.

What do you think?
