
Node v24.14.0 nsolid v6.2.1 release #426

Open
santigimeno wants to merge 441 commits into node-v24.x-nsolid-v6.x from node-v24.14.0-nsolid-v6.2.1-release

Conversation


@santigimeno santigimeno commented Mar 2, 2026

Summary by CodeRabbit

  • New Features

    • Upgraded tooling to Python 3.14 across configs and CI.
    • Added multiple new benchmarks (HTTP parser fragmentation, ESM import, SQLite, URL inspection, TextDecoder streaming).
    • Improved Android build compatibility and devcontainer auto environment setup.
  • Bug Fixes

    • Corrected base64 path in buffer byteLength benchmark and standardized boolean handling in benchmarks.
  • Documentation

    • Major BUILDING guide updates (especially Windows), expanded LICENSE, updated CHANGELOG, typo fixes in templates.
  • Chores

    • CI modernized: slimmer/ARM runners, updated actions, conditional sccache, sparse checkout.
    • Labeling, Dependabot cadence, and workflow refinements.
    • Repository housekeeping (.gitattributes, .gitignore, lint rules).

dependabot bot and others added 30 commits January 19, 2026 19:19
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 4.31.6 to 4.31.9.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](github/codeql-action@fe4161a...5d4e8d1)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 4.31.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
PR-URL: nodejs/node#61241
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Ulises Gascón <ulisesgascongonzalez@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Bumps [actions/cache](https://github.com/actions/cache) from 4.3.0 to 5.0.1.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](actions/cache@0057852...9255dc7)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: 5.0.1
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
PR-URL: nodejs/node#61243
Reviewed-By: Antoine du Hamel <duhamelantoine1995@gmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Ulises Gascón <ulisesgascongonzalez@gmail.com>
Reviewed-By: Rafael Gonzaga <rafael.nunu@hotmail.com>
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 6.0.0 to 6.1.0.
- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](actions/setup-node@2028fbc...395ad32)

---
updated-dependencies:
- dependency-name: actions/setup-node
  dependency-version: 6.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
PR-URL: nodejs/node#61244
Reviewed-By: Antoine du Hamel <duhamelantoine1995@gmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Ulises Gascón <ulisesgascongonzalez@gmail.com>
Bumps [step-security/harden-runner](https://github.com/step-security/harden-runner) from 2.13.2 to 2.14.0.
- [Release notes](https://github.com/step-security/harden-runner/releases)
- [Commits](step-security/harden-runner@95d9a5d...20cf305)

---
updated-dependencies:
- dependency-name: step-security/harden-runner
  dependency-version: 2.14.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
PR-URL: nodejs/node#61245
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Ulises Gascón <ulisesgascongonzalez@gmail.com>
Bumps the eslint group in /tools/eslint with 2 updates: [eslint](https://github.com/eslint/eslint) and [eslint-plugin-jsdoc](https://github.com/gajus/eslint-plugin-jsdoc).

Updates `eslint` from 9.39.1 to 9.39.2
- [Release notes](https://github.com/eslint/eslint/releases)
- [Commits](eslint/eslint@v9.39.1...v9.39.2)

Updates `eslint-plugin-jsdoc` from 61.4.1 to 61.5.0
- [Release notes](https://github.com/gajus/eslint-plugin-jsdoc/releases)
- [Commits](gajus/eslint-plugin-jsdoc@v61.4.1...v61.5.0)

---
updated-dependencies:
- dependency-name: eslint
  dependency-version: 9.39.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: eslint
- dependency-name: eslint-plugin-jsdoc
  dependency-version: 61.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: eslint
...

Signed-off-by: dependabot[bot] <support@github.com>
PR-URL: nodejs/node#61246
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Antoine du Hamel <duhamelantoine1995@gmail.com>
This helps with diffing snapshots when reproducibility breaks.

PR-URL: nodejs/node#61101
Reviewed-By: Anna Henningsen <anna@addaleax.net>
Reviewed-By: Chengzhong Wu <legendecas@gmail.com>
PR-URL: nodejs/node#61250
Fixes: nodejs/node#61018
Reviewed-By: Yagiz Nizipli <yagiz@nizipli.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Security release automation requires that all backport commits
include PR-URL metadata in their commit messages.

Signed-off-by: RafaelGSS <rafael.nunu@hotmail.com>
PR-URL: nodejs/node#61256
Refs: nodejs/node-core-utils#1022
Reviewed-By: Chengzhong Wu <legendecas@gmail.com>
Reviewed-By: Marco Ippolito <marcoippolito54@gmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Aviv Keller <me@aviv.sh>
PR-URL: nodejs/node#60511
Reviewed-By: Aviv Keller <me@aviv.sh>
This requires setting BUILDING_V8_PLATFORM_SHARED to expose the
public symbols since we build with hidden visibility by default.

PR-URL: nodejs/node#61144
Fixes: nodejs/node#61102
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Chengzhong Wu <legendecas@gmail.com>
Reviewed-By: Richard Lau <richard.lau@ibm.com>
PR-URL: nodejs/node#61260
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Aviv Keller <me@aviv.sh>
Reviewed-By: Rafael Gonzaga <rafael.nunu@hotmail.com>
PR-URL: nodejs/node#58264
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Antoine du Hamel <duhamelantoine1995@gmail.com>
PR-URL: nodejs/node#61255
Reviewed-By: Marco Ippolito <marcoippolito54@gmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Ruy Adorno <ruy@vlt.sh>
Reviewed-By: Ulises Gascón <ulisesgascongonzalez@gmail.com>
PR-URL: nodejs/node#61270
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Aviv Keller <me@aviv.sh>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Rafael Gonzaga <rafael.nunu@hotmail.com>
PR-URL: nodejs/node#61271
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Aviv Keller <me@aviv.sh>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Rafael Gonzaga <rafael.nunu@hotmail.com>
PR-URL: nodejs/node#61267
Reviewed-By: Gireesh Punathil <gpunathi@in.ibm.com>
Reviewed-By: Antoine du Hamel <duhamelantoine1995@gmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Rafael Gonzaga <rafael.nunu@hotmail.com>
PR-URL: nodejs/node#61297
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Antoine du Hamel <duhamelantoine1995@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Rafael Gonzaga <rafael.nunu@hotmail.com>
Reviewed-By: Gireesh Punathil <gpunathi@in.ibm.com>
PR-URL: nodejs/node#61273
Refs: nodejs/node#58378
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
resolves #53514

PR-URL: nodejs/node#53641
Reviewed-By: Aviv Keller <me@aviv.sh>
PR-URL: nodejs/node#61090
Refs: nodejs/node#60862
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Chengzhong Wu <legendecas@gmail.com>
PR-URL: nodejs/node#60825
Reviewed-By: Chengzhong Wu <legendecas@gmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Chemi Atlow <chemi@atlow.co.il>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Jordan Harband <ljharb@gmail.com>
Reviewed-By: LiviaMedeiros <livia@cirno.name>
PR-URL: nodejs/node#60726
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Anna Henningsen <anna@addaleax.net>
PR-URL: nodejs/node#60759
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
PR-URL: nodejs/node#60761
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
PR-URL: nodejs/node#61004
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: René <contact.9a5d6388@renegade334.me.uk>
PR-URL: nodejs/node#60906
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
PR-URL: nodejs/node#61062
Reviewed-By: Moshe Atlow <moshe@atlow.co.il>
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Benjamin Gruenbaum <benjamingr@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
PR-URL: nodejs/node#61135
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Rafael Gonzaga <rafael.nunu@hotmail.com>
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Gürgün Dayıoğlu <hey@gurgun.day>
PR-URL: nodejs/node#61321
Reviewed-By: Aviv Keller <me@aviv.sh>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
thisalihassan and others added 25 commits February 22, 2026 23:16
When binding UTF-8 strings to prepared statements, transfer ownership of
malloc-backed Utf8Value buffers to SQLite to avoid an extra copy for
large strings. Use sqlite3_bind_blob64() when binding BLOB parameters.

PR-URL: nodejs/node#61580
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Edy Silva <edigleyssonsilva@gmail.com>
Reviewed-By: René <contact.9a5d6388@renegade334.me.uk>
Reviewed-By: Zeyu "Alex" Yang <himself65@outlook.com>
PR-URL: nodejs/node#61696
Reviewed-By: Yagiz Nizipli <yagiz@nizipli.com>
Reviewed-By: Stephen Belanger <admin@stephenbelanger.com>
Reviewed-By: Daniel Lemire <daniel@lemire.me>
Reviewed-By: Vinícius Lourenço Claro Cardoso <contact@viniciusl.com.br>
Reviewed-By: Сковорода Никита Андреевич <chalkerx@gmail.com>
Reviewed-By: Gürgün Dayıoğlu <hey@gurgun.day>
Reviewed-By: Anna Henningsen <anna@addaleax.net>
Reviewed-By: René <contact.9a5d6388@renegade334.me.uk>
PR-URL: nodejs/node#61659
Reviewed-By: Ethan Arrowood <ethan@arrowood.dev>
Reviewed-By: Jacob Smith <jacob@frende.me>
Reviewed-By: Moshe Atlow <moshe@atlow.co.il>
Reviewed-By: Pietro Marchini <pietro.marchini94@gmail.com>
Reviewed-By: Chemi Atlow <chemi@atlow.co.il>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Tierney Cyren <hello@bnb.im>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
`require(mod)` does not keep mod in the require cache if
mod throws synchronously. This fixes the tests to print
the stack immediately, since the source map cache could
otherwise be cleaned up when the CJS module is reclaimed
by GC on the next event loop tick.

PR-URL: nodejs/node#61699
Reviewed-By: Anna Henningsen <anna@addaleax.net>
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Minwoo Jung <nodecorelab@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Add documentation explaining that applications are expected to attach
'error' event handlers to EventEmitters that can emit errors, including
HTTP streams. Crashes resulting from missing error handlers are not
considered denial-of-service vulnerabilities in Node.js.

PR-URL: nodejs/node#61701
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Anna Henningsen <anna@addaleax.net>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Rafael Gonzaga <rafael.nunu@hotmail.com>
Reviewed-By: Chengzhong Wu <legendecas@gmail.com>
Reviewed-By: Ulises Gascón <ulisesgascongonzalez@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Benjamin Gruenbaum <benjamingr@gmail.com>
Small perf improvement:

    $ node benchmark/compare.js --runs 100 \
    >   --old ./node-0da120f879 --new ./node \
    >   --filter structured-clone misc > comparison.csv && \
    >   npx node-benchmark-compare comparison.csv

    [00:02:15|% 100| 1/1 files | 200/200 runs | 3/3 configs]: Done
                                                        confidence improvement accuracy (*)   (**)  (***)
    misc/structured-clone.js n=10000 type='arraybuffer'         **      1.81 %       ±1.28% ±1.68% ±2.16%
    misc/structured-clone.js n=10000 type='object'               *      0.62 %       ±0.55% ±0.73% ±0.93%
    misc/structured-clone.js n=10000 type='string'             ***      8.30 %       ±1.46% ±1.92% ±2.47%

    Be aware that when doing many comparisons the risk of a false-positive result increases.
    In this case, there are 3 comparisons, you can thus expect the following amount of false-positive results:
      0.15 false positives, when considering a   5% risk acceptance (*, **, ***),
      0.03 false positives, when considering a   1% risk acceptance (**, ***),
      0.00 false positives, when considering a 0.1% risk acceptance (***)
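The expected false-positive counts quoted above are simply the number of comparisons multiplied by the chosen risk level:

```javascript
// 3 comparisons at each significance level (alpha) gives the
// expected number of false positives reported by the tool.
const comparisons = 3;
const expected = [0.05, 0.01, 0.001].map(
  (alpha) => (comparisons * alpha).toFixed(2),
);
console.log(expected.join(', ')); // 0.15, 0.03, 0.00
```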

PR-URL: nodejs/node#61703
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Yagiz Nizipli <yagiz@nizipli.com>
PR-URL: nodejs/node#61732
Reviewed-By: Moshe Atlow <moshe@atlow.co.il>
Reviewed-By: Antoine du Hamel <duhamelantoine1995@gmail.com>
Reviewed-By: Michaël Zasso <targos@protonmail.com>
PR-URL: nodejs/node#61730
Reviewed-By: Antoine du Hamel <duhamelantoine1995@gmail.com>
Reviewed-By: Ulises Gascón <ulisesgascongonzalez@gmail.com>
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
PR-URL: nodejs/node#61685
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Gürgün Dayıoğlu <hey@gurgun.day>
PR-URL: nodejs/node#61748
Reviewed-By: Moshe Atlow <moshe@atlow.co.il>
Reviewed-By: Antoine du Hamel <duhamelantoine1995@gmail.com>
PR-URL: nodejs/node#61279
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Marco Ippolito <marcoippolito54@gmail.com>
Reviewed-By: Chengzhong Wu <legendecas@gmail.com>
PR-URL: nodejs/node#61280
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Marco Ippolito <marcoippolito54@gmail.com>
PR-URL: nodejs/node#61341
Reviewed-By: Yagiz Nizipli <yagiz@nizipli.com>
Reviewed-By: Aviv Keller <me@aviv.sh>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
PR-URL: nodejs/node#61456
Reviewed-By: Guy Bedford <guybedford@gmail.com>
Reviewed-By: Daniel Lemire <daniel@lemire.me>
Reviewed-By: Richard Lau <richard.lau@ibm.com>
To reduce cache thrashing.

PR-URL: nodejs/node#61790
Refs: nodejs/node#61436
Reviewed-By: Marco Ippolito <marcoippolito54@gmail.com>
Reviewed-By: Tierney Cyren <hello@bnb.im>
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Antoine du Hamel <duhamelantoine1995@gmail.com>
PR-URL: nodejs/node#61742
Reviewed-By: Filip Skokan <panva.ip@gmail.com>
Reviewed-By: Richard Lau <richard.lau@ibm.com>
PR-URL: nodejs/node#61734
Reviewed-By: René <contact.9a5d6388@renegade334.me.uk>
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Rafael Gonzaga <rafael.nunu@hotmail.com>
PR-URL: nodejs/node#61759
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
PR-URL: nodejs/node#61899
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Chengzhong Wu <legendecas@gmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
PR-URL: nodejs/node#61903
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
PR-URL: nodejs/node#60727
Backport-PR-URL: nodejs/node#61840
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Yagiz Nizipli <yagiz@nizipli.com>
Reviewed-By: Stefan Stojanovic <stefan.stojanovic@janeasystems.com>
PR-URL: nodejs/node#61530
Backport-PR-URL: nodejs/node#61840
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Richard Lau <richard.lau@ibm.com>
Notable changes:

async_hooks:
  * (SEMVER-MINOR) add `trackPromises` option to `createHook()` (Joyee Cheung) nodejs/node#61415
build,deps:
  * replace cjs-module-lexer with merve (Yagiz Nizipli) nodejs/node#61456
deps:
  * (SEMVER-MINOR) add LIEF as a dependency (Joyee Cheung) nodejs/node#61167
events:
  * (SEMVER-MINOR) repurpose `events.listenerCount()` to accept `EventTarget`s (René) nodejs/node#60214
fs:
  * (SEMVER-MINOR) add `ignore` option to `fs.watch` (Matteo Collina) nodejs/node#61433
http:
  * (SEMVER-MINOR) add `http.setGlobalProxyFromEnv()` (Joyee Cheung) nodejs/node#60953
module:
  * (SEMVER-MINOR) allow subpath imports that start with `#/` (Jan Martin) nodejs/node#60864
process:
  * (SEMVER-MINOR) preserve `AsyncLocalStorage` in `queueMicrotask` only when needed (Gürgün Dayıoğlu) nodejs/node#60913
sea:
  * (SEMVER-MINOR) split sea binary manipulation code (Joyee Cheung) nodejs/node#61167
sqlite:
  * (SEMVER-MINOR) enable defensive mode by default (Bart Louwers) nodejs/node#61266
  * (SEMVER-MINOR) add sqlite prepare options args (Guilherme Araújo) nodejs/node#61311
src:
  * (SEMVER-MINOR) add initial support for ESM in embedder API (Joyee Cheung) nodejs/node#61548
stream:
  * (SEMVER-MINOR) add `bytes()` method to `node:stream/consumers` (wantaek) nodejs/node#60426
  * (SEMVER-MINOR) do not pass `readable.compose()` output via `Readable.from()` (René) nodejs/node#60907
test:
  * (SEMVER-MINOR) use fixture directories for sea tests (Joyee Cheung) nodejs/node#61167
test_runner:
  * (SEMVER-MINOR) add `env` option to `run` function (Ethan Arrowood) nodejs/node#61367
  * (SEMVER-MINOR) support expecting a test-case to fail (Jacob Smith) nodejs/node#60669
util:
  * (SEMVER-MINOR) add `convertProcessSignalToExitCode` utility (Erick Wendel) nodejs/node#60963

PR-URL: nodejs/node#61924
2026-02-24 Node.js v24.14.0 Krypton (LTS) Release
Git-EVTag-v0-SHA512: f02aae3385cab64e189bcb6307365e4b519b0d1142e42a4ab968e0240356a8ccbca01a8e59600e06d3a6f92369be5b93d6c233b05ad843db548fba05ef7eb68e
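One item in the notable changes above worth illustrating is the `#/` subpath-imports change. A package.json sketch (the wildcard mapping shape follows the existing `imports` syntax; the `./src` target is illustrative):

```json
{
  "imports": {
    "#/*": "./src/*"
  }
}
```

With a mapping like this, `require('#/util.js')` inside the package should resolve to `./src/util.js`; before this change, `imports` keys beginning with `#/` were rejected.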
@santigimeno santigimeno requested a review from RafaelGSS March 2, 2026 09:58
@santigimeno santigimeno self-assigned this Mar 2, 2026

coderabbitai bot commented Mar 2, 2026

Walkthrough

Broad updates across the repo: Python references moved to 3.14, CI runners/actions bumped and gated with conditional sccache, new devcontainer post-start, lint exclusions expanded, docs/licenses updated, Makefile quoting/OUT_NODE introduced, Android/V8 trap handler patch changes, benchmark additions/edits, and substantial addition of LIEF headers (ART/ASM/Abstract/BinaryStream/COFF/DEX/DWARF/DyldSharedCache/ELF).

Changes

Cohort / File(s) Summary
Configurations (DSC)
.configurations/configuration.dsc.yaml, .configurations/configuration.vsEnterprise.dsc.yaml, .configurations/configuration.vsProfessional.dsc.yaml
Bumped configurationVersion to 0.1.1; Python install description/id updated from 3.12 to 3.14.
Devcontainer & Lint/Attrs
.devcontainer/base/devcontainer.json, .cpplint, .gitattributes, .gitignore
Added postStartCommand (direnv/envrc); added cpplint filters; disabled EOL normalization for vendored crates; un-ignored deps/LIEF/**.
CI Workflows (GitHub Actions)
.github/workflows/*
Switched many jobs to ubuntu-slim or ubuntu-24.04-arm; updated actions (checkout/setup-node/setup-python/codeql/codecov/upload-artifact); introduced conditional sccache on main; adjusted test steps (configure/test-ci, -j1); minor job renames/notification logic changes.
Repository Meta
.github/ISSUE_TEMPLATE/config.yml, .github/dependabot.yml, .github/label-pr-config.yml
Typos fixed; removed github-actions cooldowns; label mapping tweaks (npm fast-track removed; V8 labels added).
Build Scripts
Makefile, configure, configure.py, android-configure, android-patches/trap-handler.h.patch, common.gypi
Quoting/OUT_NODE path usage; added Python 3.14 detection; new shared-lib CLI options and JSON config emission; Android Python version checks; trap handler support logic adjusted; V8 embedder string and GCC version detection updated.
Benchmarks
benchmark/...
Added multiple new benchmarks (ESM import, HTTP parser fragmentation, internal util, sqlite, URLSearchParams, TextDecoder stream); boolean config handling updates; fixed missing return in buffer bytelength; removed esm/cjs-parse.js; minor N-API message edits and N-API call rename.
Documentation & Licensing
BUILDING.md, CHANGELOG.md, LICENSE
Expanded Windows build docs and branding updates; changelog index adjusted; large license additions/replacements including LIEF and others.
LIEF: Config & Docs
deps/LIEF/LICENSE, deps/LIEF/README.md, deps/LIEF/config/mbedtls/config.h
Added LIEF license/README; mbedtls config include and AESNI undef on x86.
LIEF: Core Headers (Aggregate)
deps/LIEF/include/... (ART, ASM, Abstract, BinaryStream, COFF, DEX, DWARF, DyldSharedCache, ELF, utilities)
Introduces extensive public headers for multiple formats and utilities: new types, parsers, visitors, hash/json utilities, editor APIs, stream abstractions, and note/property structures. Pure declarations; no implementations in this diff.

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

Suggested reviewers

  • RafaelGSS

Poem

In burrows of code I hop and weave,
New headers sprout like springtime leaves.
CI clouds shift, slim and bright,
Python bumps to 3.14’s light.
LIEF arrives with formats grand—
Thump-thump! I stamp, review in hand. 🐇✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 9.09%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the title concisely and accurately identifies the primary change: Node v24.14.0 release with N|Solid v6.2.1.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 19

Note

Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.github/workflows/lint-release-proposal.yml (1)

33-36: ⚠️ Potential issue | 🟡 Minor

Use the correct variable in commit-title diagnostics.

At Line 35, $ACTUAL is unset; this prints an empty value and makes failures harder to debug.

💡 Suggested fix
-          echo "Actual: $ACTUAL"
+          echo "Actual: $COMMIT_SUBJECT"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/lint-release-proposal.yml around lines 33 - 36, The
workflow prints an unset variable ACTUAL instead of the actual commit subject;
update the diagnostics to use COMMIT_SUBJECT (or assign ACTUAL=COMMIT_SUBJECT)
so the "Actual:" line shows the real commit title; change the echo that
references $ACTUAL to echo the COMMIT_SUBJECT variable to aid debugging when the
grep check against EXPECTED_TITLE fails.
🟡 Minor comments (11)
benchmark/internal/util_isinsidenodemodules.js-45-48 (1)

45-48: ⚠️ Potential issue | 🟡 Minor

Handle non-positive depth to prevent runaway recursion.

If depth becomes 0 or negative (e.g., custom stackCount), Line 48 never reaches the base case and recurses until stack overflow.

Proposed fix
 function nestCallStack(depth, callback) {
   // nestCallStack(1) already adds a stack frame, so we stop at 1.
-  if (depth === 1) {
+  if (depth <= 1) {
     return callback();
   }
   return nestCallStack(depth - 1, callback);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benchmark/internal/util_isinsidenodemodules.js` around lines 45 - 48, The
recursion base case in nestCallStack only checks for depth === 1, so
non-positive depth (0 or negative from a custom stackCount) causes runaway
recursion; update the base-case check in nestCallStack to handle non-positive
values (e.g., use if (depth <= 1) return callback();) and ideally
validate/sanitize the incoming depth/stackCount before recursing to ensure
non-positive inputs short-circuit instead of recursing indefinitely.
deps/LIEF/include/LIEF/Abstract/DebugInfo.hpp-53-59 (1)

53-59: ⚠️ Potential issue | 🟡 Minor

Fix inconsistent downcast example and assertion message.

The example references LIEF::Instruction/inst and find_function, which don’t match this API. The static_assert message also references Instruction instead of DebugInfo.

✏️ Proposed fix
-  /// std::unique_ptr<LIEF::Instruction> dbg = bin->debug_info();
-  /// if (const auto* dwarf = inst->as<LIEF::dwarf::DebugInfo>()) {
-  ///   dwarf->find_function("main");
+  /// std::unique_ptr<LIEF::DebugInfo> dbg = bin->debug_info();
+  /// if (const auto* dwarf = dbg->as<LIEF::dwarf::DebugInfo>()) {
+  ///   dwarf->find_function_address("main");
   /// }
@@
-                  "Require Instruction inheritance");
+                  "Require DebugInfo inheritance");

Also applies to: 63-64

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/Abstract/DebugInfo.hpp` around lines 53 - 59, The
downcast example in DebugInfo.hpp and the static_assert text are inconsistent:
replace the incorrect references to Instruction/inst/find_function with a
correct DebugInfo example (e.g., show obtaining a DebugInfo pointer via
as<LIEF::debug::DebugInfo>() and calling an appropriate DebugInfo method), and
update the static_assert message to mention DebugInfo (not Instruction); locate
and fix the example block and the static_assert string in the DebugInfo class
(symbols: DebugInfo, as<...>(), and the static_assert) and make the example and
assertion wording consistent with the DebugInfo API.
benchmark/util/text-decoder-stream.js-25-40 (1)

25-40: ⚠️ Potential issue | 🟡 Minor

Add a default case to fail fast on invalid type

Without a default branch, an unexpected type leaves buf undefined and fails later at Line 48 with a less actionable error.

Proposed fix
   switch (type) {
@@
     case 'Buffer': {
       buf = Buffer.alloc(len, fill);
       break;
     }
+    default:
+      throw new Error(`Unsupported type: ${type}`);
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benchmark/util/text-decoder-stream.js` around lines 25 - 40, The switch on
the variable `type` that creates `buf` (handling 'SharedArrayBuffer',
'ArrayBuffer', 'Buffer') lacks a default branch causing `buf` to remain
undefined for unexpected values; add a `default` case that throws a clear error
(e.g., TypeError or Error) including the invalid `type` value so the code fails
fast and surfaces the root cause instead of erroring later when `buf` is used.
deps/LIEF/include/LIEF/ART/java_structures.hpp-300-304 (1)

300-304: ⚠️ Potential issue | 🟡 Minor

Fix inconsistent version/type comments to prevent mapping errors.

The comments around Line 363 and Line 367 conflict with the namespace (ART_56), and Line 303 has a chat16_t typo. These are easy-to-miss sources of future maintenance mistakes.

🧹 Suggested comment cleanup
-// count[0] (LSB) == 0 ----> chat16_t
+// count[0] (LSB) == 0 ----> char16_t

-// Android 9.0.0 - ART 66
+// Android 9.0.0 - ART 56
 namespace ART_56 {

-/// Namespace related to the Java part of ART 46
+/// Namespace related to the Java part of ART 56
 namespace Java {

Also applies to: 363-368

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/ART/java_structures.hpp` around lines 300 - 304, Fix
the inconsistent and typo'd comments around the jstring structure: correct
"chat16_t" to "char16_t", remove the duplicated "as" in "encoded as as
char16_t", and clarify the count[0] LSB mapping (e.g., "count[0] (LSB) == 1 ->
compressed (char), == 0 -> char16_t") so the meaning is unambiguous; also align
the version/type comments near the ART_56 namespace and the nearby block (lines
referenced around the jstring/template<class T = no_brooks_read_barrier_t> and
the ART_56 namespace) so they consistently reflect ART_56 semantics and avoid
conflicting wording across the ranges mentioned (303 and 363–368).
deps/LIEF/include/LIEF/DWARF/editor/Variable.hpp-39-40 (1)

39-40: ⚠️ Potential issue | 🟡 Minor

Fix typo in public API documentation.

revelant should be relevant.

Suggested fix
-  /// revelant in the case of a static global variable. For stack variable, you
+  /// relevant in the case of a static global variable. For stack variable, you
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/DWARF/editor/Variable.hpp` around lines 39 - 40, Typo
in the public API docstring: change "revelant" to "relevant" in the comment
above the Variable class (or declaration) that references set_stack_offset;
update the documentation comment in Variable.hpp so the sentence reads
"...relevant in the case of a static global variable..." to keep public API docs
correct.
deps/LIEF/include/LIEF/DEX/instructions.hpp-352-353 (1)

352-353: ⚠️ Potential issue | 🟡 Minor

Fix swapped namespace end comments.

Line 352 closes LIEF::DEX and Line 353 closes LIEF, but the trailing comments are reversed.

Suggested patch
-} // Namespace LIEF
-} // Namespace DEX
+} // Namespace DEX
+} // Namespace LIEF
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/DEX/instructions.hpp` around lines 352 - 353, The
closing namespace comments are reversed—swap the trailing comments on the two
closing braces so the first closing brace is labeled "Namespace DEX" and the
final closing brace is labeled "Namespace LIEF"; update the end-comment text
adjacent to the braces that close the nested namespaces (matching the namespace
declarations for LIEF and DEX) so the comments accurately reflect the closed
namespaces.
deps/LIEF/include/LIEF/ELF/NoteDetails/properties/Needed.hpp-30-32 (1)

30-32: ⚠️ Potential issue | 🟡 Minor

Add a null guard in classof.

classof dereferences prop unconditionally; guard it to avoid null-pointer crashes in tolerant call paths.

Suggested patch
   static bool classof(const NoteGnuProperty::Property* prop) {
-    return prop->type() == NoteGnuProperty::Property::TYPE::NEEDED;
+    return prop != nullptr &&
+           prop->type() == NoteGnuProperty::Property::TYPE::NEEDED;
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/ELF/NoteDetails/properties/Needed.hpp` around lines 30
- 32, The classof helper in Needed.hpp dereferences prop unconditionally; update
NoteGnuProperty::properties::Needed::classof (the static bool classof(const
NoteGnuProperty::Property* prop)) to first check that prop is not null and
return false when it is, before evaluating prop->type() ==
NoteGnuProperty::Property::TYPE::NEEDED; this avoids null-pointer crashes in
tolerant call paths while preserving the existing type check for non-null
pointers.
deps/LIEF/include/LIEF/ELF/NoteDetails/QNXStack.hpp-68-68 (1)

68-68: ⚠️ Potential issue | 🟡 Minor

Fix typo in closing namespace comment.

At Line 68, namepsace should be namespace.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/ELF/NoteDetails/QNXStack.hpp` at line 68, The closing
inline comment for the ELF namespace has a typo: "namepsace ELF"; update that
comment to "namespace ELF" so the closing comment for the namespace matches the
correct spelling and convention (look for the closing bracket and comment after
the namespace ELF block in QNXStack.hpp).
deps/LIEF/include/LIEF/ELF/NoteDetails/core/CoreSigInfo.hpp-65-65 (1)

65-65: ⚠️ Potential issue | 🟡 Minor

Fix typo in namespace closing comment.

Line 65 uses namepsace instead of namespace.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/ELF/NoteDetails/core/CoreSigInfo.hpp` at line 65, Fix
the typo in the namespace closing comment in CoreSigInfo.hpp by changing the
comment that reads "} // namepsace ELF" to the correct "} // namespace ELF" so
the trailing comment matches the actual namespace name (file contains the
closing brace for namespace ELF near the end of CoreSigInfo.hpp).
deps/LIEF/include/LIEF/DEX/MapItem.hpp-69-69 (1)

69-69: ⚠️ Potential issue | 🟡 Minor

Fix typo in API comment text.

Line 69 says prupose; it should be purpose.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/DEX/MapItem.hpp` at line 69, Fix the typo in the API
comment within the MapItem declaration: change the comment text "Reserved value
(likely for alignment prupose)" to "Reserved value (likely for alignment
purpose)"; locate the comment associated with the MapItem class in MapItem.hpp
(symbol: MapItem) and update the spelling only.
deps/LIEF/include/LIEF/DyldSharedCache/MappingInfo.hpp-84-86 (1)

84-86: ⚠️ Potential issue | 🟡 Minor

Guard Line 85 against overflow in end_address().

address() + size() can wrap for large values and return an invalid lower end address.

Suggested hardening
+#include <limits>
@@
   uint64_t end_address() const {
-    return address() + size();
+    const uint64_t start = address();
+    const uint64_t len = size();
+    if (len > (std::numeric_limits<uint64_t>::max() - start)) {
+      return std::numeric_limits<uint64_t>::max();
+    }
+    return start + len;
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/DyldSharedCache/MappingInfo.hpp` around lines 84 - 86,
The end_address() method currently returns address() + size() which can
overflow; update MappingInfo::end_address() to detect overflow by comparing
size() to (std::numeric_limits<uint64_t>::max() - address()) and handle the case
(e.g., return std::numeric_limits<uint64_t>::max() or signal an error) instead
of doing the unchecked addition; reference the existing end_address(),
address(), and size() methods when making the change.
🧹 Nitpick comments (17)
benchmark/util/deprecate.js (1)

9-9: Consider making emitWarningSync consistent with modifyPrototype.

For uniformity, emitWarningSync could also use [true, false] instead of [1, 0], unless the underlying API specifically expects numeric values.

♻️ Optional consistency fix
-  emitWarningSync: [1, 0],
+  emitWarningSync: [true, false],
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benchmark/util/deprecate.js` at line 9, The entry emitWarningSync in the
deprecate config uses numeric flags [1, 0] while modifyPrototype uses boolean
flags; update emitWarningSync to use [true, false] for consistency (i.e.,
replace [1, 0] with [true, false] in the emitWarningSync array) and verify any
consumer code that reads emitWarningSync (search for references to
emitWarningSync) accepts booleans—if a consumer expects numeric values, adjust
that consumer (or coerce) so the new booleans are handled correctly.
benchmark/napi/function_call/index.js (1)

27-27: Terminology update looks good; consider using __filename for the path.

The "Node-API binding" terminology is correct. However, the hardcoded path misc/function_call/index.js doesn't match the actual file location (benchmark/napi/function_call/index.js). For consistency with benchmark/napi/function_args/index.js, consider using __filename.

🔧 Suggested improvement
-  console.error('misc/function_call/index.js Node-API binding failed to load');
+  console.error(`${__filename}: Node-API binding failed to load`);

This would also be a good opportunity to fix line 18 similarly:

console.error(`${__filename}: V8 Binding failed to load`);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benchmark/napi/function_call/index.js` at line 27, Replace the hardcoded path
string in the console.error call with __filename so the message reports the
actual file path; specifically update the console.error invocation that
currently logs 'misc/function_call/index.js Node-API binding failed to load' to
use a template including __filename and the same descriptive text (e.g.,
`${__filename}: Node-API binding failed to load`), and apply the same pattern to
the similar console.error that logs the V8 binding failure (replace its literal
path with `${__filename}: V8 Binding failed to load`) so both messages use the
runtime filename.
benchmark/esm/import-cjs.js (1)

31-31: Clarify the warm-up comment on Line 31.

The comment mentions “wasm”, but this benchmark warms module import/cache behavior for CJS fixtures. Rewording the comment avoids confusion.

✏️ Suggested wording update
-    await import(tmpdir.fileURL(`imported-cjs-initial.js`));  // Initialize the wasm first.
+    await import(tmpdir.fileURL(`imported-cjs-initial.js`));  // Warm up module loading/cache first.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benchmark/esm/import-cjs.js` at line 31, The comment on the import statement
awaiting import(tmpdir.fileURL(`imported-cjs-initial.js`)) incorrectly refers to
"wasm"; update the inline comment to clarify this step is warming module
import/cache for the CJS fixture (e.g., "Warm CJS module import/cache" or "Prime
CJS import/cache for fixture") so it accurately reflects the purpose of this
import of imported-cjs-initial.js.
deps/LIEF/README.md (1)

59-59: Duplicate TOC entry for “About”.

About is listed twice in the content section; removing one will keep navigation cleaner.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/README.md` at line 59, The table of contents contains a duplicated
"About" entry; remove the duplicate markdown list item "[About](#about)" so only
one TOC link to the "About" heading remains (update the README.md content around
the existing TOC list to delete the redundant "[About](#about)" entry).
deps/LIEF/include/LIEF/Abstract/hash.hpp (2)

23-26: Add local forward declarations for all referenced visit types.

Section, Symbol, and Function are referenced in this header but not forward-declared here. Adding them locally reduces dependence on transitive declarations from included headers.

Proposed refactor
 namespace LIEF {
 class Binary;
 class Header;
 class Relocation;
 class Object;
+class Section;
+class Symbol;
+class Function;

Also applies to: 37-42

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/Abstract/hash.hpp` around lines 23 - 26, The header
declares forward declarations for Binary, Header, Relocation, and Object but
omits local forward declarations for Section, Symbol, and Function which are
referenced later; add "class Section;", "class Symbol;", and "class Function;"
alongside the existing forward declarations in LIEF/Abstract/hash.hpp (near the
declarations for Binary/Header/Relocation/Object) so the header no longer relies
on transitive declarations from other includes and to ensure Section, Symbol,
and Function are locally declared for any visitor/template code that references
them.

29-37: Collapse consecutive public: blocks for readability.

This is purely cosmetic, but merging access blocks makes the class declaration a bit cleaner.

Proposed refactor
 class LIEF_API AbstractHash : public LIEF::Hash {
-  public:
+ public:
   static LIEF::Hash::value_type hash(const Object& obj);
-
-  public:
   using LIEF::Hash::Hash;
   using LIEF::Hash::visit;
-
-  public:
   void visit(const Binary& binary)         override;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/Abstract/hash.hpp` around lines 29 - 37, Combine the
three consecutive access specifiers into a single public: block to improve
readability; move the declarations for static LIEF::Hash::value_type hash(const
Object& obj), the using declarations (using LIEF::Hash::Hash; using
LIEF::Hash::visit;) and the void visit(const Binary& binary) override;
declaration under one public: header so the symbols stay unchanged and only the
access-section layout is simplified.
benchmark/http/bench-parser-fragmented.js (2)

53-53: Use deterministic filler values to reduce benchmark noise.

At Line 53, Math.random() makes payload size/content vary between runs, which hurts result comparability. Prefer deterministic filler content so each parameter set benchmarks the same byte pattern.

Proposed change
-    header += `X-Filler${i}: ${Math.random().toString(36).substring(2)}\r\n`;
+    // Deterministic payload for reproducible benchmark runs.
+    header += `X-Filler${i}: ${'x'.repeat(16)}\r\n`;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benchmark/http/bench-parser-fragmented.js` at line 53, The filler header
generation uses Math.random() (in the header += `X-Filler${i}:
${Math.random()...}` expression), which introduces non-deterministic payloads
and benchmark noise; replace the random value with a deterministic string (for
example a fixed repeating token or a value derived solely from the loop index i)
so the header building code that appends X-Filler headers produces identical
bytes across runs and stabilizes benchmark comparisons.

62-64: Partition buffer into exactly the requested fragment count.

Lines 62-64 can produce fewer fragments than frags for some sizes. For benchmark parameter integrity, split using an exact partition strategy.

Proposed change
-  for (let i = 0; i < headerBuf.length; i += fragSize) {
-    fragments.push(headerBuf.slice(i, Math.min(i + fragSize, headerBuf.length)));
-  }
+  const actualFrags = Math.min(frags, headerBuf.length);
+  const base = Math.floor(headerBuf.length / actualFrags);
+  const rem = headerBuf.length % actualFrags;
+  let offset = 0;
+  for (let i = 0; i < actualFrags; i++) {
+    const size = base + (i < rem ? 1 : 0);
+    const next = offset + size;
+    fragments.push(headerBuf.slice(offset, next));
+    offset = next;
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benchmark/http/bench-parser-fragmented.js` around lines 62 - 64, The current
loop slicing headerBuf by fragSize can produce fewer than frags fragments;
change to explicitly create exactly frags fragments by computing baseLen =
Math.floor(headerBuf.length / frags) and remainder = headerBuf.length % frags,
then iterate i from 0 to frags-1 and slice headerBuf into fragments where the
first remainder fragments have size baseLen+1 and the rest have size baseLen,
pushing each slice into fragments; update the code that references
fragSize/fragments to use frags, headerBuf, and fragments so the benchmark
always yields exactly frags parts.
benchmark/util/text-decoder-stream.js (1)

7-7: ignoreBOM dimension is currently a no-op for this workload

Line 7 varies ignoreBOM, but Line 23 builds payload from plain text without BOM bytes, so this axis likely measures identical decode behavior.

Consider either (a) prepending an actual BOM fixture for utf-8/utf-16le, or (b) removing ignoreBOM from the matrix to avoid misleading benchmark permutations.

Also applies to: 23-23

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benchmark/util/text-decoder-stream.js` at line 7, The ignoreBOM axis in the
benchmark matrix is a no-op because the payload is built from plain text without
BOM bytes; either remove ignoreBOM from the benchmark matrix (the
matrix/ignoreBOM entry) to avoid redundant permutations, or modify the payload
construction so it includes an actual BOM for the encodings tested (e.g.,
prepend the UTF-8 or UTF-16LE BOM bytes to the payload variable used when
running tests) and ensure the decoder invocation that reads ignoreBOM sees the
BOM fixture.
deps/LIEF/include/LIEF/DWARF/types/ClassLike.hpp (1)

15-16: Use a file-specific include guard macro for ClassLike.hpp.

LIEF_DWARF_TYPE_STRUCTURE_H does not match this filename and is more collision-prone than a dedicated guard.

♻️ Suggested refactor
-#ifndef LIEF_DWARF_TYPE_STRUCTURE_H
-#define LIEF_DWARF_TYPE_STRUCTURE_H
+#ifndef LIEF_DWARF_TYPE_CLASSLIKE_H
+#define LIEF_DWARF_TYPE_CLASSLIKE_H
...
-#endif
+#endif

Also applies to: 152-152

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/DWARF/types/ClassLike.hpp` around lines 15 - 16, The
include guard in ClassLike.hpp currently uses LIEF_DWARF_TYPE_STRUCTURE_H which
is not file-specific and risks collisions; update the header guard to a unique
macro based on the filename (e.g., LIEF_DWARF_CLASSLIKE_HPP or similar) and
replace both the `#ifndef`/`#define` pair and the corresponding `#endif` comment if present
so the guard fully matches the new macro; ensure the same change is applied
consistently in the duplicate occurrence noted at line 152.
deps/LIEF/include/LIEF/DWARF/types/Array.hpp (1)

18-20: Consider adding direct includes for used STL types.

This header publicly uses std::unique_ptr, std::string, and size_t; direct includes reduce transitive-include fragility.

Suggested fix
+#include <cstddef>
+#include <memory>
+#include <string>
+
 #include "LIEF/visibility.h"
 #include "LIEF/DWARF/Type.hpp"

Also applies to: 31-42

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/DWARF/types/Array.hpp` around lines 18 - 20, The
header LIEF/DWARF/types/Array.hpp publicly uses STL symbols (std::unique_ptr,
std::string, size_t) but doesn't include their headers; add direct includes for
<memory> (for std::unique_ptr), <string> (for std::string) and <cstddef> (for
size_t) at the top of Array.hpp so the public API doesn't rely on transitive
includes; update the same pattern for the other affected DWARF type headers
(lines 31-42 region) that expose these types.
deps/LIEF/include/LIEF/DWARF/types/Subroutine.hpp (1)

18-20: Add direct STL includes for header self-containment.

std::vector and std::unique_ptr are used in the public type alias but not directly included.

Suggested fix
+#include <memory>
+#include <vector>
+
 #include "LIEF/visibility.h"
 #include "LIEF/DWARF/Type.hpp"

Also applies to: 32-32

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/DWARF/types/Subroutine.hpp` around lines 18 - 20, The
header Subroutine.hpp is not self-contained because it uses std::vector and
std::unique_ptr in its public type alias but doesn't include <vector> and
<memory>; update the includes at the top of LIEF/DWARF/types/Subroutine.hpp by
adding the direct STL headers <vector> and <memory> so the public alias (the
std::vector/std::unique_ptr usage in this file) compiles without relying on
transitive includes.
deps/LIEF/include/LIEF/DWARF/Variable.hpp (1)

18-24: Add explicit standard headers used by this interface.

Please avoid relying on transitive includes for iterator/string/utility/cstddef types in this public header.

Proposed include hardening
 #include <memory>
+#include <cstddef>
+#include <iterator>
+#include <string>
+#include <utility>
 
 #include "LIEF/visibility.h"
 #include "LIEF/errors.hpp"

Also applies to: 40-43, 57-57, 98-98, 104-104

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/DWARF/Variable.hpp` around lines 18 - 24, The public
header LIEF/DWARF/Variable.hpp is relying on transitive includes for standard
types; update the header (and the other affected ranges) to explicitly include
the needed standard headers (e.g. <string>, <iterator>, <utility>, <cstddef> and
any other STL headers used by symbols in the file such as std::string,
std::size_t, std::move, iterator traits) so the DWARF::Variable class and any
functions or typedefs in this header do not depend on indirect includes; modify
the `#include` block around the top of Variable.hpp to add these explicit standard
headers and repeat the same hardening for the other mentioned locations (lines
40-43, 57, 98, 104) where public headers expose std types.
deps/LIEF/include/LIEF/DWARF/Type.hpp (1)

18-23: Make standard-library includes explicit in this public header.

This file uses iterator/string/utility/cstddef symbols that are currently satisfied via transitive includes.

Proposed include hardening
 #include <memory>
+#include <cstddef>
+#include <iterator>
+#include <string>
+#include <utility>

Also applies to: 68-71, 85-85, 166-166

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/DWARF/Type.hpp` around lines 18 - 23, The public
header Type.hpp is relying on transitive includes for
iterator/string/utility/cstddef symbols used by class Type and its public API;
add explicit standard headers (`#include <iterator>`, `#include <string>`,
`#include <utility>`, `#include <cstddef>`) to the top of LIEF/DWARF/Type.hpp so all
referenced STL types and size_t/ptrdiff_t are directly provided, and apply the
same explicit-include hardening to the other affected header sections (the
ranges noted in the review) where iterator/string/utility/cstddef are used.
deps/LIEF/include/LIEF/ART/types.hpp (1)

20-21: Trim unused includes in this minimal type-alias header.

Only <cstdint> is needed for the current content.

♻️ Proposed cleanup
 #include <cstdint>
-#include <vector>
-#include "LIEF/iterators.hpp"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/ART/types.hpp` around lines 20 - 21, This header
currently includes <vector> and "LIEF/iterators.hpp" but only integer typedefs
are used; replace those unused includes with <cstdint> only so types.hpp
contains just the necessary header for fixed-width integer types (update the
include list in types.hpp accordingly and remove the references to vector and
LIEF/iterators.hpp).
deps/LIEF/include/LIEF/Abstract/json.hpp (1)

16-17: Minor: Include guard endif comment mismatch.

The include guard is defined as LIEF_ABSTRACT_JSON_H (lines 16-17), but the #endif comment on line 29 references LIEF_JSON_SUPPORT. This is a cosmetic inconsistency in the vendored LIEF code.

Suggested fix
-#endif // LIEF_JSON_SUPPORT
+#endif // LIEF_ABSTRACT_JSON_H

Also applies to: 29-29

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/Abstract/json.hpp` around lines 16 - 17, The include
guard macro is defined as LIEF_ABSTRACT_JSON_H but the trailing `#endif` comment
references LIEF_JSON_SUPPORT; update the endif comment to match the actual guard
by changing the comment after `#endif` to LIEF_ABSTRACT_JSON_H (or alternatively
rename the guard macro to LIEF_JSON_SUPPORT if that was intended) so the symbol
names are consistent; adjust the comment associated with the `#endif` in json.hpp
to reference LIEF_ABSTRACT_JSON_H and keep no other changes.
deps/LIEF/include/LIEF/ART/Parser.hpp (1)

97-100: Remove stale commented-out declaration from the public header.

This block is dead/commented code and should be dropped to keep the parser API surface clean.

🧹 Proposed cleanup
-  //// Parse a **Single** java.lang.DexCache object
-  //template<typename ART_T, typename PTR_T>
-  //void parse_class_roots(size_t object_offset);
-
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/ART/Parser.hpp` around lines 97 - 100, Remove the
stale commented-out template declaration for parse_class_roots from the public
header Parser.hpp; specifically delete the commented block containing
"//template<typename ART_T, typename PTR_T> //void parse_class_roots(size_t
object_offset);" so the Parser API surface no longer exposes dead/commented code
and the header stays clean (look for the parse_class_roots comment block inside
deps/LIEF/include/LIEF/ART/Parser.hpp).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/coverage-linux.yml:
- Around line 74-75: The "Build and test" step currently masks failures by
appending "|| exit 0" to the run command; remove the "|| exit 0" suffix from the
NODE_V8_COVERAGE=coverage/tmp make test-cov ... line so that `make test-cov`
returns a non-zero exit code and the CI job fails on test regressions (if you
need to handle flakes, use GitHub's `continue-on-error` or rerun-on-failure
strategies instead of forcing a zero exit).

In @.github/workflows/daily-wpt-fyi.yml:
- Line 39: The workflow uses runs-on: ubuntu-24.04-arm while setup-node is
selecting nightlies filtered for linux-x64 (or node-version: "nightly"), causing
ARM runners to attempt downloading x64 artifacts; fix by either (A) change
runs-on from ubuntu-24.04-arm to a x64 runner (e.g., ubuntu-24.04) so linux-x64
nightlies exist, or (B) keep runs-on: ubuntu-24.04-arm but switch the Node
install step (actions/setup-node) to use a published LTS (e.g., node-version:
"lts/*") or an unofficial arm64 nightly provider/explicit architecture mapping;
locate the runner declaration runs-on: ubuntu-24.04-arm and the
actions/setup-node step (node-version / filter settings) and apply one of these
changes consistently.

In @.github/workflows/notify-on-review-wanted.yml:
- Around line 15-16: The workflow sets runs-on: ubuntu-24.04-arm which is
incompatible with the amd64-only Docker action rtCamp/action-slack-notify;
change the runner to an amd64 host (e.g., revert runs-on to ubuntu-24.04) or
alternatively replace the usage of rtCamp/action-slack-notify with an action
that publishes multi-arch images or run the notify step on an amd64-specific job
(e.g., add a separate job using runs-on: ubuntu-24.04 and call
rtCamp/action-slack-notify there) so the Docker image architecture matches the
runner.

In @.github/workflows/update-openssl.yml:
- Line 15: The workflow currently uses the lightweight runner via the runs-on:
ubuntu-slim setting which may lack CPU/RAM for the OpenSSL updater job; change
the runner to a standard Ubuntu image by replacing the runs-on: ubuntu-slim
value with runs-on: ubuntu-24.04 so the job runs on a 2 vCPU/8 GB runner and has
sufficient resources for downloads, apt installs, and regen steps.

In `@benchmark/common.js`:
- Around line 121-123: The current boolean parsing in the configs loop silently
coerces any non-'true' string to false; update the logic around
configType/configValue so that when configType === 'boolean' you only accept the
exact strings 'true' or 'false' (map to true/false respectively) and otherwise
throw or return a clear error mentioning the option key and invalid value (use
the existing identifiers configs, cliOptions, configType, configValue and the
key variable to locate the code), then push the validated boolean into
cliOptions[key].

In `@deps/LIEF/include/LIEF/Abstract/Relocation.hpp`:
- Around line 61-63: Relocation::size(size_t) currently casts size_t to uint8_t
unconditionally which silently truncates values >255; update the setter to
validate the incoming size (e.g., if (size >
std::numeric_limits<uint8_t>::max()) throw std::out_of_range("Relocation size
out of range") or assert/return an error) before performing the cast and
assignment to size_; use std::numeric_limits<uint8_t>::max() for the bound and
apply the same validation-and-safe-cast change to the other places in this
header where a size_t is narrowed to uint8_t (the other setters/assignments
surrounding size_) so truncation cannot occur silently.

In `@deps/LIEF/include/LIEF/BinaryStream/BinaryStream.hpp`:
- Around line 468-472: BinaryStream::read_array currently advances pos_ via
increment_pos(sizeof(T) * size) even when peek_array<T>(size) returns nullptr,
corrupting stream state on failure; fix read_array by calling
peek_array<T>(size) into tmp, check if tmp is non-null before calling
increment_pos(sizeof(T) * size), and only advance the position when peek_array
succeeded (return nullptr unchanged otherwise), preserving stream state;
reference BinaryStream::read_array, BinaryStream::peek_array, increment_pos and
pos_.
- Around line 238-240: The can_read checks currently use strict < comparisons
and reject reads that end exactly at EOF; update all can_read overloads (the
methods named can_read in BinaryStream.hpp) to allow end-aligned reads by
replacing the (offset + size) < this->size() check with a safe <= style check —
e.g. validate non-negative offset/size and use offset <= this->size() - size (or
(offset + size) <= this->size() with care for overflow) so reads that end at EOF
succeed; apply the same fix to every can_read variant in the file.

In `@deps/LIEF/include/LIEF/BinaryStream/VectorStream.hpp`:
- Around line 85-90: In VectorStream::read_at the bounds check uses offset +
size which can overflow; replace it with an overflow-safe check such as
verifying size <= stream_size and offset <= stream_size and size <= stream_size
- offset (i.e., check if size > stream_size - offset) before returning
binary_.data() + offset; update the condition in read_at(uint64_t offset,
uint64_t size, uint64_t /*va*/) const to use this subtraction-based check and
still return make_error_code(lief_errors::read_error) on failure.

In `@deps/LIEF/include/LIEF/COFF/AuxiliarySymbols/AuxiliaryCLRToken.hpp`:
- Around line 66-68: The accessor reserved() in class AuxiliaryCLRToken
currently returns aux_type_ instead of the stored reserved_ byte; update the
reserved() method to return reserved_ (i.e., return the reserved_ member) so
callers receive the actual reserved value; locate the reserved() function in
AuxiliaryCLRToken and change its return expression from aux_type_ to reserved_.

In `@deps/LIEF/include/LIEF/DWARF/enums.hpp`:
- Around line 29-34: The EH_ENCODING enum has a collision: SLEB128 is 0x09 but
SIGNED is also set to 0x09; change EH_ENCODING::SIGNED to 0x08 so it matches the
DW_EH_PE_signed canonical value (make the edit where the enum lists SLEB128,
SDATA2, SDATA4, SDATA8, SIGNED and update SIGNED from 0x09 to 0x08). Ensure only
the numeric literal for SIGNED is changed and rebuild to verify no other
references assume the old value.

In `@deps/LIEF/include/LIEF/DWARF/types/Base.hpp`:
- Around line 31-54: The ENCODING enum uses implicit sequential values but must
use the DWARF DW_ATE_* numeric codes; update Base::ENCODING so each enumerator
has an explicit integer value (NONE = 0 and then assign the exact DW_ATE_*
numeric constants for SIGNED, SIGNED_CHAR, UNSIGNED, UNSIGNED_CHAR, FLOAT,
BOOLEAN, ADDRESS), keep the existing comments but make them match the assigned
numeric values, and ensure any serialization or comparisons cast the enum to its
underlying integer type when interacting with DWARF data.

In `@deps/LIEF/include/LIEF/DWARF/types/ClassLike.hpp`:
- Around line 84-87: ClassLike::classof currently checks Type::KIND for CLASS,
STRUCT, and UNION but omits Type::KIND::PACKED, causing RTTI-like checks to fail
for Packed (which derives from ClassLike); update ClassLike::classof to also
return true when kind == Type::KIND::PACKED so Packed instances are recognized,
referencing the ClassLike::classof method, the Type::KIND enum, and the Packed
subclass.

In `@deps/LIEF/include/LIEF/DWARF/types/SetTy.hpp`:
- Around line 37-43: The pointer-like operators in SetTy (operator-> and
operator*) directly access underlying_.get(), which bypasses lazy
initialization; change operator-> to call underlying_type() and return a const
Type* from that result, and change operator* to call underlying_type() and
return a const Type& (not a pointer) to match normal dereference semantics and
sibling wrappers—replace uses of underlying_.get() in operator-> and operator*
with underlying_type() and adjust the return type of operator* accordingly.

In `@deps/LIEF/include/LIEF/DyldSharedCache/utils.hpp`:
- Around line 42-50: The function is_shared_cache(const uint8_t* buffer, size_t
size) currently constructs a LIEF::SpanStream with buffer unconditionally; add
an early guard that if buffer == nullptr and size > 0 (or alternatively if size
!= 0 && buffer == nullptr) the function returns false (or another chosen error
behavior) before creating LIEF::SpanStream to avoid dereferencing a null
pointer; update the is_shared_cache(const std::vector<uint8_t>&) call site if
needed to rely on the guarded behavior.

In `@deps/LIEF/include/LIEF/ELF/DynamicEntryFlags.hpp`:
- Around line 99-105: raw_flags() currently sums flag enum values which corrupts
bitfields (especially FLAGS_1 where BASE is encoded per flag); change the
reconstruction to bitwise OR the raw flag bitmasks instead of adding, and when
handling FLAGS_1 ensure you mask out any per-flag BASE encoding so only the
intended bit positions are OR'ed together; update the lambda in raw_flags() to
compute a uint64_t flag_value from each FLAG (e.g., derive the true bitmask via
masking or a helper that returns the canonical bitmask) and combine with value
using '|' rather than '+' so flags_list_t, FLAG and FLAGS_1/BASE are handled
correctly.

In `@deps/LIEF/include/LIEF/ELF/DynamicEntryRunPath.hpp`:
- Line 34: The file exposes base constructors via "using
DynamicEntry::DynamicEntry;" which allows creating DynamicEntryRunPath instances
with wrong tags; remove that using-declaration and instead provide explicit
constructors for DynamicEntryRunPath that either (a) accept only validated
parameters or (b) forward to the base constructor but assert/validate that the
tag equals TAG::RUNPATH (or throw/delete if not), and delete any inherited
constructors that would permit arbitrary TAG values; update both the
default/parameterized overloads (lines around the existing using and the block
at 36-51) to enforce TAG::RUNPATH so instances cannot be constructed with
non-RUNPATH tags.

In `@deps/LIEF/include/LIEF/ELF/NoteDetails/core/CoreSigInfo.hpp`:
- Around line 44-46: The mutator signatures for signo, sigcode, and sigerrno
currently take uint32_t but the corresponding accessors use int32_t (lines
37-43), causing unsafe round-trips for negative values; change the parameter
types of void signo(uint32_t value), void sigcode(uint32_t value), and void
sigerrno(uint32_t value) to use int32_t so they match the accessor types and
preserve sign.

In `@LICENSE`:
- Around line 915-935: Insert the missing MIT redistribution condition into the
sorttable.js license block: locate the "The MIT Licence, for code from
kryogenix.org" / "subject to the following conditions:" paragraph and add the
standard clause beginning "The above copyright notice and this permission notice
shall be included in all copies or substantial portions of the Software."
immediately after "subject to the following conditions:" so the notice is
required before the warranty disclaimer.

---

Outside diff comments:
In @.github/workflows/lint-release-proposal.yml:
- Around line 33-36: The workflow prints an unset variable ACTUAL instead of the
actual commit subject; update the diagnostics to use COMMIT_SUBJECT (or assign
ACTUAL="$COMMIT_SUBJECT") so the "Actual:" line shows the real commit title; change
the echo that references $ACTUAL to echo the COMMIT_SUBJECT variable to aid
debugging when the grep check against EXPECTED_TITLE fails.

---

Minor comments:
In `@benchmark/internal/util_isinsidenodemodules.js`:
- Around line 45-48: The recursion base case in nestCallStack only checks for
depth === 1, so non-positive depth (0 or negative from a custom stackCount)
causes runaway recursion; update the base-case check in nestCallStack to handle
non-positive values (e.g., use if (depth <= 1) return callback();) and ideally
validate/sanitize the incoming depth/stackCount before recursing to ensure
non-positive inputs short-circuit instead of recursing indefinitely.
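A minimal sketch of the guarded base case described above (assuming `nestCallStack` takes `(depth, callback)` as in the benchmark; the wrapper shown here is illustrative, not the benchmark's exact code):

```javascript
'use strict';
// Recurse `depth` levels before invoking the callback; any depth <= 1
// (including 0 or a negative value from a custom stackCount) hits the base
// case immediately instead of recursing without bound.
function nestCallStack(depth, callback) {
  if (depth <= 1) return callback();
  return nestCallStack(depth - 1, callback);
}
```

With this guard, `nestCallStack(0, cb)` and `nestCallStack(-5, cb)` each invoke `cb` once rather than overflowing the call stack.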

In `@benchmark/util/text-decoder-stream.js`:
- Around line 25-40: The switch on the variable `type` that creates `buf`
(handling 'SharedArrayBuffer', 'ArrayBuffer', 'Buffer') lacks a default branch
causing `buf` to remain undefined for unexpected values; add a `default` case
that throws a clear error (e.g., TypeError or Error) including the invalid
`type` value so the code fails fast and surfaces the root cause instead of
erroring later when `buf` is used.

In `@deps/LIEF/include/LIEF/Abstract/DebugInfo.hpp`:
- Around line 53-59: The downcast example in DebugInfo.hpp and the static_assert
text are inconsistent: replace the incorrect references to
Instruction/inst/find_function with a correct DebugInfo example (e.g., show
obtaining a DebugInfo pointer via as<LIEF::debug::DebugInfo>() and calling an
appropriate DebugInfo method), and update the static_assert message to mention
DebugInfo (not Instruction); locate and fix the example block and the
static_assert string in the DebugInfo class (symbols: DebugInfo, as<...>(), and
the static_assert) and make the example and assertion wording consistent with
the DebugInfo API.

In `@deps/LIEF/include/LIEF/ART/java_structures.hpp`:
- Around line 300-304: Fix the inconsistent and typo'd comments around the
jstring structure: correct "chat16_t" to "char16_t", remove the duplicated "as"
in "encoded as as char16_t", and clarify the count[0] LSB mapping (e.g.,
"count[0] (LSB) == 1 -> compressed (char), == 0 -> char16_t") so the meaning is
unambiguous; also align the version/type comments near the ART_56 namespace and
the nearby block (lines referenced around the jstring/template<class T =
no_brooks_read_barrier_t> and the ART_56 namespace) so they consistently reflect
ART_56 semantics and avoid conflicting wording across the ranges mentioned (303
and 363–368).

In `@deps/LIEF/include/LIEF/DEX/instructions.hpp`:
- Around line 352-353: The closing namespace comments are reversed—swap the
trailing comments on the two closing braces so the first closing brace is
labeled "Namespace DEX" and the final closing brace is labeled "Namespace LIEF";
update the end-comment text adjacent to the braces that close the nested
namespaces (matching the namespace declarations for LIEF and DEX) so the
comments accurately reflect the closed namespaces.

In `@deps/LIEF/include/LIEF/DEX/MapItem.hpp`:
- Line 69: Fix the typo in the API comment within the MapItem declaration:
change the comment text "Reserved value (likely for alignment prupose)" to
"Reserved value (likely for alignment purpose)"; locate the comment associated
with the MapItem class in MapItem.hpp (symbol: MapItem) and update the spelling
only.

In `@deps/LIEF/include/LIEF/DWARF/editor/Variable.hpp`:
- Around line 39-40: Typo in the public API docstring: change "revelant" to
"relevant" in the comment above the Variable class (or declaration) that
references set_stack_offset; update the documentation comment in Variable.hpp so
the sentence reads "...relevant in the case of a static global variable..." to
keep public API docs correct.

In `@deps/LIEF/include/LIEF/DyldSharedCache/MappingInfo.hpp`:
- Around line 84-86: The end_address() method currently returns address() +
size() which can overflow; update MappingInfo::end_address() to detect overflow
by comparing size() to (std::numeric_limits<uint64_t>::max() - address()) and
handle the case (e.g., return std::numeric_limits<uint64_t>::max() or signal an
error) instead of doing the unchecked addition; reference the existing
end_address(), address(), and size() methods when making the change.

In `@deps/LIEF/include/LIEF/ELF/NoteDetails/core/CoreSigInfo.hpp`:
- Line 65: Fix the typo in the namespace closing comment in CoreSigInfo.hpp by
changing the comment that reads "} // namepsace ELF" to the correct "} //
namespace ELF" so the trailing comment matches the actual namespace name (file
contains the closing brace for namespace ELF near the end of CoreSigInfo.hpp).

In `@deps/LIEF/include/LIEF/ELF/NoteDetails/properties/Needed.hpp`:
- Around line 30-32: The classof helper in Needed.hpp dereferences prop
unconditionally; update NoteGnuProperty::properties::Needed::classof (the static
bool classof(const NoteGnuProperty::Property* prop)) to first check that prop is
not null and return false when it is, before evaluating prop->type() ==
NoteGnuProperty::Property::TYPE::NEEDED; this avoids null-pointer crashes in
tolerant call paths while preserving the existing type check for non-null
pointers.

In `@deps/LIEF/include/LIEF/ELF/NoteDetails/QNXStack.hpp`:
- Line 68: The closing inline comment for the ELF namespace has a typo:
"namepsace ELF"; update that comment to "namespace ELF" so the closing comment
for the namespace matches the correct spelling and convention (look for the
closing bracket and comment after the namespace ELF block in QNXStack.hpp).

---

Nitpick comments:
In `@benchmark/esm/import-cjs.js`:
- Line 31: The comment on the import statement awaiting
import(tmpdir.fileURL(`imported-cjs-initial.js`)) incorrectly refers to "wasm";
update the inline comment to clarify this step is warming module import/cache
for the CJS fixture (e.g., "Warm CJS module import/cache" or "Prime CJS
import/cache for fixture") so it accurately reflects the purpose of this import
of imported-cjs-initial.js.

In `@benchmark/http/bench-parser-fragmented.js`:
- Line 53: The filler header generation uses Math.random() (in the header +=
`X-Filler${i}: ${Math.random()...}` expression), which introduces
non-deterministic payloads and benchmark noise; replace the random value with a
deterministic string (for example a fixed repeating token or a value derived
solely from the loop index i) so the header building code that appends X-Filler
headers produces identical bytes across runs and stabilizes benchmark
comparisons.
- Around line 62-64: The current loop slicing headerBuf by fragSize can produce
fewer than frags fragments; change to explicitly create exactly frags fragments
by computing baseLen = Math.floor(headerBuf.length / frags) and remainder =
headerBuf.length % frags, then iterate i from 0 to frags-1 and slice headerBuf
into fragments where the first remainder fragments have size baseLen+1 and the
rest have size baseLen, pushing each slice into fragments; update the code that
references fragSize/fragments to use frags, headerBuf, and fragments so the
benchmark always yields exactly frags parts.
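The splitting scheme above can be sketched as follows (`splitExact` is a hypothetical helper name; the benchmark does this inline with `headerBuf` and `frags`):

```javascript
'use strict';
// Split `buf` into exactly `n` contiguous fragments covering every byte:
// the first (buf.length % n) fragments carry one extra byte.
function splitExact(buf, n) {
  const baseLen = Math.floor(buf.length / n);
  const remainder = buf.length % n;
  const fragments = [];
  let offset = 0;
  for (let i = 0; i < n; i++) {
    const size = baseLen + (i < remainder ? 1 : 0);
    fragments.push(buf.subarray(offset, offset + size));
    offset += size;
  }
  return fragments;
}
```

This always yields exactly `n` parts, whereas slicing by a fixed `Math.ceil(buf.length / n)` can come up short: a 10-byte buffer sliced in chunks of `Math.ceil(10 / 6) = 2` produces only 5 pieces when 6 were requested.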

In `@benchmark/napi/function_call/index.js`:
- Line 27: Replace the hardcoded path string in the console.error call with
__filename so the message reports the actual file path; specifically update the
console.error invocation that currently logs 'misc/function_call/index.js
Node-API binding failed to load' to use a template including __filename and the
same descriptive text (e.g., `${__filename}: Node-API binding failed to load`),
and apply the same pattern to the similar console.error that logs the V8 binding
failure (replace its literal path with `${__filename}: V8 Binding failed to
load`) so both messages use the runtime filename.

In `@benchmark/util/deprecate.js`:
- Line 9: The entry emitWarningSync in the deprecate config uses numeric flags
[1, 0] while modifyPrototype uses boolean flags; update emitWarningSync to use
[true, false] for consistency (i.e., replace [1, 0] with [true, false] in the
emitWarningSync array) and verify any consumer code that reads emitWarningSync
(search for references to emitWarningSync) accepts booleans—if a consumer
expects numeric values, adjust that consumer (or coerce) so the new booleans are
handled correctly.

In `@benchmark/util/text-decoder-stream.js`:
- Line 7: The ignoreBOM axis in the benchmark matrix is a no-op because the
payload is built from plain text without BOM bytes; either remove ignoreBOM from
the benchmark matrix (the matrix/ignoreBOM entry) to avoid redundant
permutations, or modify the payload construction so it includes an actual BOM
for the encodings tested (e.g., prepend the UTF-8 or UTF-16LE BOM bytes to the
payload variable used when running tests) and ensure the decoder invocation that
reads ignoreBOM sees the BOM fixture.

In `@deps/LIEF/include/LIEF/Abstract/hash.hpp`:
- Around line 23-26: The header declares forward declarations for Binary,
Header, Relocation, and Object but omits local forward declarations for Section,
Symbol, and Function which are referenced later; add "class Section;", "class
Symbol;", and "class Function;" alongside the existing forward declarations in
LIEF/Abstract/hash.hpp (near the declarations for
Binary/Header/Relocation/Object) so the header no longer relies on transitive
declarations from other includes and to ensure Section, Symbol, and Function are
locally declared for any visitor/template code that references them.
- Around line 29-37: Combine the three consecutive access specifiers into a
single public: block to improve readability; move the declarations for static
LIEF::Hash::value_type hash(const Object& obj), the using declarations (using
LIEF::Hash::Hash; using LIEF::Hash::visit;) and the void visit(const Binary&
binary) override; declaration under one public: header so the symbols stay
unchanged and only the access-section layout is simplified.

In `@deps/LIEF/include/LIEF/Abstract/json.hpp`:
- Around line 16-17: The include guard macro is defined as LIEF_ABSTRACT_JSON_H
but the trailing `#endif` comment references LIEF_JSON_SUPPORT; update the endif
comment to match the actual guard by changing the comment after `#endif` to
LIEF_ABSTRACT_JSON_H (or alternatively rename the guard macro to
LIEF_JSON_SUPPORT if that was intended) so the symbol names are consistent;
adjust the comment associated with the `#endif` in json.hpp to reference
LIEF_ABSTRACT_JSON_H and make no other changes.

In `@deps/LIEF/include/LIEF/ART/Parser.hpp`:
- Around line 97-100: Remove the stale commented-out template declaration for
parse_class_roots from the public header Parser.hpp; specifically delete the
commented block containing "//template<typename ART_T, typename PTR_T> //void
parse_class_roots(size_t object_offset);" so the Parser API surface no longer
exposes dead/commented code and the header stays clean (look for the
parse_class_roots comment block inside deps/LIEF/include/LIEF/ART/Parser.hpp).

In `@deps/LIEF/include/LIEF/ART/types.hpp`:
- Around line 20-21: This header currently includes <vector> and
"LIEF/iterators.hpp" but only integer typedefs are used; replace those unused
includes with <cstdint> only so types.hpp contains just the necessary header for
fixed-width integer types (update the include list in types.hpp accordingly and
remove the references to vector and LIEF/iterators.hpp).

In `@deps/LIEF/include/LIEF/DWARF/Type.hpp`:
- Around line 18-23: The public header Type.hpp is relying on transitive
includes for iterator/string/utility/cstddef symbols used by class Type and its
public API; add explicit standard headers (`#include <iterator>`, `#include
<string>`, `#include <utility>`, `#include <cstddef>`) to the top of
LIEF/DWARF/Type.hpp so all referenced STL types and size_t/ptrdiff_t are
directly provided, and apply the same explicit-include hardening to the other
affected header sections (the ranges noted in the review) where
iterator/string/utility/cstddef are used.

In `@deps/LIEF/include/LIEF/DWARF/types/Array.hpp`:
- Around line 18-20: The header LIEF/DWARF/types/Array.hpp publicly uses STL
symbols (std::unique_ptr, std::string, size_t) but doesn't include their
headers; add direct includes for <memory> (for std::unique_ptr), <string> (for
std::string) and <cstddef> (for size_t) at the top of Array.hpp so the public
API doesn't rely on transitive includes; update the same pattern for the other
affected DWARF type headers (lines 31-42 region) that expose these types.

In `@deps/LIEF/include/LIEF/DWARF/types/ClassLike.hpp`:
- Around line 15-16: The include guard in ClassLike.hpp currently uses
LIEF_DWARF_TYPE_STRUCTURE_H which is not file-specific and risks collisions;
update the header guard to a unique macro based on the filename (e.g.,
LIEF_DWARF_CLASSLIKE_HPP or similar) and replace both the `#ifndef`/`#define` and
the corresponding `#endif` comment if present so the guard fully matches the new
macro; ensure the same change is applied consistently in the duplicate
occurrence noted at line 152.

In `@deps/LIEF/include/LIEF/DWARF/types/Subroutine.hpp`:
- Around line 18-20: The header Subroutine.hpp is not self-contained because it
uses std::vector and std::unique_ptr in its public type alias but doesn't
include <vector> and <memory>; update the includes at the top of
LIEF/DWARF/types/Subroutine.hpp by adding the direct STL headers <vector> and
<memory> so the public alias (the std::vector/std::unique_ptr usage in this
file) compiles without relying on transitive includes.

In `@deps/LIEF/include/LIEF/DWARF/Variable.hpp`:
- Around line 18-24: The public header LIEF/DWARF/Variable.hpp is relying on
transitive includes for standard types; update the header (and the other
affected ranges) to explicitly include the needed standard headers (e.g.
<string>, <iterator>, <utility>, <cstddef> and any other STL headers used by
symbols in the file such as std::string, std::size_t, std::move, iterator
traits) so the DWARF::Variable class and any functions or typedefs in this
header do not depend on indirect includes; modify the `#include` block around the
top of Variable.hpp to add these explicit standard headers and repeat the same
hardening for the other mentioned locations (lines 40-43, 57, 98, 104) where
public headers expose std types.

In `@deps/LIEF/README.md`:
- Line 59: The table of contents contains a duplicated "About" entry; remove the
duplicate markdown list item "[About](#about)" so only one TOC link to the
"About" heading remains (update the README.md content around the existing TOC
list to delete the redundant "[About](#about)" entry).

Comment on lines +74 to 75
- name: Build and test
run: NODE_V8_COVERAGE=coverage/tmp make test-cov -j4 V=1 TEST_CI_ARGS="-p dots --measure-flakiness 9" || exit 0

⚠️ Potential issue | 🟠 Major

Do not mask test failures in coverage CI.

The || exit 0 on Line 75 makes the job pass even when make test-cov fails, which hides real regressions and weakens coverage gating.

Suggested patch
-      - name: Build and test
-        run: NODE_V8_COVERAGE=coverage/tmp make test-cov -j4 V=1 TEST_CI_ARGS="-p dots  --measure-flakiness 9" || exit 0
+      - name: Build and test
+        run: NODE_V8_COVERAGE=coverage/tmp make test-cov -j4 V=1 TEST_CI_ARGS="-p dots  --measure-flakiness 9"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/coverage-linux.yml around lines 74 - 75, The "Build and
test" step currently masks failures by appending "|| exit 0" to the run command;
remove the "|| exit 0" suffix from the NODE_V8_COVERAGE=coverage/tmp make
test-cov ... line so that `make test-cov` returns a non-zero exit code and the
CI job fails on test regressions (if you need to handle flakes, use GitHub's
`continue-on-error` or rerun-on-failure strategies instead of forcing a zero
exit).

node-version: ${{ fromJSON(needs.collect-versions.outputs.matrix) }}
fail-fast: false
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-24.04-arm

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n .github/workflows/daily-wpt-fyi.yml | head -50

Repository: nodesource/nsolid

Length of output: 1733


🏁 Script executed:

#!/bin/bash
# Check latest x64 nightly and whether arm64 artifact exists for it
json="$(curl -sS https://nodejs.org/download/nightly/index.json 2>/dev/null || echo "[]")"

if [[ "$json" != "[]" ]]; then
  # Get latest x64 nightly
  x64_latest=$(echo "$json" | jq -r '[.[] | select(.files[]? | contains("linux-x64"))][0].version' 2>/dev/null)
  # Check if that version has arm64 artifact
  arm64_match=$(echo "$json" | jq -r --arg v "$x64_latest" '[.[] | select(.version == $v and (.files[]? | contains("linux-arm64")))] | length' 2>/dev/null)
  
  echo "Latest x64 nightly: $x64_latest"
  echo "ARM64 artifact exists for this version: $arm64_match"
  
  # Also check the latest arm64 nightly
  arm64_latest=$(echo "$json" | jq -r '[.[] | select(.files[]? | contains("linux-arm64"))][0].version' 2>/dev/null)
  echo "Latest arm64 nightly: $arm64_latest"
  
  # Are they the same?
  if [[ "$x64_latest" == "$arm64_latest" ]]; then
    echo "Result: Both architectures have the same latest nightly version - no mismatch"
  else
    echo "Result: Architecture mismatch detected - x64 latest != arm64 latest"
  fi
else
  echo "Could not fetch nightly index"
fi

Repository: nodesource/nsolid

Length of output: 1733


🌐 Web query:

NodeJS nightly builds linux-x64 linux-arm64 availability parity

💡 Result:

Node.js official nightly builds are not in parity between linux-x64 and linux-arm64.

  • In the most recent nightly directory (v26.0.0-nightly20260302…), the published Linux binaries include linux-x64 (plus ppc64le and s390x), but there are no linux-arm64 tarballs in that nightly output. [1]
  • By contrast, official release builds do publish both linux-x64 and linux-arm64 tarballs (example: /dist/latest/ includes node-…-linux-arm64.* and node-…-linux-x64.*). [2]

If you need arm64 Linux binaries when nightlies don’t provide them, the Node.js project also points to unofficial-builds for additional/experimental platform binaries (not guaranteed / community-maintained). [3]

Sources: [1] [2] [3]


🌐 Web query:

GitHub Actions ubuntu-24.04-arm runner architecture

💡 Result:

ubuntu-24.04-arm GitHub Actions runners are Linux on ARM64 (AArch64) (i.e., uname -m would be aarch64). They’re a partner image provided by Arm Limited for GitHub-hosted Arm runners. [1], [2]

Sources: [1] (actions/partner-runner-images), [2] (GitHub Changelog)


🌐 Web query:

setup-node action nightly version architecture mismatch linux-x64 arm64

💡 Result:

An “architecture mismatch” with actions/setup-node (e.g., it downloads linux-x64 but your runner is arm64) almost always comes from one of these:

  1. Your workflow forces x64 on an ARM runner
    If you set architecture: x64 (or inherit it from a reusable workflow), setup-node will try to install x64 even on an arm64 machine. setup-node supports an architecture input and otherwise uses the system architecture by default. [1]

  2. You’re using “nightly”, and the action is pulling the wrong artifact for your platform
    Nightly builds are fetched from Node’s nightly download site (not the normal LTS cache path). [2] If the resolved nightly + arch combo doesn’t exist (or a mirror/cache only has x64), you can end up with x64 being selected/downloaded.

  3. Caching/mirrors may be x64-only for the fast path
    The actions/node-versions packaging/caching has historically been a point of confusion for ARM users (requests to include arm64 artifacts). [3][4]

Fix (recommended)

  • Don’t set architecture unless you truly need cross-arch.
  • Or, set it explicitly to ARM64 on ARM runners:
- uses: actions/setup-node@v6
  with:
    node-version: '24-nightly'
    architecture: 'arm64'

Also ensure you’re actually running on ARM (self-hosted or an ARM runner image). Quick check:

- run: uname -m

If uname -m prints aarch64/arm64, use architecture: arm64 (or omit architecture). If it prints x86_64, use x64 (or omit architecture).

Sources: [1] https://github.com/actions/setup-node [2] https://git.ari.lt/actions/setup-node/src/branch/main/docs/advanced-usage.md [3] actions/setup-node#857 [4] https://github.com/actions/node-versions


Nightly builds lack ARM64 artifacts; ARM runner will fail with x64 filter.

The workflow uses ubuntu-24.04-arm (ARM64) but filters nightlies by linux-x64. Node.js official nightly builds do not publish linux-arm64 tarballs—only linux-x64 (plus ppc64le and s390x). This causes setup-node to attempt downloading an x64 binary on an ARM runner, which will fail.

The suggested fix (changing the filter to linux-arm64) won't work because arm64 nightlies don't exist. Either:

  1. Use LTS versions instead of nightlies on ARM runners, or
  2. Switch to unofficial-builds that provide arm64 nightlies, or
  3. Explicitly set architecture: arm64 in setup-node (though this still doesn't solve missing arm64 nightly artifacts).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/daily-wpt-fyi.yml at line 39, The workflow uses runs-on:
ubuntu-24.04-arm while setup-node is selecting nightlies filtered for linux-x64
(or node-version: "nightly"), causing ARM runners to attempt downloading x64
artifacts; fix by either (A) change runs-on from ubuntu-24.04-arm to a x64
runner (e.g., ubuntu-24.04) so linux-x64 nightlies exist, or (B) keep runs-on:
ubuntu-24.04-arm but switch the Node install step (actions/setup-node) to use a
published LTS (e.g., node-version: "lts/*") or an unofficial arm64 nightly
provider/explicit architecture mapping; locate the runner declaration runs-on:
ubuntu-24.04-arm and the actions/setup-node step (node-version / filter
settings) and apply one of these changes consistently.

Comment on lines +15 to +16
# cannot use ubuntu-slim here because rtCamp/action-slack-notify is dockerized
runs-on: ubuntu-24.04-arm

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's examine the workflow file to understand the context
cat -n .github/workflows/notify-on-review-wanted.yml

Repository: nodesource/nsolid

Length of output: 1733


🌐 Web query:

rtCamp/action-slack-notify Docker image arm64 linux/arm64 support manifest

💡 Result:

rtCamp/action-slack-notify on Docker Hub is amd64-only right now—there is no linux/arm64 image and therefore no multi-arch manifest list that includes arm64.

  • On the Docker Hub Tags page, each published tag (e.g., master, v2.1.0) shows OS/ARCH = linux/amd64 (and no linux/arm64). [1]
  • The published Dockerfile is also x86_64/amd64-specific (e.g., it downloads vault_${VAULT_VERSION}_linux_amd64.zip and creates a symlink to libc.musl-x86_64.so.1). [2]

So, pulling it on an ARM64 host will fail unless you force emulation (e.g., --platform=linux/amd64) and have qemu/binfmt set up—or you rebuild/publish a proper multi-arch image and manifest.

Sources: [1] [2]


ARM runner incompatible with amd64-only Docker action.

Line 16 switches to an ARM runner (ubuntu-24.04-arm), but rtCamp/action-slack-notify is built for amd64 only—there is no linux/arm64 image in the manifest. This job will fail at runtime on an ARM runner.

Suggested fix
-    runs-on: ubuntu-24.04-arm
+    runs-on: ubuntu-24.04
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/notify-on-review-wanted.yml around lines 15 - 16, The
workflow sets runs-on: ubuntu-24.04-arm which is incompatible with the
amd64-only Docker action rtCamp/action-slack-notify; change the runner to an
amd64 host (e.g., revert runs-on to ubuntu-24.04) or alternatively replace the
usage of rtCamp/action-slack-notify with an action that publishes multi-arch
images or run the notify step on an amd64-specific job (e.g., add a separate job
using runs-on: ubuntu-24.04 and call rtCamp/action-slack-notify there) so the
Docker image architecture matches the runner.

openssl-update:
if: github.repository == 'nodejs/node'
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-slim

⚠️ Potential issue | 🟠 Major

❓ Verification inconclusive

🏁 Script executed:

cat -n .github/workflows/update-openssl.yml | head -30

Repository: nodesource/nsolid

Length of output: 1733


🏁 Script executed:

# Get info about the GitHub-hosted runners
web_search
GitHub ubuntu-slim runner specifications disk space CPU memory limits

Repository: nodesource/nsolid

Length of output: 1733


🌐 Web query:

GitHub Actions ubuntu-slim runner specifications CPU memory disk timeout limits

💡 Result:

ubuntu-slim (GitHub-hosted runner) specs

  • CPU: 1 vCPU
  • Memory: 5 GB RAM
  • Disk: 14 GB SSD
  • Arch: x64
  • Notes: minimal preinstalled toolset vs standard Ubuntu images (ubuntu-24.04, ubuntu-22.04, ubuntu-latest). [1][2]

Timeouts / limits

  • ubuntu-slim job timeout: 15 minutes (hard limit for this single-CPU runner label). [1]
  • All standard GitHub-hosted runners job execution time limit: 6 hours (360 minutes) per job. [3]

Sources: [1] GitHub Docs – GitHub-hosted runners reference, [2] actions/runner-images repo, [3] GitHub Docs – Actions usage limits.


🌐 Web query:

GitHub Actions ubuntu-24.04 vs ubuntu-slim runner performance comparison

💡 Result:

What you’re really comparing

  • ubuntu-24.04 (same as ubuntu-latest on x64): a GitHub-hosted VM runner with 2 vCPU / 8 GB RAM / 14 GB SSD. [1]
  • ubuntu-slim: a smaller GitHub-hosted runner with 1 vCPU / 5 GB RAM / 14 GB SSD. Despite the name, it’s still Ubuntu 24.04.3 LTS—it’s “slim” mainly in resources + preinstalled software. [1][2]

Performance expectations (practical)

  • CPU-bound jobs (compiles, tests, compression, hashing): expect ~up to ~2× slower throughput on ubuntu-slim vs ubuntu-24.04 simply because it’s 1 vCPU vs 2 vCPU (and less RAM can further reduce parallelism/cache effectiveness). [1]
  • Memory-sensitive jobs (large builds, big dependency graphs, heavy containers): ubuntu-slim is more likely to hit memory pressure, slowing builds or causing OOMs versus ubuntu-24.04. [1]
  • Startup / provisioning time:
    • ubuntu-slim typically benefits when your workflow only needs a minimal toolchain, because it has a much smaller preinstalled software set than the full ubuntu-24.04 image. Compare the tool lists: full image includes many languages, browsers, Android tooling, etc., while slim includes a minimal subset. [2][3]

Biggest non-performance difference (often mistaken for “performance”)

  • Tooling availability:
    • Full ubuntu-24.04 image is heavily preloaded (e.g., multiple language versions, browsers/drivers, Android SDK/NDKs, etc.). [3]
    • ubuntu-slim ships with a smaller default set (Node/Python basics, some CLIs, Docker client tools, etc.), so you may spend extra time installing what you need—or you may save time by not carrying the extra image complexity. [2][3]

Recommendation

  • Use ubuntu-24.04 for build/test workloads that benefit from parallelism or need the rich preinstalled toolset. [1][3]
  • Use ubuntu-slim for lightweight CI tasks (linting, small unit tests, formatting, scripting) where 1 vCPU is enough and you don’t need the full image’s tooling. [1][2]

Sources: GitHub-hosted runner hardware specs [1], ubuntu-slim image details [2], ubuntu-24.04 image details [3].


Consider ubuntu-24.04 for this OpenSSL updater job; ubuntu-slim may not be optimal.

This workflow performs resource-intensive operations (download, apt install, regeneration). The ubuntu-slim runner (1 vCPU, 5 GB RAM) has significantly fewer resources than ubuntu-24.04 (2 vCPU, 8 GB RAM), and the job may experience slower execution or resource contention on the smaller runner. Since this is not a lightweight task, ubuntu-24.04 provides better headroom.

Suggested change
-    runs-on: ubuntu-slim
+    runs-on: ubuntu-24.04
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/update-openssl.yml at line 15, The workflow currently uses
the lightweight runner via the runs-on: ubuntu-slim setting which may lack
CPU/RAM for the OpenSSL updater job; change the runner to a standard Ubuntu
image by replacing the runs-on: ubuntu-slim value with runs-on: ubuntu-24.04 so
the job runs on a 2 vCPU/8 GB runner and has sufficient resources for downloads,
apt installs, and regen steps.

Comment on lines +121 to +123
const configType = typeof configs[key][0];
const configValue = configType === 'number' ? +value : configType === 'boolean' ? value === 'true' : value;
cliOptions[key].push(configValue);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

fd 'common.js$' benchmark --full-path

Repository: nodesource/nsolid

Length of output: 1733


Reject invalid boolean literals instead of silently coercing to false.

At lines 121–123, any value other than the string 'true' becomes false (e.g., typos like tru or True), silently corrupting benchmark configuration:

const configValue = configType === 'boolean' ? value === 'true' : value;
Proposed fix
        const configType = typeof configs[key][0];
-        const configValue = configType === 'number' ? +value : configType === 'boolean' ? value === 'true' : value;
+        let configValue;
+        if (configType === 'number') {
+          configValue = Number(value);
+          if (!Number.isFinite(configValue)) {
+            throw new TypeError(`invalid numeric value for "${key}": ${value}`);
+          }
+        } else if (configType === 'boolean') {
+          if (value !== 'true' && value !== 'false') {
+            throw new TypeError(`invalid boolean value for "${key}": ${value}`);
+          }
+          configValue = value === 'true';
+        } else {
+          configValue = value;
+        }
         cliOptions[key].push(configValue);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benchmark/common.js` around lines 121 - 123, The current boolean parsing in
the configs loop silently coerces any non-'true' string to false; update the
logic around configType/configValue so that when configType === 'boolean' you
only accept the exact strings 'true' or 'false' (map to true/false respectively)
and otherwise throw or return a clear error mentioning the option key and
invalid value (use the existing identifiers configs, cliOptions, configType,
configValue and the key variable to locate the code), then push the validated
boolean into cliOptions[key].

Comment on lines +42 to +50
inline bool is_shared_cache(const uint8_t* buffer, size_t size) {
LIEF::SpanStream strm(buffer, size);
return is_shared_cache(strm);
}

/// Check if the given buffer points to a dyld shared cache file
inline bool is_shared_cache(const std::vector<uint8_t>& buffer) {
return is_shared_cache(buffer.data(), buffer.size());
}

⚠️ Potential issue | 🟠 Major

Guard null raw buffers before creating SpanStream.

At line 42, buffer is passed to SpanStream unconditionally; a nullptr paired with a non-zero size should be rejected early.

🛠️ Proposed fix
 inline bool is_shared_cache(const uint8_t* buffer, size_t size) {
+  if (buffer == nullptr && size != 0) {
+    return false;
+  }
   LIEF::SpanStream strm(buffer, size);
   return is_shared_cache(strm);
 }
@@
 inline bool is_shared_cache(const std::vector<uint8_t>& buffer) {
+  if (buffer.empty()) {
+    return false;
+  }
   return is_shared_cache(buffer.data(), buffer.size());
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/DyldSharedCache/utils.hpp` around lines 42 - 50, The
function is_shared_cache(const uint8_t* buffer, size_t size) currently
constructs a LIEF::SpanStream with buffer unconditionally; add an early guard
that if buffer == nullptr and size > 0 (or alternatively if size != 0 && buffer
== nullptr) the function returns false (or another chosen error behavior) before
creating LIEF::SpanStream to avoid dereferencing a null pointer; update the
is_shared_cache(const std::vector<uint8_t>&) call site if needed to rely on the
guarded behavior.

Comment on lines +99 to +105
uint64_t raw_flags() const {
flags_list_t flags = this->flags();
return std::accumulate(flags.begin(), flags.end(), uint64_t(0),
[] (uint64_t value, FLAG f) {
return value + (uint64_t)f;
}
);

⚠️ Potential issue | 🟠 Major

raw_flags() reconstructs the value incorrectly.

Adding encoded flag values breaks bitfield reconstruction (especially for FLAGS_1, where BASE is included per flag).

🛠️ Proposed fix
   uint64_t raw_flags() const {
     flags_list_t flags = this->flags();
     return std::accumulate(flags.begin(), flags.end(), uint64_t(0),
       [] (uint64_t value, FLAG f) {
-        return value + (uint64_t)f;
+        const uint64_t encoded = static_cast<uint64_t>(f);
+        const uint64_t raw = encoded >= BASE ? (encoded - BASE) : encoded;
+        return value | raw;
       }
     );
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/ELF/DynamicEntryFlags.hpp` around lines 99 - 105,
raw_flags() currently sums flag enum values which corrupts bitfields (especially
FLAGS_1 where BASE is encoded per flag); change the reconstruction to bitwise OR
the raw flag bitmasks instead of adding, and when handling FLAGS_1 ensure you
mask out any per-flag BASE encoding so only the intended bit positions are OR'ed
together; update the lambda in raw_flags() to compute a uint64_t flag_value from
each FLAG (e.g., derive the true bitmask via masking or a helper that returns
the canonical bitmask) and combine with value using '|' rather than '+' so
flags_list_t, FLAG and FLAGS_1/BASE are handled correctly.


public:
static constexpr char delimiter = ':';
using DynamicEntry::DynamicEntry;

⚠️ Potential issue | 🟠 Major

Do not expose base constructors that can break RUNPATH invariants.

Line 34 allows constructing DynamicEntryRunPath through generic DynamicEntry constructors, which can produce instances with non-TAG::RUNPATH tags.

Suggested fix
-  using DynamicEntry::DynamicEntry;

Also applies to: 36-51

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/ELF/DynamicEntryRunPath.hpp` at line 34, The file
exposes base constructors via "using DynamicEntry::DynamicEntry;" which allows
creating DynamicEntryRunPath instances with wrong tags; remove that
using-declaration and instead provide explicit constructors for
DynamicEntryRunPath that either (a) accept only validated parameters or (b)
forward to the base constructor but assert/validate that the tag equals
TAG::RUNPATH (or throw/delete if not), and delete any inherited constructors
that would permit arbitrary TAG values; update both the default/parameterized
overloads (lines around the existing using and the block at 36-51) to enforce
TAG::RUNPATH so instances cannot be constructed with non-RUNPATH tags.

Comment on lines +44 to +46
void signo(uint32_t value);
void sigcode(uint32_t value);
void sigerrno(uint32_t value);

⚠️ Potential issue | 🟠 Major

Use signed mutator types to match SIGINFO accessors.

Lines 44-46 take uint32_t, but the accessors at lines 37-43 return int32_t. This prevents a safe round-trip for negative values (notably sigcode/sigerrno).

Suggested fix
-  void signo(uint32_t value);
-  void sigcode(uint32_t value);
-  void sigerrno(uint32_t value);
+  void signo(int32_t value);
+  void sigcode(int32_t value);
+  void sigerrno(int32_t value);

Also applies to: 37-43

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deps/LIEF/include/LIEF/ELF/NoteDetails/core/CoreSigInfo.hpp` around lines 44
- 46, The mutator signatures for signo, sigcode, and sigerrno currently take
uint32_t but the corresponding accessors use int32_t (lines 37-43), causing
unsafe round-trips for negative values; change the parameter types of void
signo(uint32_t value), void sigcode(uint32_t value), and void sigerrno(uint32_t
value) to use int32_t so they match the accessor types and preserve sign.

Comment on lines +915 to +935
The MIT Licence, for code from kryogenix.org

Code downloaded from the Browser Experiments section of kryogenix.org is
licenced under the so-called MIT licence. The licence is below.

Copyright (c) 1997-date Stuart Langridge

Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

⚠️ Potential issue | 🟠 Major

Add the missing MIT redistribution condition in the sorttable.js block.

The text jumps from “subject to the following conditions” (Line 927) directly to the disclaimer, but the MIT clause requiring notice inclusion is missing.

🛠️ Suggested fix
     Permission is hereby granted, free of charge, to any person obtaining a
     copy of this software and associated documentation files (the "Software"),
     to deal in the Software without restriction, including without limitation
     the rights to use, copy, modify, merge, publish, distribute, sublicense,
     and/or sell copies of the Software, and to permit persons to whom the
     Software is furnished to do so, subject to the following conditions:
+
+    The above copyright notice and this permission notice shall be included in
+    all copies or substantial portions of the Software.
 
     THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@LICENSE` around lines 915 - 935, Insert the missing MIT redistribution
condition into the sorttable.js license block: locate the "The MIT Licence, for
code from kryogenix.org" / "subject to the following conditions:" paragraph and
add the standard clause beginning "The above copyright notice and this
permission notice shall be included in all copies or substantial portions of the
Software." immediately after "subject to the following conditions:" so the
notice is required before the warranty disclaimer.
