
chore: v1 branch #1656

Draft

SwenSchaeferjohann wants to merge 472 commits into v1-c8c0ea2e6 from main

Conversation

@SwenSchaeferjohann
Contributor

No description provided.

@github-advanced-security
Contributor

This pull request sets up GitHub code scanning for this repository. Once the scans have completed and the checks have passed, the analysis results for this pull request branch will appear on this overview. Once you merge this pull request, the 'Security' tab will show more code scanning analysis results (for example, for the default branch). Depending on your configuration and choice of analysis tool, future pull requests will be annotated with code scanning analysis results. For more information about GitHub code scanning, check out the documentation.

@coderabbitai
Contributor

coderabbitai bot commented Jun 11, 2025

Important

Review skipped

Auto reviews are limited based on label configuration.

🏷️ Required labels (at least one)
  • ai-review

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Comment on lines 32 to 100
name: system-programs
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
timeout-minutes: 60

services:
  redis:
    image: redis:8.0.1
    ports:
      - 6379:6379
    options: >-
      --health-cmd "redis-cli ping"
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5

env:
  REDIS_URL: redis://localhost:6379

strategy:
  matrix:
    include:
      - program: sdk-test-program
        sub-tests: '["cargo-test-sbf -p sdk-native-test"]'
      - program: sdk-anchor-test-program
        sub-tests: '["cargo-test-sbf -p sdk-anchor-test", "cargo-test-sbf -p sdk-pinocchio-test"]'
      - program: sdk-libs
        packages: light-macros light-sdk light-program-test light-client light-batched-merkle-tree
        test_cmd: |
          cargo test -p light-macros
          cargo test -p light-sdk
          cargo test -p light-program-test
          cargo test -p light-client
          cargo test -p client-test
          cargo test -p light-sparse-merkle-tree
          cargo test -p light-batched-merkle-tree --features test-only -- --skip test_simulate_transactions --skip test_e2e

steps:
  - name: Checkout sources
    uses: actions/checkout@v4

  - name: Setup and build
    uses: ./.github/actions/setup-and-build
    with:
      skip-components: "redis"

  - name: build-programs
    run: |
      source ./scripts/devenv.sh
      npx nx build @lightprotocol/programs

  - name: Run sub-tests for ${{ matrix.program }}
    if: matrix.sub-tests != null
    run: |
      source ./scripts/devenv.sh
      npx nx build @lightprotocol/zk-compression-cli

      IFS=',' read -r -a sub_tests <<< "${{ join(fromJSON(matrix.sub-tests), ', ') }}"
      for subtest in "${sub_tests[@]}"
      do
        echo "$subtest"
        eval "RUSTFLAGS=\"-D warnings\" $subtest"
      done

  - name: Run tests for ${{ matrix.program }}
    if: matrix.test_cmd != null
    run: |
      source ./scripts/devenv.sh
      npx nx build @lightprotocol/zk-compression-cli
      ${{ matrix.test_cmd }}

Check warning

Code scanning / CodeQL

Workflow does not contain permissions Medium

Actions job or workflow does not limit the permissions of the GITHUB_TOKEN. Consider setting an explicit permissions block, using the following as a minimal starting point: {contents: read}

Copilot Autofix (AI, 22 days ago)

In general, fix this by adding an explicit permissions block that grants only the minimal required scopes for GITHUB_TOKEN. For a test-only workflow that just checks out code and runs local commands, contents: read is sufficient.

For this specific file, the simplest, non-functional-changing fix is to add a workflow-level permissions block (so it applies to all current and future jobs) right after the name: examples-tests line. Set it to:

permissions:
  contents: read

This grants only read access to repository contents, which is enough for actions/checkout@v6 and the subsequent test steps. No other scopes (like pull-requests or issues) are required by anything shown in the snippet, and no other code changes or imports are necessary.

Suggested changeset 1
.github/workflows/sdk-tests.yml

Autofix patch

Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/.github/workflows/sdk-tests.yml b/.github/workflows/sdk-tests.yml
--- a/.github/workflows/sdk-tests.yml
+++ b/.github/workflows/sdk-tests.yml
@@ -19,6 +19,9 @@
 
 name: examples-tests
 
+permissions:
+  contents: read
+
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
EOF
Copilot is powered by AI and may make mistakes. Always verify output.
ananas-block and others added 9 commits October 7, 2025 19:28
* fix: compressed token es module generation

* fix
* chore: add address merkle tree pubkey print to light program test output

* feat: statelessjs add PackedAccounts v1 and v2

* get address tree pubkey

* fix feedback

* fix: ts sdk anchor test

* revert: anchor dev dep bump

* build anchor sdk test program for ci
* fix release

* fix: use cargo publish instead of cargo release

- Replace 'cargo release publish' with 'cargo publish' in validate-packages.sh
- cargo-release was removed in commit c7227ba
- Revert workflow to use PR event data (not hardcoded commits)
* refactor: light program test make anchor programs optional deps

* feat: LightAccount read only support

* fix: add test serial
* chore: remove duplicate program builds

* chore: cli ci mode

* next try

* cleanup

* refactor: caching

* fix solana cache

* remove cli build

* enable cli again

* revert toolchain

* fix clean checkout

* remove manual redis setup

* disable nx cache security

* chore: add multiple prover keys caches

* fix test cli ci

* remove duplicate address program build

* split up program tests more evenly

* add js ci nx commands
Comment on lines 22 to 96
name: stateless-js-v1
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest

services:
  redis:
    image: redis:8.0.1
    ports:
      - 6379:6379
    options: >-
      --health-cmd "redis-cli ping"
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5

env:
  LIGHT_PROTOCOL_VERSION: V1
  REDIS_URL: redis://localhost:6379
  CI: true

steps:
  - name: Checkout sources
    uses: actions/checkout@v4

  - name: Setup and build
    uses: ./.github/actions/setup-and-build
    with:
      skip-components: "redis,disk-cleanup"
      cache-suffix: "js"

  - name: Build stateless.js with V1
    run: |
      cd js/stateless.js
      pnpm build:v1

  - name: Build compressed-token with V1
    run: |
      source ./scripts/devenv.sh
      cd js/compressed-token
      pnpm build:v1

  # Comment for breaking changes to Photon
  - name: Build CLI (CI mode - Linux x64 only)
    run: |
      source ./scripts/devenv.sh
      npx nx build-ci @lightprotocol/zk-compression-cli

  - name: Run stateless.js tests with V1
    run: |
      source ./scripts/devenv.sh
      echo "Running stateless.js tests with retry logic (max 2 attempts)..."
      attempt=1
      max_attempts=2
      until npx nx test-ci @lightprotocol/stateless.js; do
        attempt=$((attempt + 1))
        if [ $attempt -gt $max_attempts ]; then
          echo "Tests failed after $max_attempts attempts"
          exit 1
        fi
        echo "Attempt $attempt/$max_attempts failed, retrying..."
        sleep 5
      done
      echo "Tests passed on attempt $attempt"

  - name: Run compressed-token tests with V1
    run: |
      source ./scripts/devenv.sh
      echo "Running compressed-token tests with retry logic (max 2 attempts)..."
      attempt=1
      max_attempts=2
      until npx nx test-ci @lightprotocol/compressed-token; do
        attempt=$((attempt + 1))
        if [ $attempt -gt $max_attempts ]; then
          echo "Tests failed after $max_attempts attempts"
          exit 1
        fi
        echo "Attempt $attempt/$max_attempts failed, retrying..."
        sleep 5
      done
      echo "Tests passed on attempt $attempt"

Check warning

Code scanning / CodeQL

Workflow does not contain permissions Medium

Actions job or workflow does not limit the permissions of the GITHUB_TOKEN. Consider setting an explicit permissions block, using the following as a minimal starting point: {contents: read}

Copilot Autofix (AI, 22 days ago)

To fix the problem, explicitly define minimal GITHUB_TOKEN permissions for this workflow or for the specific job. Since the workflow only checks out code, builds, and runs tests, it normally only needs read access to repository contents. We can add a permissions block at the root of the workflow so it applies to all jobs (there is only one job in the snippet). This will ensure that even if the repository or org default is read-write, the workflow will only have contents: read.

Concretely, in .github/workflows/js.yml, add:

permissions:
  contents: read

between the name: js-tests-v1 section and the concurrency: block (lines 14–16 in the snippet). This does not change any existing behavior of steps, but constrains the token according to the principle of least privilege. No additional methods, definitions, or imports are needed.

Suggested changeset 1
.github/workflows/js.yml

Autofix patch

Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/.github/workflows/js.yml b/.github/workflows/js.yml
--- a/.github/workflows/js.yml
+++ b/.github/workflows/js.yml
@@ -13,6 +13,9 @@
 
 name: js-tests-v1
 
+permissions:
+  contents: read
+
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
EOF
sergeytimoshin and others added 11 commits October 10, 2025 18:47
* feat: move proving keys management into cli

* keys downloader

* refactor: integrate dynamic proving key management

* fix: update artifact path in prover-test workflow

* fix: update artifact path in prover-test workflow

* fix: add read permissions to prover-test workflow

* fix: update Go setup and add linux/arm64 support in prover-release workflow

* refactor: remove buildProver script and integrate dynamic prover binary management in CLI

* feat: add utility to download prover binary

* fix: update getProverNameByArch

* fix: correct getProverNameByArch implementation

* fix: add missing commas in downloadProverBinary and processProverServer

* feat: lazy prover

# Conflicts:
#	sdk-libs/program-test/src/program_test/config.rs

* update prover version to 1.0.1

* fix

* remove prover configuration from local test validator

* feat: implement thread-safe checksum cache management

* fix

* preload keys for lazy test

* preload test keys for lazy test

* preload test keys for lazy test

* fix

* fix

* increase default max wait time to 900 seconds

* start queue workers for redis queue

* bump prover version to 1.0.2

* restore download_keys.sh

* chmod +x download_keys.sh

* remove ProverConfig

* remove build-ci & test-ci workflow cmds

* use go key downloader

* refactor key path construction to use filepath

* add error handling for directory change in download-gnark-keys.sh

* add redirect handling with limit in downloadFile function

* add error handling for gnark keys download in download-gnark-keys.sh

* update preload-keys

* bump prover version

* remove unused run mode handling

* bump prover version
- Remove proving keys caching and download from setup-and-build action
- Remove @lightprotocol/programs from devDependencies in stateless.js, compressed-token, and CLI packages

This simplifies the CI workflow and removes unnecessary build dependencies.
* refactor: feature gate poseidon

* chore: remove proving keys and programs dependency from CI

- Remove proving keys caching and download from setup-and-build action
- Remove @lightprotocol/programs from devDependencies in stateless.js, compressed-token, and CLI packages

This simplifies the CI workflow and removes unnecessary build dependencies.

* fix tests

* fix tests add sdk compressed account poseidon import
* feat: enhance account-checks with pubkey validation and improved error reporting

- Add next_checked_pubkey method for pubkey validation with detailed errors
- Add print_on_error_pubkey helper for pubkey mismatch debugging
- Add PartialEq bound to AccountInfoTrait::Pubkey for comparisons
- Add InvalidAccount error variant for account validation failures
- Add discriminator logging in check_discriminator for debugging
- Update test_account_info to match pinocchio 0.9 Account struct layout
- Update tests with correct discriminator hash

* fix: update test_account_info for pinocchio 0.9 borrow state

* feat: add docs

* docs: fix error table and hex codes in account-checks docs

- Add missing FailedBorrowRentSysvar (12014) to error table
- Fix hex code conversions: 12006=0x2EE6, 12020=0x2EF4, 12021=0x2EF5
- Add language identifier 'text' to code fence in log examples

Addresses CodeRabbit review feedback.

* refactor: migrate account-checks error codes to 20000 range

Migrated all AccountError codes from 12006-12021 to 20000-20015:
- InvalidDiscriminator: 12006 → 20000 (0x4E20)
- AccountOwnedByWrongProgram: 12007 → 20001
- AccountNotMutable: 12008 → 20002
- BorrowAccountDataFailed: 12009 → 20003
- InvalidAccountSize: 12010 → 20004
- AccountMutable: 12011 → 20005
- AlreadyInitialized: 12012 → 20006
- InvalidAccountBalance: 12013 → 20007
- FailedBorrowRentSysvar: 12014 → 20008
- InvalidSigner: 12015 → 20009
- InvalidSeeds: 12016 → 20010
- InvalidProgramId: 12017 → 20011
- ProgramNotExecutable: 12018 → 20012
- AccountNotZeroed: 12019 → 20013
- NotEnoughAccountKeys: 12020 → 20014 (0x4E2E)
- InvalidAccount: 12021 → 20015 (0x4E2F)

Updated all documentation with new error codes and hex values.
All tests passing.
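
The decimal-to-hex mapping above can be sanity-checked with a short sketch. The three codes with hex values in the list are modeled here; the enum shape is illustrative, not the crate's actual definition:

```rust
// Sketch: the new AccountError range starts at 20000 (0x4E20).
#[repr(u32)]
#[derive(Debug, Clone, Copy)]
enum AccountError {
    InvalidDiscriminator = 20000, // 0x4E20
    NotEnoughAccountKeys = 20014, // 0x4E2E
    InvalidAccount = 20015,       // 0x4E2F
}

fn main() {
    // "{:#06X}" = "0x" prefix plus uppercase hex, 6 chars total.
    assert_eq!(format!("{:#06X}", AccountError::InvalidDiscriminator as u32), "0x4E20");
    assert_eq!(format!("{:#06X}", AccountError::NotEnoughAccountKeys as u32), "0x4E2E");
    assert_eq!(format!("{:#06X}", AccountError::InvalidAccount as u32), "0x4E2F");
    println!("hex codes match");
}
```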
* feat: add version command and update Dockerfile to use Go 1.25

* feat: update prover version to 2.0.0
* chore: unify rust caches

* fix go warning
* fix: prevent compressible account funding with 1 epoch

* removed results from toplevel test functions

* add ctoken create account tests

* add close account tests

* test: add ctoken transfer tests

* add create ata tests

* restore program ci

* add compress and close tests

* fix: spl instruction compatibility

* add rent constant

* fix failing tests asserts

* add mint duplicates check

* compressible add tests and overflow guards

* refactor: use array map and tinyvec instead of arrayVec

* refactor: unify output compressed indices into one

* refactor: ctoken instruction discriminators
* refactor: compressed-account nostd

* refactor: light-hasher no-std

* refactor: account checks nostd

* refactor: make light-sdk-types no_std compatible with alloc feature

- Add no_std attribute with std/alloc feature flags
- Replace std::result types with core::result equivalents
- Add conditional Vec imports with #[cfg(feature = "alloc")]
- Add keccak feature for optional v1 address derivation
- Gate solana_msg::msg! logging with #[cfg(feature = "std")]
- Remove unused light-zero-copy dependency
- Maintain backward compatibility with default = ["std"]
- All tests passing, dependent crates build successfully

Three supported modes:
- no_std without alloc: Pure no_std (Vec unavailable)
- no_std with alloc: For BPF programs with allocator
- std mode (default): Standard library with full compatibility

Part of ongoing no_std refactoring for Solana program compatibility.

* refactor: sdk types, sdk pinocchio nostd

* fix ci

* fix feedback

* fix tests

* fix feature compilations

* fix errors

* fix test

* test: nostd integration test
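
The three supported modes (no_std, no_std + alloc, std) described in this commit can be gated at the crate root roughly as follows. This is a sketch following standard Rust no_std conventions, not the actual crate layout:

```rust
// lib.rs sketch: std by default, no_std with an optional alloc feature.
#![cfg_attr(not(feature = "std"), no_std)]

#[cfg(all(feature = "alloc", not(feature = "std")))]
extern crate alloc;

// Vec is only available when `alloc` or `std` is enabled.
#[cfg(all(feature = "alloc", not(feature = "std")))]
use alloc::vec::Vec;
#[cfg(feature = "std")]
use std::vec::Vec;

// core::result works in all three modes, replacing std::result.
pub type Result<T> = core::result::Result<T, u32>;

#[cfg(any(feature = "alloc", feature = "std"))]
pub fn collect_bytes(n: u8) -> Vec<u8> {
    (0..n).collect()
}
```

With `default = ["std"]` in Cargo.toml, downstream crates keep full std compatibility unless they opt into `default-features = false`.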
* fix: test indexer return full Merkle proof, feat: light-sdk add merkle tree feature
ananas-block and others added 19 commits February 11, 2026 20:32
* fix: enforce canonical bump in PDA verification

Audit issue #15 (HIGH): verify_pda used derive_address which accepts
any bump seed, allowing non-canonical bumps for ATAs. Switch to
find_program_address to derive the canonical bump and reject any
non-canonical bump with InvalidSeeds error.

* fix: use pinocchio::pubkey::find_program_address instead of pinocchio_pubkey

* fix: remove bump from ATA instruction data and derive canonical bump on-chain

Remove client-provided bump from CreateAssociatedTokenAccountInstructionData
and all SDK/test callers. The on-chain program now derives the canonical bump
via find_program_address, preventing non-canonical bump attacks (audit #15).

- Remove bump field from instruction data structs
- Update verify_pda to derive canonical bump and return it
- Update validate_ata_derivation and decompress_mint callers
- Remove _with_bump SDK variants and ATA2 dead code
- Remove associated_token::bump from macro attribute support
- Update derive_associated_token_account to return Pubkey only
- Update all 100+ call sites across SDK, tests, and TypeScript

* fix: update wrong bump test for canonical bump derivation

With canonical bumps, the program derives the bump internally so
providing a wrong bump is no longer possible. Replace with a test
that passes a wrong ATA address to verify PDA validation.

* fix test

* fix lint
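
The check described above can be sketched in pure Rust. The real code uses pinocchio's `find_program_address`; here its result is passed in, and all names are illustrative:

```rust
// Sketch: accept only the canonical derivation. `find_program_address`
// (assumed) returns the canonical (address, bump); any client-supplied
// bump is ignored, so non-canonical bumps can no longer validate.
fn verify_pda(
    expected: &[u8; 32],
    canonical: ([u8; 32], u8), // result of find_program_address(seeds, program_id)
) -> Result<u8, &'static str> {
    let (address, bump) = canonical;
    if &address == expected {
        Ok(bump) // hand the derived canonical bump back to the caller
    } else {
        Err("InvalidSeeds")
    }
}

fn main() {
    let canonical = ([7u8; 32], 254);
    assert_eq!(verify_pda(&[7u8; 32], canonical), Ok(254));
    assert!(verify_pda(&[9u8; 32], canonical).is_err());
    println!("ok");
}
```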
…#2265)

* fix: interpret max_top_up as units of 1,000 lamports (L-07)

max_top_up is u16 (max 65,535). As raw lamports this only allows
~65,535 lamports (~0.0000655 SOL), which is insufficient for many use
cases. Interpreting it as units of 1,000 lamports raises the maximum to
~65.5M lamports (~0.0655 SOL), covering realistic rent top-up scenarios.

* address comments
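
Under the units-of-1,000 interpretation, the effective cap works out as in this sketch (constant and function names are illustrative):

```rust
// Sketch: max_top_up is wire-encoded as a u16 count of 1,000-lamport units.
const LAMPORTS_PER_TOP_UP_UNIT: u64 = 1_000;

fn effective_max_top_up_lamports(max_top_up: u16) -> u64 {
    max_top_up as u64 * LAMPORTS_PER_TOP_UP_UNIT
}

fn main() {
    // u16::MAX units => 65,535,000 lamports (~0.0655 SOL at 1e9 lamports/SOL).
    assert_eq!(effective_max_top_up_lamports(u16::MAX), 65_535_000);
    assert_eq!(effective_max_top_up_lamports(1), 1_000);
    println!("ok");
}
```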
* fix: validate mint for all token accounts, not just compressible

Audit issue #7 (MEDIUM): is_valid_mint was only called inside
configure_compression_info, so non-compressible token accounts
could be initialized with an invalid mint. Move validation to
initialize_ctoken_account so it runs for all account types.

* fix: read mint data once, pass decimals to configure_compression_info

* fix: tests

* fix tests

* fix tests

* fix tests

* fix js tests
* chore: add token_pool test to CI and fix InvalidMint error expectation

- Add test-compressed-token-token-pool to justfile CI targets
- Fix failing_tests_add_token_pool to expect InvalidMint error instead of
  ConstraintSeeds (restricted_seed() parses mint before PDA check)

* fix: enforce extension state checks for SPL compress (H-01 follow-up)

Add extension state enforcement (paused, non-zero fees, non-nil hook)
for SPL Token-2022 compress operations. Previously, SPL compress could
bypass these checks, allowing an attacker to:
1. SPL Compress 10K with transfer fee mint (pool receives 9.9K)
2. SPL Decompress 10K (pool sends 10K)
3. Profit from the fee difference, draining pool funds

Fix follows the same pattern as H-01 (PR #2246) - enforcement at the
processing point in process_token_compression(), not in cache building.

- Add enforce_extension_state() call for Compress mode in Token-2022 branch
- Update test_spl_to_ctoken_fails_when_mint_paused to expect error 6127
  (MintPaused from Light Token program) instead of 67 (SPL Token-2022)

* fix lint
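
The pool-drain arithmetic from the scenario above, with a 1% transfer fee assumed for concreteness (the actual fee depends on the mint's extension config):

```rust
// Sketch of the H-01 follow-up drain: compress is charged the transfer
// fee, decompress pays out the full amount, so each round trip drains
// the pool by the fee.
fn main() {
    let compress_amount: u64 = 10_000;
    let fee = compress_amount / 100;            // 100 tokens withheld by the mint (1% assumed)
    let pool_received = compress_amount - fee;  // pool credits 9_900
    let pool_sent: u64 = 10_000;                // decompress pays out the full 10_000
    let pool_loss = pool_sent - pool_received;  // attacker profit per round trip
    assert_eq!(pool_loss, 100);
    println!("pool loses {pool_loss} tokens per round trip");
}
```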
* fix: store_data may cache incorrect owner

* fix test

* fix: add owner comparison to new_addresses_eq and test owner fix path

The new_addresses_eq helper was missing owner() comparison, which is
critical since this PR fixes store_data caching the incorrect owner.
Add unit tests with non-empty new_address_params to verify store_data
sets owner to invoking_program on first and subsequent invocations.
* refactor: max top up to be u16::MAX

* fix: correct stale max_top_up doc comments

Since Some(0) is now meaningful (no top-ups allowed), the doc comments
saying "non-zero value" were misleading. Updated SDK structs to say
"When set (Some)" and TRANSFER_CHECKED.md to specify [1, u16::MAX-1].
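
The Some(0)-is-meaningful semantics can be modeled as follows. This is an illustrative interpretation (None meaning top-ups disabled matches the readonly-owner default described elsewhere in this PR), not the program's actual code:

```rust
// Sketch: None    -> top-ups disabled (owner stays readonly),
//         Some(0) -> explicitly no top-ups allowed,
//         Some(n) -> cap of n, n in [1, u16::MAX - 1].
fn top_up_permitted(max_top_up: Option<u16>, requested: u64) -> bool {
    match max_top_up {
        None | Some(0) => requested == 0,
        Some(cap) => requested <= cap as u64,
    }
}

fn main() {
    assert!(!top_up_permitted(Some(0), 1));   // Some(0) rejects any top-up
    assert!(top_up_permitted(Some(500), 500));
    assert!(!top_up_permitted(Some(500), 501));
    assert!(top_up_permitted(None, 0));
    println!("ok");
}
```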
* chore: check compress only is applied correctly

* restore: has delegate check
…oken account check, add additional create ata idempotent check (#2292)

* fix: add rent-exemption and mint/owner checks for token account creation

- create.rs: verify non-compressible token account is rent-exempt before
  initializing (audit issue #8)
- create_ata.rs: in idempotent mode, deserialize existing account and
  verify mint and owner fields match expected values (audit issue #4)

Entire-Checkpoint: caaa14ac3051

* fix:

Entire-Checkpoint: caaa14ac3051

* add tests and format

Entire-Checkpoint: 2b4028368dbf

* chore: restore photon submodule to match main
Entire-Checkpoint: c298aaf24a18

PR #2279 fixed mint-to but missed 7 other instruction builders that
hardcoded maxTopUp: 0 (no top-ups allowed). The on-chain program
rejects any rent top-up with MaxTopUpExceeded (0x467b) when maxTopUp
is 0. Set to 65535 (u16::MAX = no limit) to match the Rust SDK default.

Files changed:
- wrap.ts, unwrap.ts, create-decompress-interface-instruction.ts
- create-mint.ts, decompress-mint.ts, mint-to-compressed.ts
- update-mint.ts, update-metadata.ts

Co-authored-by: tilo-14 <tilo@luminouslabs.com>
…pi LightToLight path (#2294)

TransferInterfaceCpi hardcoded fee_payer: None for LightToLight transfers,
causing PrivilegeEscalation when the on-chain program attempted rent top-ups
using the readonly authority account. Pass self.payer as fee_payer instead,
since payer is already writable.

token-sdk: set fee_payer: Some(self.payer) in TransferInterface::instruction()
and add system_program + payer to account_infos in invoke()/invoke_signed().

token-pinocchio: set fee_payer: Some(self.payer) in TransferCpi construction
for both invoke() and invoke_signed(). TransferCpi already handles fee_payer
in its account_infos internally.

Co-authored-by: tilo-14 <tilo@luminouslabs.com>
- Restore typedoc.stateless.json accidentally deleted in PR #2065
- Add compressed-token build step before typedoc generation
- Add workflow_dispatch for manual triggers
- Add branch push triggers (main, swen/pub-beta-cov) with path filter on js/*/src/**

Co-authored-by: tilo-14 <tilo@luminouslabs.com>
Entire-Checkpoint: b50f5875b462
* fix: add delegate to packed accounts in decompress instruction, version-aware proof chunking

wip

fixes

fixes

fix ci

test fixes

fix and lint

upd

fix ci

rev

fix

order

exporting

fix regr

wip

fixes

bump versions

decompress mint at create

fix ci

bump versions

rm md

js(compressed-token): MAX_TOP_UP constant and optional maxTopUp override

- Add MAX_TOP_UP (65535) in constants.ts; use in all instruction builders
- mintTo action: default maxTopUp to MAX_TOP_UP when omitted
- wrap/unwrap: optional maxTopUp on instruction and action
- decompressMint: maxTopUp in DecompressMintParams and DecompressMintInstructionParams
- createDecompressInterfaceInstruction, createMintInstruction,
  createMintToCompressedInstruction, update-mint, update-metadata: optional maxTopUp
- Non-breaking: all new params optional, default no cap

Co-authored-by: Cursor <cursoragent@cursor.com>

unskip tests

changelog.md

* fix mc
ananas-block and others added 2 commits February 18, 2026 23:35
* fix: address creation
Entire-Checkpoint: e8de3c68866c

* fix feature gate

* fmt tests

Entire-Checkpoint: 5a3de7e68923

Rename terminology in JSDoc, comments, and error strings:
- c-token → light-token
- cmint/CMint → light mint
- compressed mint → light mint (where referring to the type)
- compressed/decompressed (state) → preserved as-is
- ATA → associated token account (expanded)
- via pool → via interface PDA
- hot/cold terminology for token accounts and mints

Co-authored-by: tilo-14 <tilo@luminouslabs.com>
* refactor: light account creation to generic function

* refactor: migrate manual LightPreInit impls to create_accounts()

Replace manual CPI orchestration in pda, account_loader, and two_mints
test modules with the unified create_accounts() generic function,
reducing boilerplate across both pinocchio and anchor manual tests.

* refactor: migrate pinocchio-light-program-test processors to create_accounts()

Replace manual CPI orchestration with the unified create_accounts()
function across all 5 migratable processors (pda, account_loader,
mint, two_mints, all).

* fix: guard against u8 truncation in create_accounts() const generics

Fail fast if PDAS or MINTS exceed u8::MAX, preventing silent wrapping
in downstream `as u8` casts for cpi_context_offset and account indices.
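
The truncation guard can be sketched as a fallible narrowing helper (the real code enforces this on const generic parameters; names here are illustrative):

```rust
// Sketch: reject counts that would silently wrap in a later `as u8` cast,
// e.g. 256 as u8 == 0.
fn checked_index_count(n: usize) -> Result<u8, &'static str> {
    u8::try_from(n).map_err(|_| "PDAS/MINTS exceed u8::MAX; would wrap in `as u8` cast")
}

fn main() {
    assert_eq!(checked_index_count(3), Ok(3));
    assert!(checked_index_count(256).is_err()); // 256 as u8 would wrap to 0
    println!("ok");
}
```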

* fix minor issues & cleanup

Entire-Checkpoint: 1da6fecc501a

* fix comments

Entire-Checkpoint: c788d31d848b

* add light-account feature

Entire-Checkpoint: 7bbee35ce40f

* fix ci

* fix: expose idempotent flag in associated_token:: macro
Entire-Checkpoint: 5a7704933620

* fix lint
* feat: pinocchio account add custom discriminator, add 1 byte discriminator compress decompress test

* feat: add 1 byte discriminator account to stress test

* randomize tests and format

* address feedback

* test: discriminators with 2-7 bytes

* feat: add discriminator compile time collision detection

* fix doc comment
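
One common way to get compile-time collision detection in Rust is a const assertion; a sketch under that assumption (the discriminator values and names are illustrative, not the SDK's macro output):

```rust
// Sketch: a colliding pair of discriminators fails the build, because
// the assertion is evaluated at compile time.
const ACCOUNT_A_DISC: [u8; 1] = [1];
const ACCOUNT_B_DISC: [u8; 1] = [2];

const _: () = assert!(
    ACCOUNT_A_DISC[0] != ACCOUNT_B_DISC[0],
    "discriminator collision between AccountA and AccountB"
);

fn main() {
    println!(
        "discriminators {} and {} are distinct",
        ACCOUNT_A_DISC[0], ACCOUNT_B_DISC[0]
    );
}
```

A derive macro can emit one such `const _` per pair of registered accounts, so collisions surface as build errors rather than runtime misrouting.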
ananas-block and others added 4 commits February 20, 2026 18:48
* refactor: add opt fee payer to revoke and approve instructions

* fix: comment
* Add files via upload

* Rename certora_2026-02_light-token-2-extensions.pdf.pdf to certora_2026-02_light-token-2-extensions.pdf
…inocchio (#2301)

* fix(token-sdk, token-pinocchio): make authority/owner readonly in close, approve, revoke

close: owner changed from writable to readonly — on-chain uses next_signer
(signer-only check), never writes to owner account.

approve/revoke: added max_top_up: Option<u16> field. Owner is now readonly
by default (max_top_up: None) and only writable when max_top_up is Some,
matching the existing pattern in transfer and burn. This prevents privilege
escalation failures when calling programs pass authority as read-only.

* fix(token-sdk): return ProgramError from get_token_account_balance

Delegate to Token::amount_from_account_info and return ProgramError
instead of TokenSdkError so callers can use ? directly in Anchor
contexts without redundant .map_err().

* fix(tests): add max_top_up field to Approve and Revoke in compressed-token-test

* before reverting pro

* rm max top up

* rm max top up

* fix(tests): mark PDA authority writable in invoke_signed tests

Authority must be writable when no fee_payer is set since it pays
for compressible account rent top-ups.

* fix: restore accidentally deleted anchor build artifacts

* fix

* fix(tests): restore max_top_up exceeded tests for approve and revoke

Restore the raw builder test pattern that verifies the on-chain
MaxTopUpExceeded error path. These tests use the SDK instruction
builder, then append max_top_up bytes directly to test the on-chain
limit — same pattern used for Transfer.

* fix(token-sdk): use short wire format to match pinocchio (no max_top_up bytes)

* style: fix formatting in test files

* fix(token-sdk): mandatory fee_payer, authority always readonly, builder max_top_up

Redesign all token instruction structs so that fee_payer is a mandatory
field and authority/owner is always readonly (fee_payer pays for top-ups
instead). Remove max_top_up from struct fields and add .with_max_top_up()
builder pattern that appends 2 bytes to the wire format.

Exception: Approve/Revoke keep owner writable (on-chain doesn't support
fee_payer yet), but the fee_payer field exists for API consistency.

Update all CPI structs (solana AccountInfo + pinocchio) accordingly,
and fix invoke_signed test programs to pass a separate fee_payer account
since PDA authority != transaction fee payer.

* before ci

* style: fix formatting in sdk test files

* fix(sdk-test): swap fee_payer and program accounts in transfer test

fee_payer must be at index 4 to match TransferCpi handler field order.
LIGHT_TOKEN_PROGRAM_ID at index 5 is still resolved by the CPI runtime.

* fix(token-sdk): make owner/authority readonly in approve, revoke, close; align transfer_checked

- token-sdk approve/revoke: owner AccountMeta new -> new_readonly (readonly signer)
- token-pinocchio approve/revoke/close: owner writable_signer -> readonly_signer
- token-pinocchio transfer_checked: remove optional fee_payer, make mandatory; authority always readonly_signer; use Pubkey::default() sentinel for system_program in account_metas
- token-client revoke: add owner-mismatch guard in execute_with_owner matching approve

* test(sdk-pinocchio): add invoke_with_fee_payer handlers and tests

Add separate fee_payer support (non-PDA authority) to the pinocchio test
program and integration tests:

- src: add process_*_invoke_with_fee_payer handlers for approve, revoke,
  transfer, burn, ctoken_mint_to; add InstructionType variants 36-40
  and dispatch in lib.rs
- tests: fix invoke_signed tests to include fee_payer account; add
  *_invoke_with_separate_fee_payer tests for all five operations
  demonstrating authority != fee_payer separation
- pda_owner accounts in invoke_signed tests changed to new_readonly

* fix(sdk-pinocchio-test): add fee_payer to transfer_checked tests

The TransferCheckedCpi handler was updated to require a separate
fee_payer at accounts[5], but the three transfer_checked tests still
passed light_token_program there, causing PrivilegeEscalation.

Add payer as writable fee_payer at [5] and move light_token_program
to [6] in all three test_ctoken_transfer_checked_* tests.

* test(sdk-test): add invoke_with_fee_payer handlers and tests

- Fix approve/revoke invoke_signed: add separate fee_payer at accounts[5]/[4]
- Add process_{transfer,burn,mint_to,approve,revoke}_invoke_with_fee_payer handlers
- Add discriminators 36-40 to InstructionType enum and dispatch
- Add test_{transfer,burn,mint_to,approve,revoke}_invoke_with_separate_fee_payer tests

* fmt

* align change

* Switch TransferInterface LightToLight from Transfer to TransferChecked

Add top-level mint field to TransferInterface/TransferInterfaceCpi so
LightToLight path uses TransferChecked (disc 12, validates decimals)
instead of plain Transfer (disc 3). Require mint account in test
handlers (min accounts 7→8) and pass it in LightToLight test callsites.

* add macro

* fix: make PDA authority readonly, regenerate compressed-token-sdk README

* fix: pinocchio sdk inconsistencies

* docs: revert compressed-token-sdk README to compressed token terminology, remove broken doc links

* docs: clarify light token account loading description

* fix readme

* fix broken links

---------

Co-authored-by: tilo-14 <tilo@luminouslabs.com>
Co-authored-by: ananas <jorrit@lightprotocol.com>
* fix: update account data handling to strip discriminator prefix and include discriminator length

* chore: update photon subproject and refactor account data handling to remove discriminator length

* chore: update photon subproject to latest commit

* chore: update photon subproject to latest commit

* chore: update photon subproject to latest commit

* feat: add --helius-rpc cli flag
feat: add support for getProgramAccounts standard rpc calls for compression
feat: structured error logging

* feat: enhance compressible data tracking

- Introduced `forester_api_urls` argument in `DashboardArgs` for specifying multiple API base URLs.
- Enhanced `EpochManager` to handle non-retryable registration errors gracefully.
- Implemented `CompressibleTrackerHandles` struct to manage multiple trackers for compressible data.
- Refactored `initialize_compressible_trackers` to streamline tracker initialization and bootstrap processes.
- Updated `run_pipeline_with_run_id` to accept preconfigured tracker handles, improving modularity.
- Modified `main` function to initialize compressible trackers and manage shutdown signals effectively.

* fix: improve transaction handling in compressors

- Enhanced error handling in `CTokenCompressor`, `MintCompressor`, and `PdaCompressor` to manage pending states more effectively.
- Added checks to ensure accounts are marked as pending during transaction processing.

chore: update epoch manager to check eligibility for compression

- Modified `dispatch_compression`, `dispatch_pda_compression`, and `dispatch_mint_compression` to include eligibility checks based on the current light slot.

refactor: improve account tracking with atomic counters

- Introduced `compressed_count` and `pending` sets in account trackers for better management of compression states.
- Updated `CompressibleTracker` trait to include methods for managing pending accounts and counting compressed accounts.
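
The tracker state described in these bullets can be sketched as below; field and method names are illustrative, not the forester crate's actual API:

```rust
use std::collections::HashSet;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

// Sketch: atomic counter for compressed accounts plus a pending set
// guarding against double-dispatch while a transaction is in flight.
struct Tracker {
    compressed_count: AtomicUsize,
    pending: Mutex<HashSet<[u8; 32]>>,
}

impl Tracker {
    /// Returns true if the account was newly marked pending.
    fn mark_pending(&self, key: [u8; 32]) -> bool {
        self.pending.lock().unwrap().insert(key)
    }

    /// Moves a pending account to the compressed count.
    fn mark_compressed(&self, key: [u8; 32]) {
        if self.pending.lock().unwrap().remove(&key) {
            self.compressed_count.fetch_add(1, Ordering::Relaxed);
        }
    }
}

fn main() {
    let t = Tracker {
        compressed_count: AtomicUsize::new(0),
        pending: Mutex::new(HashSet::new()),
    };
    let key = [1u8; 32];
    assert!(t.mark_pending(key));  // newly pending
    assert!(!t.mark_pending(key)); // already pending, not re-dispatched
    t.mark_compressed(key);
    assert_eq!(t.compressed_count.load(Ordering::Relaxed), 1);
    println!("ok");
}
```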

fix: ensure proper handling of closed accounts in trackers

- Added logic to remove closed accounts from trackers in `CTokenAccountTracker`, `MintAccountTracker`, and `PdaAccountTracker`.

feat: add usePhotonStats hook for fetching photon statistics

- Implemented a new hook `usePhotonStats` using SWR for fetching photon statistics from the API.
- Introduced error handling for API responses.

refactor: enhance utility functions for address exploration

- Added `explorerUrl` function to generate Solana explorer URLs based on the current network.
- Improved `formatSlotCountdown` to handle additional parameters for better status reporting.

feat: extend forester types with new statistics

- Updated `AggregateQueueStats`, `ForesterStatus`, and related interfaces to include new fields for batch processing statistics.
- Introduced `PhotonStats` interface for tracking photon-related metrics.

* fix: improve error handling and method consistency in compressors and state management

* feat: add transaction verification to compressors and refactor MintAccountTracker initialization

* fix: improve transaction confirmation handling and error reporting in compressors

* fix: remove unnecessary pubkey collection before marking accounts as pending

* chore: update subproject commit for photon dependency

* fix: implement retry logic for transaction status verification

* refactor transaction handling