
Groth16 prover #463

Open
Vindaar wants to merge 49 commits into master from groth16prover

Conversation

@Vindaar
Collaborator

@Vindaar Vindaar commented Aug 22, 2024

Edit: The PR previously contained a long description of my debugging of the Groth16 prover implementation. For anyone interested, it can still be found here.

This PR implements a Groth16 prover and verifier, which requires SnarkJS produced .wtns, .zkey and .r1cs binary files.

  • Implement verifier

@Vindaar
Collaborator Author

Vindaar commented Jan 12, 2025

I just had another look at SnarkJS to see if I could figure out the double Montgomery encoding situation.

As it turns out, I could at least identify the SnarkJS code that writes the coefficients as doubly Montgomery encoded. This happens here:

const nR2 = curve.Fr.mul(n, R2r);
curve.Fr.toRprLE(buffCoeff, 12, nR2);

where R2r is:

const R2r = curve.Fr.e(Scalar.mod(Scalar.mul(Rr,Rr), primeR));

That, at the very least, finally settles the question of whether I'm imagining things or not.

To be honest, I still have no clue why this is done. The commits touching these lines are not very illuminating either.
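For intuition, here is a toy Python sketch (tiny prime, illustrative names — not SnarkJS internals) of why a Montgomery multiplication by R² produces a doubly encoded value: montmul(a·R, R²) = a·R·R²·R⁻¹ = a·R².

```python
# Toy Montgomery arithmetic illustrating the double encoding.
# Small prime and radix chosen for readability; purely illustrative.
p = 97            # toy field modulus
R = 256           # Montgomery radix, R > p and gcd(R, p) == 1
R_inv = pow(R, -1, p)

def montmul(x: int, y: int) -> int:
    """Montgomery multiplication: x * y * R^-1 mod p."""
    return (x * y * R_inv) % p

def to_mont(a: int) -> int:
    """Standard (single) Montgomery form: a * R mod p."""
    return (a * R) % p

a = 42
R2 = (R * R) % p                 # the analogue of SnarkJS's R2r
once = to_mont(a)                # a·R mod p
twice = montmul(once, R2)        # a·R · R² · R⁻¹ = a·R² mod p
assert twice == (a * R * R) % p  # doubly Montgomery encoded
```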

@Vindaar Vindaar changed the title Groth16 prover buggy draft Groth16 prover Jan 25, 2025
@Vindaar Vindaar marked this pull request as ready for review January 25, 2025 17:51
Previously these were generated from the wrong branch of the
`{.booldefine.}` variables we had in the code when the PR was still a
draft. Instead of parsing the data as doubly Montgomery encoded, as
they actually are, we parsed them as regular Montgomery encoded,
resulting in wrong test vectors.
@Vindaar
Collaborator Author

Vindaar commented Sep 10, 2025

I guess the code at the moment only works on the 64-bit backend. That rings a bell; it was probably due to the parsing of the binary files. Need to check.

Owner

@mratsim mratsim left a comment


LGTM.

I have added several notes for further refactoring and will follow up with an issue once this is merged. I think the PR is too big at the moment to add further non-critical fixes.

Critical, if possible, would be to fix the 32-bit tests.

The FFT part is not worth changing at the moment, but it is important to merge now because the following future work uses FFT:

  • FRI for zkVM acceleration
  • Whir for Lean Ethereum
  • PeerDAS: Ethereum 2D KZG commitments with Error Correcting Codes

result &= "\n" & sp & ")"

func toDecimal*[EC: EC_ShortW_Prj or EC_ShortW_Jac or EC_ShortW_Aff or EC_ShortW_JacExt](P: EC, indent: static int = 0): string =
## Stringify an elliptic curve point to Hex
Owner


Suggested change
## Stringify an elliptic curve point to Hex
## Stringify an elliptic curve point to decimal

## Note. Leading zeros are not removed.
## Output as decimal.
##
## WARNING: NOT constant time!
Owner


Actually it is constant-time see low-level impl:

func toDecimal*(a: BigInt): string =
## Convert to a decimal string.
##
## This procedure is intended for configuration, prototyping, research and debugging purposes.
## You MUST NOT use it for production.
##
## This function is constant-time.
## This function does heap-allocation.
const len = decimalLength(BigInt.bits)
result = newString(len)
var a = a
for i in countdown(len-1, 0):
let c = ord('0') + a.div10().int
result[i] = char(c)

The mid-level impl is wrong:

func toDecimal*(f: FF): string =
## Convert to a decimal string.
##
## It is intended for configuration, prototyping, research and debugging purposes.
## You MUST NOT use it for production.
##
## This function is NOT constant-time at the moment.
f.toBig().toDecimal()

But it's fine, afaik, it's only used for debugging purposes.

accum.add toDecimal(f)

func appendDecimal*(accum: var string, f: ExtensionField, indent = 0, order: static Endianness = bigEndian) =
## Stringify a tower field element to hex.
Owner


Suggested change
## Stringify a tower field element to hex.
## Stringify a tower field element to decimal.

for i in 0 ..< half:
# FFT Butterfly
y_times_root .scalarMul_vartime(output[i+half], rootsOfUnity[i])
y_times_root .scalarMul_vartime(rootsOfUnity[i], output[i+half])
Owner


Curious — the PR that introduced scalarMul_vartime also rewrote this part of the FFT, and the argument order should be scalarMul_vartime(EC, scalar).

Owner


Ah my mistake, the argument order when out-of-place is correct. (EC out, scalar, EC in)
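For reference, the radix-2 butterfly under discussion can be sketched over a toy prime field in Python (the real code operates on EC points via scalarMul_vartime; the names and modulus here are illustrative):

```python
# Toy radix-2 FFT butterfly over integers mod p, mirroring the
# y_times_root update above: (a, b) -> (a + w*b, a - w*b).
p = 17

def butterfly_pass(output: list, roots: list) -> None:
    """One in-place butterfly pass over the two halves of `output`."""
    half = len(output) // 2
    for i in range(half):
        y_times_root = (roots[i] * output[i + half]) % p  # root * element
        output[i + half] = (output[i] - y_times_root) % p
        output[i] = (output[i] + y_times_root) % p

vals = [1, 2, 3, 4]
butterfly_pass(vals, [1, 1])   # vals is now [4, 6, 15, 15]
```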

FFTS_SizeNotPowerOfTwo = "Input must be of a power of 2 length"

FFT_Descriptor*[F] = object # `F` is either `Fp[Name]` or `Fr[Name]`
## Metadata for FFT on Elliptic Curve
Owner


on fields.

Note that there was an implementation here: https://github.com/mratsim/constantine/blob/v0.2.0/research/kzg/fft_fr.nim

beta2*: EC_ShortW_Aff[Fp2[Name], G2] ## g₂^β
gamma2*: EC_ShortW_Aff[Fp2[Name], G2] ## g₂^γ
delta2*: EC_ShortW_Aff[Fp2[Name], G2] ## g₂^δ
gamma_abc*: seq[EC_ShortW_Aff[Fp[Name], G1]] ## == `zkey.ic` == `[g₁^{ \frac{β·A_i(τ) + α·B_i(τ) + C_0(τ)}{ γ }}]`
Owner


TODO: we will probably want to remove all seqs here in the future to unify with GPU impl.

Also, we just don't use seq in Constantine, except at the very edge for easy integration with Nim.

r1cs: r1cs
)

proc calcAp[Name: static Algebra](ctx: Groth16Prover[Name], wt: seq[Fr[Name]]): EC_ShortW_Jac[Fp[Name], G1] {.noinit.} =
Owner


All non-var parameters should be openArray instead of seq

result = B1_p


proc buildABC[Name: static Algebra](ctx: Groth16Prover[Name], wt: seq[Fr[Name]]): tuple[A, B, C: seq[Fr[Name]]] =
Owner


TODO: remove seq creation and prefer the caller to pass allocated buffers if possible

Comment on lines +153 to +165
proc transform[Name: static Algebra](args: seq[Fr[Name]], inc: Fr[Name]): seq[Fr[Name]] =
## Applies (multiplies) increasing powers of `inc` to each element
## of `args`, i.e.
##
## `{ a[0], a[1]·inc, a[2]·inc², a[3]·inc³, ... }`.
##
## In our case `inc` is usually a root of unity of the power given by
## `log2( FFT order ) + 1`.
result = newSeq[Fr[Name]](args.len)
var cur = Fr[Name].fromUint(1.uint64)
for i in 0 ..< args.len:
result[i].prod(args[i], cur)
cur *= inc
Owner


Note: this pattern also exists in KZG, IPA, and other commitments:

See

func ipaEvaluate* [Fr] (res: var Fr, poly: openArray[Fr], point: Fr, n: static int) =
var powers {.noInit.}: array[n,Fr]
powers.asUnchecked().computePowers(point, poly.len)
res.setZero()
for i in 0 ..< poly.len:
var tmp: Fr
tmp.prod(powers[i], poly[i])
res.sum(res,tmp)
res.setZero()
for i in 0 ..< poly.len:
var tmp {.noInit.}: Fr
tmp.prod(powers[i], poly[i])
res.sum(res,tmp)

So there is a computePowers proc, though it does allocate more.

func computePowers*[Name](dst: ptr UncheckedArray[FF[Name]], base: FF[Name], len: int) =
## We need linearly independent random numbers
## for batch proof sampling.
## Powers are linearly independent.
## It's also likely faster than calling a fast RNG + modular reduction
## to be in 0 < number < curve_order
## since modular reduction needs modular multiplication or a division anyway.
let N = len
if N >= 1:
dst[0].setOne()
if N >= 2:
dst[1] = base
for i in 2 ..< N:
dst[i].prod(dst[i-1], base)
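As a toy illustration of that shared powers pattern (plain integers mod a small prime, not the Constantine API):

```python
# Toy version of the pattern shared by `transform`/`computePowers`:
# build [1, base, base², ...] and apply it elementwise to a vector.
p = 17

def compute_powers(base: int, n: int) -> list:
    """Return the first n powers of `base` mod p, starting at 1."""
    out, cur = [], 1
    for _ in range(n):
        out.append(cur)
        cur = (cur * base) % p
    return out

vec = [5, 7, 11, 13]
powers = compute_powers(3, len(vec))   # [1, 3, 9, 10]  (27 mod 17)
# the `transform` step: multiply element i by base^i
shifted = [(a * w) % p for a, w in zip(vec, powers)]
```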

pairing(proof.C, vk.delta2)

# 4. Check pairing equality
result = bool(lhs == rhs)
Owner


TODO: This can be rewritten by negating either proof.A or proof.B then use multipairing and check that the result is the pairing group neutral element (i.e. 1 in GT/Fp12)
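The equivalence behind that rewrite can be sketched in a toy exponent model of the pairing (purely illustrative: GT is modeled as powers of gT mod a small prime, with e(g₁^a, g₂^b) = gT^(a·b)):

```python
# Toy model of the multipairing rewrite: e(A,B) == e(α,β)·e(I,γ)·e(C,δ)
# iff e(-A,B)·e(α,β)·e(I,γ)·e(C,δ) == 1 in GT. All values illustrative.
q, gT = 101, 2                 # toy GT modulus (prime) and generator
ORDER = q - 1                  # exponents live mod q - 1

def e(a: int, b: int) -> int:
    """Toy pairing on exponents: e(g1^a, g2^b) = gT^(a*b)."""
    return pow(gT, (a * b) % ORDER, q)

alpha, beta, i_, gamma, c, delta = 3, 5, 7, 11, 13, 17
# exponent of e(A, B) for a verifying proof: a·b = αβ + iγ + cδ
ab = alpha * beta + i_ * gamma + c * delta

lhs = pow(gT, ab % ORDER, q)                            # e(A, B)
rhs = (e(alpha, beta) * e(i_, gamma) * e(c, delta)) % q
assert lhs == rhs                                       # standard check

# multipairing form: negate A's exponent, then the product of all four
# pairings must equal the neutral element 1 of GT
acc = (pow(gT, (-ab) % ORDER, q) * rhs) % q
assert acc == 1
```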

@greptile-apps

greptile-apps bot commented Mar 16, 2026

Greptile Summary

This PR adds a full Groth16 prover and verifier to Constantine, including binary parsers for SnarkJS-produced .zkey, .wtns, and .r1cs files, a new finite-field NTT/FFT implementation (fft_fields.nim), and the core Groth16Prover / verify logic. It also fixes a butterfly argument-order bug in the existing elliptic-curve FFT and adds toDecimal display helpers for EC points and extension field elements.

Key issues found:

  • Compilation errors in prove*: The public prove* proc has two bugs that prevent it from compiling: (1) the return type is the raw Groth16Proof instead of the generic Groth16Proof[Name], and (2) it mutates ctx.r and ctx.s without declaring ctx as var. The test suite works around this entirely by using a local proveManual helper (which does take var ctx), so these errors are not caught by the test.
  • Memory leak in calcCp: FFTDescriptor[Fr[Name]].init(...) is allocated and never freed; the actual FFT work is handled inside itf which manages its own descriptors.
  • Dead code in calcB1: The first result = beta1.getJacobian + ctx.s * delta1 is immediately overwritten before the function returns.
  • filterIt(...)[0] panics on missing sections: Multiple parser accessor functions will raise IndexDefect on malformed input files rather than surfacing a useful error.
  • Hardcoded 32-byte field element sizes in toEcG1, toEcG2, and toFp2 silently break for curves other than BN254 (e.g., BLS12-381 with 48-byte Fp elements).
  • Off-by-one in randomFieldElement: The rejection condition b > m should be b >= m to exclude the case b == m which reduces to 0 mod m.
  • Known unresolved issue: The org-format example explicitly notes "RIGHT NOW C DOES NOT MATCH!" indicating the C component of the proof may not yet match SnarkJS output.
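A minimal Python sketch of the corrected rejection sampler from the randomFieldElement issue above (illustrative, not Constantine's API): valid outputs lie in [1, m-1], so both b == 0 and b >= m must be rejected — b == m in particular reduces to 0 mod m.

```python
import secrets

def random_field_element(m: int) -> int:
    """Rejection-sample a uniform nonzero field element in [1, m-1]."""
    nbits = m.bit_length()
    while True:
        b = secrets.randbits(nbits)
        # `b >= m` (not `b > m`) matters: b == m would slip past a `>`
        # guard and then reduce to 0 mod m, yielding the zero element.
        if b != 0 and b < m:
            return b

samples = [random_field_element(97) for _ in range(1000)]
assert all(1 <= s <= 96 for s in samples)
```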

Confidence Score: 2/5

  • Not safe to merge — the public prove* API contains two compilation errors and the C proof component is acknowledged as not yet matching the SnarkJS reference.
  • The core prover logic in calcAp, calcBp, and the verifier appears structurally sound and the test (via proveManual) validates A and B correctly against SnarkJS. However, the publicly exported prove* function will not compile as written (wrong return type and mutation of immutable parameter), a memory leak exists in calcCp, and the org-example itself states the C element does not match the expected value. These issues need to be resolved before the PR is usable.
  • constantine/proof_systems/groth16.nim (compilation errors, memory leak, dead code) and constantine/proof_systems/groth16_utils.nim (hardcoded sizes, rejection sampler boundary).

Important Files Changed

Filename Overview
constantine/proof_systems/groth16.nim Core prover and verifier implementation. Has two compilation errors in prove*: missing generic type parameter (Groth16Proof should be Groth16Proof[Name]) and mutation of a non-var parameter. Additionally, calcCp allocates an FFT descriptor that is never used or freed (memory leak), and calcB1 contains dead code in its first line.
constantine/proof_systems/groth16_utils.nim Helper unmarshalling utilities for EC points and field elements. Contains hardcoded 32-byte slice indices that break for curves with different field element sizes (e.g., BLS12-381). randomFieldElement has an off-by-one in its rejection sampler (> should be >=).
constantine/proof_systems/constraint_systems/zkey_binary_parser.nim Parser for SnarkJS .zkey binary files. Multiple accessor functions use filterIt(...)[0] without bounds checking, which will panic on malformed input. Several inline parsing helpers share the same pattern.
constantine/proof_systems/constraint_systems/wtns_binary_parser.nim Parser for SnarkJS .wtns witness files. Same filterIt(...)[0] pattern as the zkey parser — will panic with an index error if a required section is absent from the file.
constantine/math/polynomials/fft_fields.nim New NTT/FFT implementation over finite fields (Fr, Fp). Well-structured with forward and inverse FFT, includes convenience wrappers that manage descriptor lifecycle via defer. No significant issues found.
constantine/math/polynomials/fft_lut.nim Lookup table construction for FFT roots of unity. Only BN254_Snarks is currently enabled (BLS12_381 is commented out). Primitive roots and scale-to-root-of-unity tables are correctly computed.
constantine/math/io/io_extfields.nim Adds appendDecimal / toDecimal for extension field elements. Docstrings are copy-pasted from the hex variants and are factually incorrect (claim hex/0x prefix output).
constantine/math/polynomials/fft.nim Single-line fix swapping the argument order of scalarMul_vartime — corrects a butterfly computation bug in the existing EC-based FFT.
constantine/proof_systems/constraint_systems/parser_utils.nim New shared parsing helpers (parseMagicHeader, parseSectionKind, parseCheck) extracted to reduce duplication between the three binary parsers.
tests/proof_systems/t_groth16_prover.nim Integration test for the prover. Uses proveManual (a test-local helper with var ctx) rather than the public prove* API, effectively working around the compilation errors in prove*. The C-component comparison in the org example is noted as not matching SnarkJS ("RIGHT NOW C DOES NOT MATCH!").
constantine/math/elliptic/ec_shortweierstrass_jacobian.nim Adds explicit bind declarations for isNeutral and ccopy inside a template to resolve name-resolution issues — small hygiene fix.
constantine/math/io/io_fields.nim Adds {.raises: [ValueError].} annotations to fromDecimal variants and adjusts the {.pop.} pragma scope so the exception is properly declared.

Sequence Diagram

sequenceDiagram
    participant User
    participant Groth16Prover
    participant BinaryParsers
    participant FFT as FFT (fft_fields)
    participant MSM as MSM (ec_multi_scalar_mul)
    participant Pairing as Pairings

    User->>BinaryParsers: parseZkeyFile() → ZkeyBin.toZkey[Name]()
    User->>BinaryParsers: parseWtnsFile() → WtnsBin.toWtns[Name]()
    User->>BinaryParsers: parseR1csFile() → R1csBin.toR1CS()
    User->>Groth16Prover: init(zkey, wtns, r1cs)
    User->>Groth16Prover: prove() [samples r, s]

    Groth16Prover->>MSM: calcAp: multiScalarMul(witnesses, A[])
    Groth16Prover->>MSM: calcBp: multiScalarMul(witnesses, B2[])
    Groth16Prover->>MSM: calcB1: multiScalarMul(witnesses, B1[])
    Groth16Prover->>Groth16Prover: buildABC(coeffs, witnesses)
    Groth16Prover->>FFT: itf(A_poly): ifft → transform → fft
    Groth16Prover->>FFT: itf(B_poly): ifft → transform → fft
    Groth16Prover->>FFT: itf(C_poly): ifft → transform → fft
    Groth16Prover->>MSM: calcCp: multiScalarMul(priv, C[]) + multiScalarMul(jabc, H[])
    Groth16Prover-->>User: Groth16Proof(A, B, C)

    User->>Pairing: verify(vk, proof, publicInputs)
    Pairing->>MSM: multiScalarMul(publicInputs, IC[])
    Pairing->>Pairing: e(A,B) == e(α,β)·e(I,γ)·e(C,δ)
    Pairing-->>User: bool

Comments Outside Diff (3)

  1. constantine/proof_systems/groth16_utils.nim, line 1592 (link)

    Off-by-one in randomFieldElement rejection condition

    The loop condition is while b.isZero().bool or (b > m).bool. Valid field elements are in [1, m-1] (zero is explicitly excluded). The condition correctly rejects b == 0, but uses b > m rather than b >= m. If b samples exactly m, the loop exits and result.fromBig(b) is called with b ≡ 0 (mod m), producing the zero element despite the isZero guard.

    While the probability is negligible for a 254-bit field (~2⁻²⁵⁴), the guard is semantically wrong:

  2. constantine/proof_systems/constraint_systems/wtns_binary_parser.nim, line 768-771 (link)

    filterIt(...)[0] panics if the expected section is absent

    func header*(wtns: WtnsBin) and func witnesses*(wtns: WtnsBin) both call filterIt(...)[0] without checking that the filter result is non-empty. If a malformed .wtns file is missing the kHeader or kData section, this raises an IndexDefect at runtime rather than returning a meaningful error.

    The same pattern appears in multiple helpers in zkey_binary_parser.nim (e.g., func header*, func Afield*, func groth16Header*, func icField*, etc.) and is also present in parseDatasection and parseCoefficients inline via zkey.sections.filterIt(...)[0].g16h.

    Consider adding a guard or using find/len checks before indexing, or returning an Option/Result type to propagate the error cleanly.

  3. constantine/math/io/io_extfields.nim, line 80-113 (link)

    Misleading docstrings copy-pasted from hex functions

    appendDecimal and toDecimal carry docstrings that say "Stringify a tower field element to hex" and "Result is prefixed with 0x". These are copy-pasted from the corresponding appendHex/toHex functions and are incorrect — the output is decimal, not hex.

    For example:

Last reviewed commit: 8efc5bb

Comment on lines +238 to +255
proc prove*[Name: static Algebra](ctx: Groth16Prover[Name]): Groth16Proof {.noinit.} =
## Generate a proof given the Groth16 prover context data.
##
## This implies calculating the proof elements `π = (g₁^A, g₁^C, g₂^B)`
##
## See `calcAp`, `calcBp` and `calcCp` on how these elements are computed.
# 1. Sample the random field elements `r` and `s` for the proof
ctx.r = randomFieldElement(Fr[Name])
ctx.s = randomFieldElement(Fr[Name])
# 2. get the witness data needed for all proof elements
let wt = ctx.wtns.witnesses
# 3. compute the individual proof elements
let A_p = ctx.calcAp(wt)
let B2_p = ctx.calcBp(wt)
let B1_p = ctx.calcB1(wt)
let C_p = ctx.calcCp(A_p, B1_p, wt)

result = Groth16Proof(A: A_p.getAffine(), B: B2_p.getAffine(), C: C_p.getAffine())


Missing generic parameter in prove* return type and constructor

The return type is declared as Groth16Proof but Groth16Proof is a generic type that requires the Name parameter. Without it the declaration is incomplete and would fail to compile. The same issue appears in the result constructor on line 255.

Additionally, ctx is an immutable parameter (no var) yet the body assigns to ctx.r and ctx.s on lines 245–246 — Nim does not allow mutating a non-var parameter, which is another compile error.

Suggested change
proc prove*[Name: static Algebra](ctx: Groth16Prover[Name]): Groth16Proof {.noinit.} =
proc prove*[Name: static Algebra](ctx: var Groth16Prover[Name]): Groth16Proof[Name] {.noinit.} =

And on line 255:

Suggested change
result = Groth16Proof(A: A_p.getAffine(), B: B2_p.getAffine(), C: C_p.getAffine())
result = Groth16Proof[Name](A: A_p.getAffine(), B: B2_p.getAffine(), C: C_p.getAffine())

Comment on lines +194 to +198
let fftD = FFTDescriptor[Fr[Name]].init(abc[0].len * 2)

let A = itf(abc[0])
let B = itf(abc[1])
let C = itf(abc[2])


fftD is allocated but never used or freed — memory leak

FFTDescriptor allocates heap memory for rootsOfUnity (via allocHeapArrayAligned). The descriptor fftD is initialised on line 194 but then the actual FFT work is performed by itf, which internally calls the convenience wrappers ifft_vartime/fft_vartime — each of which creates, uses, and deletes its own descriptor via defer: fftDesc.delete().

As a result fftD is:

  1. Never used — the itf calls ignore it entirely.
  2. Never freed — fftD.delete() is never called, leaking the allocated memory every time calcCp runs.

The line should be removed (the FFT descriptors are managed inside itf already).

Suggested change
let fftD = FFTDescriptor[Fr[Name]].init(abc[0].len * 2)
let A = itf(abc[0])
let B = itf(abc[1])
let C = itf(abc[2])

Comment on lines +99 to +114
proc calcB1[Name: static Algebra](ctx: Groth16Prover[Name], wt: seq[Fr[Name]]): EC_ShortW_Jac[Fp[Name], G1] {.noinit.} =
let g16h = ctx.zkey.g16h

let beta1 = g16h.beta1
let delta1 = g16h.delta1
result = beta1.getJacobian + ctx.s * delta1

# Get the B1 data
let Bs = ctx.zkey.B1
doAssert Bs.len == wt.len
var B1_p {.noinit.}: EC_ShortW_Jac[Fp[Name], G1]
B1_p.multiScalarMul_vartime(wt, Bs)
# Add the independent terms, `β₁ + [s] · δ₁`
B1_p += beta1.getJacobian + ctx.s * delta1

result = B1_p


First result assignment in calcB1 is dead code

Line 104 assigns result = beta1.getJacobian + ctx.s * delta1. However, result is unconditionally overwritten on line 114 (result = B1_p), so the first assignment is wasted computation. The correct formula is already applied via B1_p += beta1.getJacobian + ctx.s * delta1 on line 112, so the first line should be removed to avoid confusion.

Suggested change
proc calcB1[Name: static Algebra](ctx: Groth16Prover[Name], wt: seq[Fr[Name]]): EC_ShortW_Jac[Fp[Name], G1] {.noinit.} =
let g16h = ctx.zkey.g16h
let beta1 = g16h.beta1
let delta1 = g16h.delta1
result = beta1.getJacobian + ctx.s * delta1
# Get the B1 data
let Bs = ctx.zkey.B1
doAssert Bs.len == wt.len
var B1_p {.noinit.}: EC_ShortW_Jac[Fp[Name], G1]
B1_p.multiScalarMul_vartime(wt, Bs)
# Add the independent terms, `β₁ + [s] · δ₁`
B1_p += beta1.getJacobian + ctx.s * delta1
result = B1_p
proc calcB1[Name: static Algebra](ctx: Groth16Prover[Name], wt: seq[Fr[Name]]): EC_ShortW_Jac[Fp[Name], G1] {.noinit.} =
let g16h = ctx.zkey.g16h
let beta1 = g16h.beta1
let delta1 = g16h.delta1
# Get the B1 data
let Bs = ctx.zkey.B1
doAssert Bs.len == wt.len
var B1_p {.noinit.}: EC_ShortW_Jac[Fp[Name], G1]
B1_p.multiScalarMul_vartime(wt, Bs)
# Add the independent terms, `β₁ + [s] · δ₁`
B1_p += beta1.getJacobian + ctx.s * delta1
result = B1_p

c = coeffs.cs[i].section
s = coeffs.cs[i].index
coef = coeffs.cs[i].value
assert s.int < wt.len


assert is silently disabled in release builds

assert in Nim behaves like the C assert() macro — it is compiled away when assertions are disabled (e.g. with -d:danger or --assertions:off). This bounds check would then be silently skipped in such builds, risking an out-of-bounds access. Use doAssert instead, which is unconditional.

Suggested change
assert s.int < wt.len
doAssert s.int < wt.len

Comment on lines +52 to +58

proc toEcG2*[Name: static Algebra](s: seq[byte]): EC_ShortW_Aff[Fp2[Name], G2] =
let x = toFp2[Name](s[0 .. 63])
let y = toFp2[Name](s[64 .. ^1])
result.x = x
result.y = y
if not bool(result.isNeutral()):


Hardcoded 32-byte slice sizes break for other field sizes

toEcG1 slices the input as s[0 .. 31] and s[32 .. ^1], hardcoding the assumption that each field element is exactly 32 bytes. This is correct for BN254 (256-bit prime → 32 bytes) but silently produces wrong results for any curve whose base field element has a different byte length (e.g., BLS12-381 with 48-byte Fp elements). The same hardcoding exists in toFp2 (x[0..31], x[32..63]) and toEcG2 (s[0..63], s[64..^1]).

Consider deriving the slice boundaries from the curve's actual field size, e.g. Fp[Name].bits().ceilDiv_vartime(8), instead of the magic constant 32.
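A sketch of that size derivation in Python (names are illustrative):

```python
def field_byte_len(bits: int) -> int:
    """Serialized length of one base-field element: ceil(bits / 8)."""
    return (bits + 7) // 8

# BN254's ~254-bit prime fits in 32 bytes; BLS12-381's 381-bit prime needs 48.
assert field_byte_len(254) == 32
assert field_byte_len(381) == 48

def split_g1(s: bytes, bits: int):
    """Split an uncompressed G1 point into (x, y) without magic constants."""
    n = field_byte_len(bits)
    assert len(s) == 2 * n
    return s[:n], s[n:]
```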

