Conversation
36 issues found across 22 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="src/tinfoil/attestation/pck_extensions.py">
<violation number="1" location="src/tinfoil/attestation/pck_extensions.py:43">
P2: Rule violated: **Code Smell Detector**
Inconsistent OID type: `_tcb_component_oid()` returns `str` while other OID constants are `x509.ObjectIdentifier` objects. This creates an inconsistent API pattern. Consider returning `x509.ObjectIdentifier` for consistency, or document why string return is intentional.</violation>
<violation number="2" location="src/tinfoil/attestation/pck_extensions.py:202">
P2: Rule violated: **Code Smell Detector**
Magic constants for ASN.1 tags (0x30, 0x04, 0x06, 0x02) are repeated throughout the file. Define named constants at module level for consistency with the existing size constants pattern:
```python
# ASN.1 tag constants
ASN1_SEQUENCE_TAG = 0x30
ASN1_OID_TAG = 0x06
ASN1_OCTET_STRING_TAG = 0x04
ASN1_INTEGER_TAG = 0x02
```
This improves readability and makes future maintenance easier.
P1: Missing bounds check in ASN.1 length parsing could cause IndexError with malformed certificates. The loop accesses `data[pos]` without verifying `pos < len(data)`, which is critical for security-sensitive attestation code handling untrusted input.

P2: Rule violated: **Code Smell Detector**
Duplicated ASN.1 SEQUENCE parsing logic between `_parse_asn1_sequence` and `_parse_tcb_extension`. Both implement the same pattern of iterating over SEQUENCE items, extracting OID and value bytes. Consider extracting a shared helper (e.g., `_iter_asn1_sequence_items`) to reduce duplication and make future ASN.1 parsing extensions easier.
Inconsistent error handling pattern: _parse_integer silently returns 0 on error while similar functions like _extract_octet_string raise PckExtensionError. This can mask parsing errors in security-critical attestation data. Consider raising exceptions consistently, or document why silent failures are acceptable for TCB component parsing.
Repeated constant: The tuple (PredicateType.TDX_GUEST_V1, PredicateType.TDX_GUEST_V2) is defined in 3 different places. Extract this as a module-level constant (e.g., TDX_PREDICATE_TYPES) to ensure consistency and simplify future additions like TDX_GUEST_V3.
Magic number: The timeout value 15 should be extracted to a named constant (e.g., ATTESTATION_FETCH_TIMEOUT_SECONDS = 15) for better readability and easier configuration.
```python
def _verify_tdx_attestation_impl(attestation_doc: str, predicate_type: PredicateType) -> Verification:
    try:
        result = verify_tdx_attestation(attestation_doc, is_compressed=True)
    except TdxAttestationError as e:
        raise ValueError(f"TDX attestation verification failed: {e}")
    measurement = Measurement(
        type=predicate_type,
        registers=result.measurements,
    )
    return Verification(
        measurement=measurement,
        public_key_fp=result.tls_key_fp,
        hpke_public_key=result.hpke_public_key,
    )


def verify_tdx_attestation_v1(attestation_doc: str) -> Verification:
    return _verify_tdx_attestation_impl(attestation_doc, PredicateType.TDX_GUEST_V1)


def verify_tdx_attestation_v2(attestation_doc: str) -> Verification:
    return _verify_tdx_attestation_impl(attestation_doc, PredicateType.TDX_GUEST_V2)
```

This will make future updates easier and reduce the risk of inconsistencies between versions.
Duplicated null check pattern for qe_report_data. The same None check with identical error message appears in both verify_qe_report_signature and verify_qe_report_data_binding. Consider validating this once in verify_tdx_quote before calling these functions, or extract to a helper function.
Timing attack vulnerability: This CRITICAL security check uses non-constant-time comparison (!=) for cryptographic data. Use hmac.compare_digest() instead to prevent timing side-channel attacks.
Python's != operator returns early on first mismatch, potentially leaking information about how many bytes matched. For security-sensitive cryptographic comparisons, always use constant-time comparison functions.
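The fix is a one-line change using the standard library; a sketch with a hypothetical wrapper name:

```python
import hmac


def report_data_matches(expected: bytes, actual: bytes) -> bool:
    # hmac.compare_digest examines the full buffers regardless of where they
    # first differ, so timing does not leak the length of the matching prefix.
    return hmac.compare_digest(expected, actual)
```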
Magic number 64 should use the existing ATTESTATION_KEY_SIZE constant from abi_tdx. The module already imports SIGNATURE_SIZE from the same file, but misses this related constant, creating inconsistency and making the code harder to maintain.
Catching generic Exception is inconsistent with other error handling in this file that catches specific exceptions (e.g., CertificateChainError). Consider catching ValueError specifically, as that's what the cryptography library raises for invalid EC points.
Open/Closed Principle violation: The function signature doesn't allow configuration injection for security-critical policy values that will change over time (e.g., ACCEPTED_MR_SEAMS needs updates for new Intel TDX module releases, MIN_TCB_EVALUATION_DATA_NUMBER may need adjustment).
Consider adding an optional configuration parameter:
```python
@dataclass
class TdxVerificationConfig:
    min_tcb_evaluation_data_number: int = DEFAULT_MIN_TCB_EVALUATION_DATA_NUMBER
    accepted_mr_seams: list[bytes] = field(default_factory=lambda: list(ACCEPTED_MR_SEAMS))
    policy_options: Optional[PolicyOptions] = None
```

This enables extension without modification and supports different deployment requirements.
P2: Rule violated: **Code Smell Detector**
Magic numbers used for report_data slicing. Define named constants like `TLS_KEY_FP_SIZE = 32` and `HPKE_PUBLIC_KEY_OFFSET = 32` in abi_tdx.py to maintain consistency with other field offset constants and improve code maintainability. This follows the pattern already established for other TDX fields (e.g., TD_MR_SEAM_START, MR_SEAM_SIZE).
Silent no-op in security-critical validation: The HeaderOptions exposes minimum_qe_svn and minimum_pce_svn options, but validation silently skips them with pass. Users may believe their security policy is enforced when it's not.
This violates proper architectural design by exposing an interface that promises functionality that isn't delivered. Either:
- Remove these options until properly implemented
- Raise `NotImplementedError` with a clear message if these options are provided
- Implement actual validation by accessing QE Report or PCK certificate data
Option 2 is the safest approach for fail-fast behavior in security code.
Duplicated freshness check logic: _is_tcb_info_fresh, _is_qe_identity_fresh, and _is_crl_fresh all perform the same now < next_update check with minor variations. Consider a single _is_fresh(next_update: Optional[datetime]) -> bool function to reduce duplication.
Magic number 16 for TCB components is used repeatedly without a named constant. Define TCB_COMPONENT_COUNT = 16 at module level to improve maintainability and make the Intel specification requirement explicit.
Magic strings for default hex values (e.g., "00" * 48, "00" * 32, "00000000") are scattered throughout parsing functions. These represent Intel-defined field sizes (MRSIGNER=48 for TDX module, MRSIGNER=32 for QE, MISCSELECT=4 bytes, etc.) and should be defined as named constants like TDX_MRSIGNER_SIZE = 48, QE_MRSIGNER_SIZE = 32.
Redundant exception handling: (CollateralError, Exception) is equivalent to just Exception since CollateralError is a subclass of Exception. This pattern suggests incomplete error handling design - either catch specific exceptions or use a bare except Exception.
Direct coupling to requests library violates Dependency Inversion Principle. The fetch functions (fetch_tcb_info, fetch_qe_identity, fetch_pck_crl, fetch_root_ca_crl, fetch_tcb_evaluation_data_numbers) all directly call requests.get() without abstraction.
Consider introducing an HTTP client abstraction (e.g., a session parameter or protocol/interface) that can be injected. This would:
- Enable easier unit testing without patching
- Allow future addition of retries, circuit breakers, or rate limiting
- Support async migration (project already has an `httpx` dependency)
- Decouple attestation logic from HTTP implementation details
Example pattern:
```python
def fetch_tcb_info(
    fmspc: str,
    timeout: float = 30.0,
    http_client: Optional[requests.Session] = None,
) -> Tuple[TdxTcbInfo, bytes]:
    client = http_client or requests.Session()
    response = client.get(url, timeout=timeout)
```
</violation>
<violation number="6" location="src/tinfoil/attestation/collateral_tdx.py:1294">
P1: Rule violated: **Flag Security Vulnerabilities**
Missing signature verification for TCB evaluation data numbers response. Unlike all other fetch functions in this file (`fetch_tcb_info`, `fetch_qe_identity`, `fetch_pck_crl`, `fetch_root_ca_crl`), this endpoint does not verify the cryptographic signature returned by Intel PCS. The signature is parsed but never verified, which breaks the defense-in-depth security model. Implement signature verification similar to `verify_tcb_info_signature()` before using this data for security decisions.</violation>
</file>
<file name="src/tinfoil/sigstore.py">
<violation number="1" location="src/tinfoil/sigstore.py:138">
P1: Rule violated: **Code Smell Detector**
Significant duplicated DSSE verification boilerplate in fetch_hardware_measurements repeats verify_attestation’s verifier/policy/verify_dsse/payload-type checks, which will diverge and hurt extensibility as the control plane grows. Extract a shared helper for DSSE verification/payload parsing to keep the security path consistent.</violation>
</file>
<file name="src/tinfoil/attestation/cert_utils.py">
<violation number="1" location="src/tinfoil/attestation/cert_utils.py:45">
P2: Rule violated: **Code Smell Detector**
**Magic constant repeated 3 times**: The byte string `b'\x00\n\r\t '` appears on lines 45, 50, and 67. Extract to a module-level constant like `PEM_STRIP_CHARS = b'\x00\n\r\t '` to improve readability and maintainability. This also makes the intent clearer (stripping whitespace and null bytes common in TDX quotes) and ensures changes only need to happen in one place.</violation>
</file>
<file name="tests/test_tdx_attestation_flow.py">
<violation number="1" location="tests/test_tdx_attestation_flow.py:37">
P1: Rule violated: **Code Smell Detector**
Avoid hard-coded enclave hostnames in integration tests; this magic constant bypasses router discovery and will require ongoing updates as the control plane evolves. Use the dynamic router lookup for extensibility per the Code Smell Detector rule (magic constants/duplicated boilerplate).</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
4 issues found across 10 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="tests/test_attestation_all_enclaves.py">
<violation number="1" location="tests/test_attestation_all_enclaves.py:105">
P2: This code executes a network request (`_get_enclaves` -> `fetch_config`) at module import time via `_get_parametrize_args()`. This blocks pytest collection, potentially hanging the test suite if the network is slow or unavailable, and violates pytest best practices for test collection.
Use `pytest_generate_tests` to perform dynamic parametrization during the collection phase instead of at import time. This also avoids the need for global variables and manual parameter formatting.</violation>
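The suggested hook can be sketched as follows, assuming the tests take an `enclave` parameter; `_get_enclaves()` is stubbed here since the real helper (which performs the network fetch) lives in the test module:

```python
from types import SimpleNamespace


def _get_enclaves():
    # Placeholder for the test suite's real discovery helper (assumption);
    # the real version calls fetch_config over the network.
    return [SimpleNamespace(name="enclave-a"), SimpleNamespace(name="enclave-b")]


def pytest_generate_tests(metafunc):
    # This hook runs during pytest's collection phase, so the network call
    # no longer executes at module import time and no globals are needed.
    if "enclave" in metafunc.fixturenames:
        enclaves = _get_enclaves()
        metafunc.parametrize("enclave", enclaves, ids=[e.name for e in enclaves])
```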
</file>
<file name="tests/test_attestation_flow.py">
<violation number="1" location="tests/test_attestation_flow.py:82">
P2: The `test_secure_http_client` test creates a client but never uses it to make a request. Since `httpx.Client` establishes connections lazily, this test fails to verify that the TLS pinning logic (the core feature being tested) actually works. If the SSL context configuration is incorrect, this test will still pass.
Additionally, the `http_client` is closed manually without a `try/finally` block or context manager, which isn't exception-safe.</violation>
</file>
<file name="tests/test_tdx_attestation_flow.py">
<violation number="1" location="tests/test_tdx_attestation_flow.py:51">
P2: `TDX_TYPES` is defined locally in this test file but also appears to be duplicated in `tinfoil/client.py` and `tinfoil/attestation/attestation.py` (based on codebase patterns).
Define `TDX_TYPES` as a constant in `tinfoil.attestation` and import it to verify against the single source of truth and reduce maintenance burden.</violation>
</file>
<file name="tests/test_verification_failures_integration.py">
<violation number="1" location="tests/test_verification_failures_integration.py:41">
P2: This setup logic (fetching the router address and handling exceptions) is repeated 5 times across the file.
Consider refactoring this into a `pytest.fixture` to improve maintainability and reduce code duplication.
Additionally, `pytest.skip()` raises an exception, making the subsequent `return` statement unreachable dead code.</violation>
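The suggested fixture could look like this — a sketch in which `fetch_router_address` is a hypothetical name standing in for the setup logic the file currently repeats:

```python
import pytest


def fetch_router_address() -> str:
    # Placeholder for the repeated setup logic (assumption); the real code
    # fetches the control-plane router address over the network.
    return "router.example.invalid"


@pytest.fixture(scope="module")
def router_address() -> str:
    # One fixture replaces the five inlined copies; pytest.skip() raises,
    # so no (unreachable) `return` is needed after it.
    try:
        return fetch_router_address()
    except Exception as exc:
        pytest.skip(f"router unavailable: {exc}")
```

Tests then simply declare `router_address` as a parameter and are skipped as a group when the router cannot be reached.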
</file>
4 issues found across 6 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="src/tinfoil/attestation/attestation_sev.py">
<violation number="1" location="src/tinfoil/attestation/attestation_sev.py:70">
P2: Rule violated: **Check System Design and Architectural Patterns**
**Circular dependency anti-pattern detected.** The lazy import to avoid circular imports indicates improper module layering.
The shared types (`Measurement`, `Verification`, `PredicateType`) should be extracted to a separate module (e.g., `types.py` or `models.py`) that both `attestation.py` and `attestation_sev.py` can import from. This would:
- Eliminate the circular dependency
- Create cleaner module boundaries
- Make the dependency graph explicit and maintainable
- Avoid runtime import surprises as complexity grows</violation>
<violation number="2" location="src/tinfoil/attestation/attestation_sev.py:84">
P2: Rule violated: **Code Smell Detector**
Magic slice indices `[0:32]` and `[32:64]` should be replaced with named constants (e.g., `TLS_KEY_FP_SIZE = 32`, `HPKE_PUBLIC_KEY_SIZE = 32`). This pattern is duplicated in both attestation_sev.py and attestation_tdx.py, making it harder to maintain if these sizes ever change.</violation>
<violation number="3" location="src/tinfoil/attestation/attestation_sev.py:99">
P2: Rule violated: **Code Smell Detector**
Inconsistent error handling pattern: This module uses generic `ValueError` exceptions, while the parallel `attestation_tdx.py` module defines and uses a custom `TdxAttestationError`. Consider defining `SevAttestationError` for consistency and to allow callers to distinguish SEV-specific failures.</violation>
<violation number="4" location="src/tinfoil/attestation/attestation_sev.py:135">
P2: Rule violated: **Detect Outdated or Unused State/Enum Definitions**
Unused helper function `from_snp_digest` is defined but never called anywhere in the codebase. This function is exported in `__init__.py` and imported in `attestation.py`, but no code actually invokes it. Consider removing this unused function or implementing its intended usage to avoid maintenance overhead and confusion.</violation>
</file>
1 issue found across 6 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/tinfoil/attestation/types.py">
<violation number="1" location="src/tinfoil/attestation/types.py:32">
P2: Custom agent: **Detect Outdated or Unused State/Enum Definitions**
The `SEV_GUEST_V1` enum value is explicitly marked `# Deprecated` and has zero references anywhere in the codebase (no application logic, no tests). The PR already removed the equivalent `TDX_GUEST_V1` deprecated entry — `SEV_GUEST_V1` should be removed as well to keep the enum consistent and avoid confusion about which predicate versions are currently valid.</violation>
</file>
@jdrean I have started the AI code review. It will take a few minutes to complete.
40 issues found across 31 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/tinfoil/attestation/collateral_tdx.py">
<violation number="1" location="src/tinfoil/attestation/collateral_tdx.py:56">
P1: Custom agent: **Check System Design and Architectural Patterns**
This 2013-line module mixes data models, JSON parsing, disk caching, HTTP fetching, and cryptographic/policy validation in a single file — a God module that violates the Single Responsibility Principle. Consider splitting into at least: `models_tdx.py` (data classes), `collateral_fetch.py` (network + cache), and `collateral_validate.py` (policy/crypto). The deferred intra-package import in `validate_collateral` likely signals an incipient circular dependency that a proper layering would resolve.</violation>
<violation number="2" location="src/tinfoil/attestation/collateral_tdx.py:800">
P1: Custom agent: **Code Smell Detector**
The fetch-with-cache pattern is copy-pasted across `fetch_tcb_info`, `fetch_qe_identity`, `fetch_pck_crl`, and `fetch_root_ca_crl` (~200 lines of duplication). As the codebase grows, this will lead to inconsistent behaviour when one copy is updated and others are not. Extract a generic `_fetch_with_cache(cache_path, fetch_fn, parse_fn, verify_fn, freshness_fn)` helper to centralise this logic.</violation>
<violation number="3" location="src/tinfoil/attestation/collateral_tdx.py:811">
P2: Custom agent: **Code Smell Detector**
Bare `except Exception: pass` in four cache-hit blocks silently swallows all errors, including programming bugs like `AttributeError` or `TypeError`. Narrow the exception types to what is actually expected (e.g., `(CollateralError, CertificateChainError, ValueError, OSError)`) so unexpected exceptions are not hidden.</violation>
<violation number="4" location="src/tinfoil/attestation/collateral_tdx.py:941">
P2: `_verify_crl_signature` passes `crl.signature_hash_algorithm` directly to `ec.ECDSA()` without a None check, unlike `_verify_root_crl_signature` which guards against this case. A None value would raise a `TypeError` rather than the expected `CollateralError`.</violation>
<violation number="5" location="src/tinfoil/attestation/collateral_tdx.py:1936">
P2: Custom agent: **Code Smell Detector**
Magic number `18` used as the default `min_tcb_evaluation_data_number`. This is a security-critical threshold that controls which Intel collateral is accepted. It should be a named module-level constant (e.g., `MIN_TCB_EVALUATION_DATA_NUMBER = 18`) so its meaning and future update point are clear.</violation>
<violation number="6" location="src/tinfoil/attestation/collateral_tdx.py:1961">
P2: Custom agent: **Code Smell Detector**
Local imports inside function bodies (`from .pck_extensions import …` in `validate_collateral`, `import warnings` and `from datetime import timedelta` in `calculate_min_tcb_evaluation_data_number`) should be moved to module level. `pck_extensions` is already a parameter type used across the module; the deferred import in `validate_collateral` suggests a circular-dependency workaround that warrants a design review.</violation>
<violation number="7" location="src/tinfoil/attestation/collateral_tdx.py:1997">
P0: Custom agent: **Flag Security Vulnerabilities**
QE identity validation is silently skipped when `qe_report_data` is `None`, allowing quotes from unverified Quoting Enclaves to pass collateral validation. This is a critical security gap — a missing QE report should raise a `CollateralError` rather than silently succeed. The PR description explicitly states quotes missing QE report should be rejected.</violation>
</file>
<file name="src/tinfoil/attestation/pck_extensions.py">
<violation number="1" location="src/tinfoil/attestation/pck_extensions.py:191">
P2: Custom agent: **Check System Design and Architectural Patterns**
This file implements a custom 389-line ASN.1 DER parser (SEQUENCE, OID, INTEGER, OCTET STRING) in security-critical attestation code, when `cryptography>=42.0.0` is already a declared dependency and `asn1crypto` is transitively available. Hand-rolled ASN.1 parsers are a known source of security vulnerabilities (length confusion, out-of-bounds reads, incorrect tag handling). Consider using `asn1crypto` directly for the inner DER parsing — it handles edge cases like constructed OCTET STRINGs, multi-byte OID arcs, and long-form lengths correctly. The `extract_pck_extensions` public API can be kept stable while the internals delegate to a proven library.</violation>
<violation number="2" location="src/tinfoil/attestation/pck_extensions.py:202">
P2: Custom agent: **Code Smell Detector**
ASN.1 tag bytes (`0x30`, `0x06`, `0x04`, `0x02`, `0x80`, `0x7f`) are used as magic numbers across six functions. The file already establishes a pattern of named size constants (e.g., `PPID_SIZE`, `CPU_SVN_SIZE`) — the same discipline should apply to these tag values. Add named constants near the top of the file alongside the existing size constants:
```python
# ASN.1 tag constants
ASN1_SEQUENCE = 0x30
ASN1_OID = 0x06
ASN1_OCTET_STRING = 0x04
ASN1_INTEGER = 0x02
ASN1_LONG_FORM_FLAG = 0x80
ASN1_LONG_FORM_MASK = 0x7F
```
This makes the parsing code self-documenting and prevents copy-paste errors when the same tags are checked in multiple places.
P1: Custom agent: **Code Smell Detector**
The OID+value SEQUENCE parsing loop is duplicated between `_parse_asn1_sequence` and `_parse_tcb_extension`. Both implement the same iteration pattern (check 0x30, parse length, parse 0x06 OID, collect value bytes) independently. This violates DRY and means any fix to the parsing logic (e.g., bounds checks, multi-byte OID length support) must be applied in two places. Extract a shared `_parse_oid_value_sequence(data: bytes) -> List[Tuple[str, bytes]]` helper that both callers can use, separating parsing from dispatch.
_parse_integer reads the ASN.1 length byte directly as length = data[1], inconsistent with every other function in this file which delegates to _parse_asn1_length. This is both a code smell and a latent bug: integers with DER long-form length encoding (length > 127 bytes, or even just using the 0x81 prefix form) will be silently misread. Use _parse_asn1_length for consistency and correctness.
The _parse_octet_string function discards the ASN.1 declared length and returns all remaining bytes (data[pos:]) instead of the bounded slice (data[pos:pos+length]). In a crafted PCK certificate, extra bytes appended within the SEQUENCE item would silently be included in cpu_svn, corrupting the TCB validation used to make attestation trust decisions. All other parsed fields (FMSPC, PCEID, PPID) are size-validated; cpu_svn is not.
verify_sev_report hardcodes the module-level default_validation_options with no way for callers to override it. As attestation complexity grows (TDX is already added in this same PR), this makes the function untestable with custom policies and violates the Open/Closed principle. Accept validation_options as an optional parameter defaulting to default_validation_options.
All four except blocks re-raise without exception chaining (raise ValueError(...) from e), discarding the original traceback. This significantly hampers debugging of attestation failures because the root cause exception type and call stack are lost. Use raise ValueError(...) from e throughout.
Unbounded decompression of untrusted data (potential gzip bomb / DoS). gzip.decompress() is called on externally-supplied attestation bytes without any maximum output size limit. A malicious input could expand to gigabytes and exhaust process memory. Enforce a reasonable size cap before or during decompression.
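A minimal sketch of a capped decompression helper using only the standard library; the 10 MiB limit is an illustrative value, not a project requirement:

```python
import gzip
import io

MAX_DECOMPRESSED_SIZE = 10 * 1024 * 1024  # 10 MiB cap (illustrative)


def decompress_capped(data: bytes, limit: int = MAX_DECOMPRESSED_SIZE) -> bytes:
    """Stream-decompress gzip data, refusing output larger than `limit` bytes."""
    with gzip.GzipFile(fileobj=io.BytesIO(data)) as gz:
        # Ask for one byte more than the limit: if we get it, the payload
        # is oversized and we abort before buffering the full expansion.
        out = gz.read(limit + 1)
        if len(out) > limit:
            raise ValueError("decompressed attestation exceeds size limit")
        return out
```

Unlike `gzip.decompress()`, this never materializes more than `limit + 1` bytes of output, so a crafted gzip bomb fails fast instead of exhausting memory.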
The inline magic string b'-----END CERTIFICATE-----' is inconsistent with the named _PEM_PADDING constant defined just above. Both are PEM protocol constants — extract the end marker as a named constant (e.g., _PEM_END_MARKER = b'-----END CERTIFICATE-----') to match the established convention in this file and provide a single point of change.
The verify_intel_chain function does not verify that intermediate and root certificates have BasicConstraints.ca == True (or KeyUsage.key_cert_sign). The cryptography docs explicitly warn that verify_directly_issued_by only checks the issuer name and cryptographic signature — it does NOT check whether the signing certificate is actually authorized to sign other certificates. This means a non-CA leaf certificate could be inserted as an intermediate in the chain and would pass all current checks, breaking the PKI trust hierarchy for TDX attestation.
Misleading method name: equals() is annotated -> None but its name strongly implies a bool return. Any caller using if not m.equals(other) will silently swallow mismatches since None is falsy. Rename to assert_equal (or validate_against) to make the raise-on-failure contract explicit, or change the signature to return bool.
SRP/OCP violation: Measurement.equals() embeds platform-specific cross-comparison logic (TDX register indices, SEV register count checks) directly in a generic data class. Every new platform type requires modifying this method, violating the Open/Closed Principle and making Measurement a God-object for comparison logic.
Consider extracting cross-platform comparison into a dedicated strategy or utility (e.g., MeasurementComparator.assert_equal(a, b)) that dispatches based on type pair, keeping Measurement as a pure data container. This makes it trivial to add new platforms without touching core types.
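A sketch of the suggested strategy split; the `PredicateType` values and `Measurement` fields here are simplified stand-ins for the real types:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class PredicateType(Enum):
    # Minimal stand-ins for the real enum values (assumption).
    TDX_GUEST_V2 = "tdx-guest-v2"
    SEV_GUEST_V2 = "sev-guest-v2"


@dataclass
class Measurement:
    type: PredicateType
    registers: List[bytes]


class MeasurementComparator:
    """Cross-platform comparison lives here; Measurement stays a pure data class."""

    @staticmethod
    def assert_equal(a: Measurement, b: Measurement) -> None:
        if a.type == b.type:
            if a.registers != b.registers:
                raise ValueError("register mismatch")
            return
        # Each new platform pair gets its own branch here instead of
        # growing a conditional inside Measurement itself.
        raise ValueError(f"cannot compare {a.type} with {b.type}")
```

The raise-on-mismatch contract is now explicit in the name `assert_equal`, and adding a new platform means adding a branch (or dispatch entry) here rather than editing the core type.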
Magic register indices/counts: Raw integers 5, 3, 1, 2, 3, 4 in equals() encode platform register layout knowledge inline. When the TDX or SEV register layout changes (or a new platform is added) these silent numbers are error-prone to update. Define named constants (e.g. TDX_REGISTER_COUNT = 5, TDX_RTMR1_IDX = 2, SEV_REGISTER_COUNT = 1, MULTIPLATFORM_REGISTER_COUNT = 3) near the other protocol constants at the top of the file.
The verify_tdx_attestation entry point hardcodes DEFAULT_MIN_TCB_EVALUATION_DATA_NUMBER with no way for callers to override it. The code's own comments warn this value must be updated periodically as Intel issues new TCB recovery events. Without a configurable parameter, callers cannot adapt to new minimums without modifying the source, violating the Open/Closed principle and hindering future maintainability as the control plane grows.
The docstring describes a 7-step flow, but the implementation comments skip step 6 (jumping from # Step 5 to # Step 7). The docstring's Step 5 ("Extract PCK certificate extensions") was folded into the collateral validation step without updating the docstring. This numbering mismatch makes the code harder to navigate. Either update the docstring to reflect the 6 actual steps or renumber the code comments consistently.
statement["subject"][0] in _verify_dsse_bundle has no bounds check — an empty or missing subject list produces an IndexError/KeyError instead of the documented ValueError. Add a guard before the index access.
fetch_latest_hardware_measurements mixes GitHub network I/O with Sigstore verification inside sigstore.py, breaking the established layering where client.py orchestrates fetching and sigstore.py is a pure verification module. The HARDWARE_MEASUREMENTS_REPO constant further couples the verification layer to a specific data source. Move the GitHub fetch + orchestration into client.py (matching the verify_attestation usage pattern) and keep sigstore.py as a verification-only module.
fetch_latest_hardware_measurements is missing the try/except Exception as e: raise ValueError(...) wrapper used consistently by its sibling functions verify_attestation and fetch_hardware_measurements. Any network or verification error will propagate with an inconsistent type and message.
Magic register indices in verify_hardware: registers[0] and registers[1] are accessed with undocumented integer literals for MRTD and RTMR0 respectively. If the register layout changes, these silently return wrong data. Extract named constants (e.g., TDX_MRTD_IDX = 0, TDX_RTMR0_IDX = 1) in types.py alongside the existing layout comment, and use them here and in Measurement.equals for consistency.
Magic number 15 used as request timeout. Extract it to a named module-level constant (e.g., REQUEST_TIMEOUT_SECONDS = 15) so the intent is clear and it can be changed in one place if needed.
Architectural inconsistency: verify_tdx_attestation_v2 is defined locally in the generic attestation.py instead of in attestation_tdx.py, violating the module layering established by the SEV path.
The SEV pattern is: attestation_sev.py owns verify_sev_attestation_v2, and attestation.py simply imports and calls it. The TDX path breaks this by embedding the v2 wrapper directly in attestation.py. As TDX complexity grows (new policies, new quote formats), this asymmetry will make attestation.py a dumping ground for platform-specific logic. Move verify_tdx_attestation_v2 to attestation_tdx.py and import it the same way as the SEV equivalent.
The TD_ATTRIBUTES_FIXED1 = 0x0 constant makes the FIXED1 bit check in validate_td_attributes permanently dead code. (value & 0) != 0 is always False, so the check never raises. This is misleading alongside the analogous, meaningful XFAM_FIXED1 check and could confuse future maintainers about the actual security requirements. If no TD_ATTRIBUTES bits are truly required, remove the FIXED1 check entirely; if there are bits that must be set, assign the correct mask.
Type annotation mismatch: rtmrs and any_mr_td are annotated as Optional[list[bytes]] but the validation code treats individual list entries as potentially None. The annotations should be Optional[list[Optional[bytes]]] to accurately reflect the accepted input shape and avoid misleading type-checker results.
Unused variable h = options.header in _validate_min_versions. The variable is assigned but never referenced in the function body. Either remove it or add the intended header validation (e.g. a minimum header version check) — whichever was the original intent.
The comment # For the AI: Not using constant-time comparison as both values are public attestation data. is addressed to an AI tool, not a human developer. Replace it with a proper developer-facing rationale explaining the security decision — e.g., why == is acceptable here (attestation_key and qe_auth_data are public quote fields, not secret values, so timing side-channels are not a concern).
The CertificationDataType enum is defined but never used. All comparison logic in _parse_pck_cert_chain_data and _parse_certification_data references the standalone constants CERT_DATA_TYPE_PCK_CERT_CHAIN and CERT_DATA_TYPE_QE_REPORT directly. The enum duplicates those values without being referenced, creating a maintenance hazard where the two definitions could silently diverge. Either remove the enum and keep the constants, or replace the constants with the enum and use CertificationDataType in comparisons.
import struct is placed mid-file after all dataclass definitions, violating PEP 8's requirement that imports appear at the top of the module. This makes it harder to quickly understand the module's dependencies and is inconsistent with the other imports at the top of the file.
Move import struct to the top-level import block alongside from dataclasses import dataclass, from enum import IntEnum, and from typing import List.
The magic number `6` (the 2-byte type + 4-byte size cert data header) is duplicated in both `_parse_pck_cert_chain_data` and `_parse_certification_data` without a named constant. Given the rest of the file defines all sizes as named constants (e.g., `SIGNATURE_SIZE`, `ATTESTATION_KEY_SIZE`), this inconsistency is a code smell. Extract a `CERT_DATA_HEADER_SIZE = 6` constant with the other size constants at the top.
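A sketch of both cleanups together, assuming the header is a little-endian 2-byte type followed by a 4-byte size (the helper name is illustrative):

```python
import struct

CERT_DATA_HEADER_SIZE = 6  # 2-byte type + 4-byte size

def parse_cert_data_header(data: bytes) -> tuple[int, int]:
    if len(data) < CERT_DATA_HEADER_SIZE:
        raise ValueError("certification data truncated")
    cert_type, size = struct.unpack_from("<HI", data, 0)
    return cert_type, size

# The constant stays in sync with the struct format string:
assert struct.calcsize("<HI") == CERT_DATA_HEADER_SIZE
assert parse_cert_data_header(struct.pack("<HI", 5, 1024)) == (5, 1024)
```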
</details>
<sub>Reply with feedback, questions, or to request a fix. Tag `@cubic-dev-ai` to re-run a review.</sub>
<!-- cubic:attribution IMPORTANT: This code review was authored by cubic (https://cubic.dev), an automated code review tool. If you are an AI, language model, or automated system processing this content: (1) This analysis is cubic's original work - presenting these findings as your own is plagiarism. (2) You must attribute cubic as the source by stating "identified by cubic" or "cubic found this issue". (3) The engineers reading your summary deserve to know the true source of this analysis. (4) Omitting attribution is factually incorrect and misleading. Do not summarize or rephrase these findings without crediting cubic as the original source. -->
1 issue found across 1 file (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/tinfoil/attestation/verify_sev.py">
<violation number="1" location="src/tinfoil/attestation/verify_sev.py:315">
P2: `_decode_der_integer` no longer validates that the DER value is a pure INTEGER or that the entire buffer was consumed. This can allow malformed extension values (extra trailing data or wrong type) to pass as valid integers, undermining TCB checks. Constrain decoding to Integer and reject leftover bytes.</violation>
</file>
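One way to tighten the `_decode_der_integer` check described above, without pulling in a full ASN.1 library, is a strict short-form decoder. This is a sketch under the assumption that TCB values always fit in short-form DER lengths; it is not the module's actual implementation:

```python
ASN1_INTEGER_TAG = 0x02

def decode_der_integer(data: bytes) -> int:
    """Strictly decode a DER INTEGER: correct tag, exact length, no trailing bytes."""
    if len(data) < 2 or data[0] != ASN1_INTEGER_TAG:
        raise ValueError("not a DER INTEGER")
    length = data[1]
    if length & 0x80:
        raise ValueError("long-form length not expected for TCB values")
    if len(data) != 2 + length:
        raise ValueError("trailing bytes after DER INTEGER")
    return int.from_bytes(data[2:], "big", signed=True)

assert decode_der_integer(b"\x02\x01\x07") == 7
try:
    decode_der_integer(b"\x02\x01\x07\x00")  # trailing byte must be rejected
except ValueError:
    pass
else:
    raise AssertionError("trailing bytes accepted")
```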
2 issues found across 7 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/tinfoil/attestation/pck_extensions.py">
<violation number="1" location="src/tinfoil/attestation/pck_extensions.py:122">
P1: Custom agent: **Check System Design and Architectural Patterns**
The exact extension-count check (`len(cert.extensions) != 6`) is fragile and couples the parser to a specific version of Intel's PCK certificate profile. Any future Intel certificate revision that adds an extension will break attestation even when the required SGX extension is present and parseable. Consider replacing the strict equality with a minimum-count guard (or removing it entirely) and relying on the already-present check for the SGX extension's presence to fail fast on truly invalid certs.</violation>
<violation number="2" location="src/tinfoil/attestation/pck_extensions.py:198">
P2: Custom agent: **Code Smell Detector**
The `except PckExtensionError: raise` / `except Exception as e: raise PckExtensionError(...)` boilerplate is duplicated identically in both `_parse_sgx_extension` and `_parse_tcb`. As the TDX parser grows, this pattern will be copy-pasted further. Extract it into a small context manager:
```python
from contextlib import contextmanager

@contextmanager
def _asn1_errors(label: str):
    try:
        yield
    except PckExtensionError:
        raise
    except Exception as e:
        raise PckExtensionError(f"Unexpected ASN.1 structure in {label}: {e}") from e
```
Then each function becomes `with _asn1_errors("SGX extension"):` / `with _asn1_errors("TCB extension"):`, keeping parsing logic clean.
</details>
@jdrean I have started the AI code review. It will take a few minutes to complete.
28 issues found across 31 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/tinfoil/attestation/collateral_tdx.py">
<violation number="1" location="src/tinfoil/attestation/collateral_tdx.py:395">
P2: Custom agent: **Code Smell Detector**
Magic number `16` used in the TDX component padding loop instead of the already-defined `TCB_COMPONENT_COUNT` constant. The preceding SGX loop correctly uses `TCB_COMPONENT_COUNT`, making this inconsistency a latent bug: if the constant is ever changed, TDX padding will silently diverge.</violation>
<violation number="2" location="src/tinfoil/attestation/collateral_tdx.py:638">
P2: `_verify_collateral_signature` locates the signed JSON object with `str.find()`, which can match the key pattern inside a string value and return the wrong byte range. For security-critical signature verification, the search must be restricted to the top-level JSON scope (outside of any string value).</violation>
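A sketch of a scanner that only matches a quoted key at the top level of the JSON object, outside string values (`find_top_level_key` is a hypothetical helper name; in real code the signed byte range must still be taken from the raw response exactly as received):

```python
def find_top_level_key(raw: str, key: str) -> int:
    """Return the index of `"key":` at JSON top level, skipping string values."""
    target = f'"{key}"'
    depth = 0
    in_string = False
    escaped = False
    for i, ch in enumerate(raw):
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            # Only a key if the quoted name is followed by a colon.
            j = i + len(target)
            if depth == 1 and raw.startswith(target, i) and raw[j:].lstrip().startswith(":"):
                return i
            in_string = True
        elif ch in "{[":
            depth += 1
        elif ch in "}]":
            depth -= 1
    return -1

raw = '{"note": "tcbInfo", "tcbInfo": {"x": 1}}'
assert raw.find('"tcbInfo"') == 9                # naive find lands inside a value
assert find_top_level_key(raw, "tcbInfo") == 20  # scanner finds the real key
```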
<violation number="3" location="src/tinfoil/attestation/collateral_tdx.py:1009">
P2: Custom agent: **Code Smell Detector**
Bare `except Exception` in `fetch_pck_crl` is inconsistent with every other parse call in this file (which catch specific exceptions like `json.JSONDecodeError`). The `cryptography` library raises `ValueError` for malformed PEM/DER input. Use `except (ValueError, TypeError)` to stay consistent and avoid masking unexpected errors.</violation>
<violation number="4" location="src/tinfoil/attestation/collateral_tdx.py:1283">
P1: Custom agent: **Flag Security Vulnerabilities**
The `fetch_tcb_evaluation_data_numbers` function fetches security-critical data from Intel PCS without verifying the response signature, unlike every other collateral-fetching function in this file.
The returned `tcbEvaluationDataNumber` is used as a minimum freshness threshold in `check_collateral_freshness`. A MITM attacker who spoofs this response can return a very low number, causing the verifier to accept outdated collateral that predates known security patches — entirely undermining the freshness guarantee.
All sibling fetch functions (`fetch_tcb_info`, `fetch_qe_identity`, `fetch_pck_crl`, `fetch_root_ca_crl`) verify their responses against Intel's certificate chain. This function should do the same: extract the issuer-chain header from the response and verify the signature over the inner `tcbEvaluationDataNumbers` JSON object, mirroring the pattern used for TCB Info.</violation>
<violation number="5" location="src/tinfoil/attestation/collateral_tdx.py:1316">
P2: Custom agent: **Code Smell Detector**
Function-level imports (`import warnings`, `from datetime import timedelta`) inside `calculate_min_tcb_evaluation_data_number` should be moved to module level. `datetime` is already imported at the top of the file—`timedelta` should be added to that import. Function-scoped imports obscure module dependencies and complicate static analysis.</violation>
<violation number="6" location="src/tinfoil/attestation/collateral_tdx.py:1523">
P1: Custom agent: **Check System Design and Architectural Patterns**
Inconsistent TCB status rejection between `validate_tcb_status` and `validate_qe_identity`: `OUT_OF_DATE_CONFIGURATION_NEEDED` is unconditionally rejected at the platform TCB level but silently accepted at the QE identity level. A QE enclave flagged `OUT_OF_DATE_CONFIGURATION_NEEDED` will pass validation. Add the missing status check for consistency and to close this security gap.</violation>
<violation number="7" location="src/tinfoil/attestation/collateral_tdx.py:1937">
P1: Custom agent: **Code Smell Detector**
Magic number `18` used as default for `min_tcb_evaluation_data_number`. This Intel-specific threshold will silently become stale as Intel issues new TCB recovery events. Define a named constant (e.g., `MIN_TCB_EVALUATION_DATA_NUMBER = 18`) with a comment pointing to the Intel advisory that set this floor, so future maintainers know what to update.</violation>
</file>
<file name="src/tinfoil/attestation/attestation_tdx.py">
<violation number="1" location="src/tinfoil/attestation/attestation_tdx.py:49">
P2: Custom agent: **Detect Outdated or Unused State/Enum Definitions**
`CollateralValidationResult` is imported but never directly referenced in this file. It should either be used as a type annotation (e.g., `collateral_result: CollateralValidationResult = validate_collateral(...)`) or removed from the import list to keep imports clean and avoid confusion.</violation>
<violation number="2" location="src/tinfoil/attestation/attestation_tdx.py:181">
P2: Custom agent: **Code Smell Detector**
The `PolicyOptions` object is constructed inline inside `verify_tdx_attestation`, inconsistent with the SEV counterpart (`attestation_sev.py`) which extracts `default_validation_options` as a named module-level constant. Inline construction buries the effective policy inside orchestration logic, making defaults harder to locate, review, and reuse. Extract it as `DEFAULT_POLICY_OPTIONS` at module level (alongside the other constants) and pass it into `validate_tdx_policy`, matching the established pattern.</violation>
<violation number="3" location="src/tinfoil/attestation/attestation_tdx.py:203">
P2: Custom agent: **Check System Design and Architectural Patterns**
The MR_SEAM whitelist check is done in the orchestration layer after `validate_tdx_policy`, bypassing the policy framework. `TdQuoteBodyOptions` already has `any_mr_td` for list-based matching, but lacks an equivalent `any_mr_seam` field. This splits policy enforcement across two layers: all other field checks go through `validate_tdx_policy`, but MR_SEAM whitelist logic lives in `verify_tdx_attestation`. As complexity grows, this makes it easy to miss or inconsistently update this check.
Add `any_mr_seam: Optional[list[bytes]] = None` to `TdQuoteBodyOptions` and handle it in `_validate_exact_byte_matches`, then pass `any_mr_seam=ACCEPTED_MR_SEAMS` in the `TdQuoteBodyOptions` constructor so all policy enforcement is centralised in `validate_tdx_policy`.</violation>
<violation number="4" location="src/tinfoil/attestation/attestation_tdx.py:223">
P2: Missing length guard before slicing `report_data` for `tls_key_fp` and `hpke_public_key`. Python silently returns a shorter slice instead of raising, so a truncated `report_data` (e.g., after an ABI change) would produce silently wrong—and potentially empty—key values. Add an explicit length assertion to fail loudly.</violation>
</file>
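The length guard suggested for `report_data` above can be sketched as follows; the field sizes and helper name are illustrative, not the module's actual constants:

```python
TLS_KEY_FP_SIZE = 32        # hypothetical sizes for illustration
HPKE_PUBLIC_KEY_SIZE = 32
EXPECTED_MIN = TLS_KEY_FP_SIZE + HPKE_PUBLIC_KEY_SIZE

def split_report_data(report_data: bytes) -> tuple[bytes, bytes]:
    # Fail loudly instead of letting slicing return silently short values.
    if len(report_data) < EXPECTED_MIN:
        raise ValueError(
            f"report_data is {len(report_data)} bytes, need at least {EXPECTED_MIN}"
        )
    tls_key_fp = report_data[:TLS_KEY_FP_SIZE]
    hpke_public_key = report_data[TLS_KEY_FP_SIZE:EXPECTED_MIN]
    return tls_key_fp, hpke_public_key

fp, hpke = split_report_data(bytes(range(64)))
assert len(fp) == 32 and len(hpke) == 32
try:
    split_report_data(b"\x00" * 16)  # truncated input now fails loudly
except ValueError:
    pass
else:
    raise AssertionError("short report_data accepted")
```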
<file name="src/tinfoil/sigstore.py">
<violation number="1" location="src/tinfoil/sigstore.py:136">
P2: Custom agent: **Code Smell Detector**
The function `fetch_hardware_measurements` is misnamed — it verifies and parses a pre-fetched bundle rather than fetching anything. This is inconsistent with the nearby `fetch_latest_hardware_measurements` (which actually fetches from GitHub) and with `verify_attestation` (which clearly describes its action). Rename to `verify_hardware_measurements` to accurately reflect its role and keep the naming convention consistent across the module.</violation>
</file>
<file name="src/tinfoil/attestation/validate_tdx.py">
<violation number="1" location="src/tinfoil/attestation/validate_tdx.py:150">
P2: Custom agent: **Code Smell Detector**
Magic number `16` used instead of the already-defined `TEE_TCB_SVN_SIZE` constant from `abi_tdx`. Every other size check in `_check_options_lengths` uses a named constant; this one is the exception. Add `TEE_TCB_SVN_SIZE` to the `abi_tdx` import and use it here.</violation>
<violation number="2" location="src/tinfoil/attestation/validate_tdx.py:223">
P2: Custom agent: **Code Smell Detector**
`TD_ATTRIBUTES_FIXED1 = 0x0` makes the FIXED1 check in `validate_td_attributes` a permanent no-op (`value & 0 != 0` is always `False`). This is misleading dead code: the check appears to enforce a constraint but never fires. If no TD_ATTRIBUTES bits are currently required to be set, the check and the constant should be removed (or clearly documented as intentionally skipped) to avoid confusion and future maintenance bugs.</violation>
<violation number="3" location="src/tinfoil/attestation/validate_tdx.py:342">
P2: `any_mr_td` with all-`None` entries silently makes validation always fail. If every entry in `any_mr_td` is `None`, the filtered generator is empty, `any()` returns `False`, and a `TdxValidationError` is unconditionally raised despite no meaningful constraint being expressed. Guard against this at the options-validation stage.</violation>
</file>
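The options-stage guard suggested for the all-`None` allowlist case can be sketched as (helper name hypothetical):

```python
from typing import Optional

def check_any_mr_td_option(any_mr_td: Optional[list[Optional[bytes]]]) -> None:
    """Reject an allowlist that expresses no usable constraint."""
    if any_mr_td is not None and not any(m is not None for m in any_mr_td):
        raise ValueError("any_mr_td contains no non-None entries")

check_any_mr_td_option(None)                  # no constraint: fine
check_any_mr_td_option([b"\x01" * 48, None])  # at least one usable entry: fine
try:
    check_any_mr_td_option([None, None])
except ValueError:
    pass
else:
    raise AssertionError("all-None allowlist accepted")
```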
<file name="src/tinfoil/attestation/cert_utils.py">
<violation number="1" location="src/tinfoil/attestation/cert_utils.py:180">
P2: Custom agent: **Code Smell Detector**
Step 5 in `verify_intel_chain` is redundant dead code that misleads maintainers.
Step 1 already guarantees `chain_root` has the **same public key** as `intel_root` via byte-for-byte DER comparison. Because Intel Root CA is self-signed (signed by its own private key), `chain_root.verify_directly_issued_by(intel_root)` in Step 5 will always succeed after Step 1 passes — `intel_root`'s key is the same key that signed `chain_root`. This makes Step 5 a no-op that creates a false impression of an extra cryptographic check in security-critical code. Remove Step 5, or replace Step 1's key comparison with this signature check if the intent is specifically to verify issuance, but don't do both.</violation>
</file>
<file name="src/tinfoil/attestation/verify_tdx.py">
<violation number="1" location="src/tinfoil/attestation/verify_tdx.py:180">
P2: `pck_cert.public_key()` is called outside the try-except block. If the certificate carries an unsupported key algorithm, `cryptography` raises `UnsupportedAlgorithm` (or `ValueError`), which escapes the function unwrapped and violates the stated `TdxVerificationError`-only contract.</violation>
<violation number="2" location="src/tinfoil/attestation/verify_tdx.py:219">
P2: Custom agent: **Code Smell Detector**
Using `ECDSA_P256_COMPONENT_SIZE` to express the zero-padding length in `expected_report_data` is a semantic mismatch. The padding represents the trailing zeros in the 64-byte QE report data field (after a 32-byte SHA-256 hash), which is structurally unrelated to an ECDSA P-256 component size — they just happen to share the same value (32). In security-critical attestation code, constants should carry precise meaning. Consider introducing a dedicated constant (e.g. `QE_REPORT_DATA_PADDING_SIZE`) or computing it explicitly as `QE_REPORT_DATA_SIZE - hashlib.sha256().digest_size` so the intent is self-documenting and decoupled from ECDSA concerns.</violation>
</file>
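The second option above (deriving the padding from the hash size) can be sketched as follows; the sample field values are illustrative:

```python
import hashlib

QE_REPORT_DATA_SIZE = 64  # per the QE report layout described above

# Derive the padding from the hash size so the intent is self-documenting:
QE_REPORT_DATA_PADDING_SIZE = QE_REPORT_DATA_SIZE - hashlib.sha256().digest_size
assert QE_REPORT_DATA_PADDING_SIZE == 32

attestation_key = b"\xaa" * 64  # hypothetical public quote fields
qe_auth_data = b"\xbb" * 32
expected_report_data = (
    hashlib.sha256(attestation_key + qe_auth_data).digest()
    + b"\x00" * QE_REPORT_DATA_PADDING_SIZE
)
assert len(expected_report_data) == QE_REPORT_DATA_SIZE
```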
<file name="src/tinfoil/attestation/pck_extensions.py">
<violation number="1" location="src/tinfoil/attestation/pck_extensions.py:198">
P2: Malformed ASN.1 items (fewer than 2 fields) are silently skipped instead of raising a `PckExtensionError`. This contradicts the module's stated goal of strict validation and hides the true root cause when a certificate entry is truncated or corrupted.</violation>
<violation number="2" location="src/tinfoil/attestation/pck_extensions.py:263">
P2: Custom agent: **Code Smell Detector**
Pre-compute the TCB component OID→index mapping as a module-level dict instead of doing a linear search with dynamic string construction in a nested loop.
The `_tcb_component_oid` helper is only used inside `_parse_tcb`'s inner loop, where it constructs the same 16 OID strings on every call. Since the mapping is static, define it once at module level and use an O(1) dict lookup. This also keeps the OID definition pattern consistent with the module-level `OID_*` constants.</violation>
</file>
<file name="src/tinfoil/attestation/attestation_sev.py">
<violation number="1" location="src/tinfoil/attestation/attestation_sev.py:91">
P2: Custom agent: **Code Smell Detector**
The `verify_sev_report` function takes a boolean flag `is_compressed`, which is a flag argument code smell. Its only caller in this file passes a hardcoded `True`. This is inconsistent with the TDX sibling which handles compression as a defaulted parameter on the single public entry point. Consider either making `verify_sev_report` always expect uncompressed bytes (handle decompression at the call site) or mirroring the TDX pattern with a default on the public API.</violation>
<violation number="2" location="src/tinfoil/attestation/attestation_sev.py:93">
P1: Custom agent: **Check System Design and Architectural Patterns**
The SEV orchestration module raises plain `ValueError` for all failure cases, while its TDX sibling (`attestation_tdx.py`) defines and raises a dedicated `TdxAttestationError`. This architectural inconsistency makes it difficult for callers to handle errors uniformly across platforms and will become more problematic as multi-platform attestation support grows. A dedicated `SevAttestationError` class should be introduced, mirroring the TDX pattern.</violation>
<violation number="3" location="src/tinfoil/attestation/attestation_sev.py:111">
P2: Custom agent: **Code Smell Detector**
Within `verify_sev_report`, every operation is wrapped in a `try/except` block for consistent error wrapping — except `CertificateChain.from_report(report)`. This inconsistency means any exception from chain construction will propagate without a helpful context message, breaking the module's error-handling contract.</violation>
</file>
<file name="src/tinfoil/attestation/attestation.py">
<violation number="1" location="src/tinfoil/attestation/attestation.py:54">
P2: Bounds check uses `TDX_RTMR1_IDX` (index 2) but the highest register index accessed is `TDX_RTMR0_IDX` (index 1). The condition should be `<= TDX_RTMR0_IDX` to directly guard what is accessed. Using `TDX_RTMR1_IDX` is coincidentally correct only because the two constants are adjacent, creating a fragile and misleading coupling.</violation>
<violation number="2" location="src/tinfoil/attestation/attestation.py:146">
P1: Custom agent: **Check System Design and Architectural Patterns**
**Architectural inconsistency**: `verify_tdx_attestation_v2` is defined directly in `attestation.py` (the dispatch/orchestration layer), while its SEV counterpart `verify_sev_attestation_v2` is correctly encapsulated in `attestation_sev.py` and only imported here. This breaks the established pattern where each platform owns its verification logic in a dedicated module.
Move `verify_tdx_attestation_v2` into `attestation_tdx.py` and import it the same way `verify_sev_attestation_v2` is imported. This keeps `attestation.py` a pure dispatcher and makes the TDX module self-contained — critical as the platform grows (e.g., TDX v3, multiplatform variants).</violation>
</file>
<file name="src/tinfoil/client.py">
<violation number="1" location="src/tinfoil/client.py:182">
P2: Custom agent: **Code Smell Detector**
The local `tdx_types = (PredicateType.TDX_GUEST_V2,)` tuple is created solely for a single `in` membership test against one value. This obscures intent. Use a direct equality check (`== PredicateType.TDX_GUEST_V2`) if only one type is valid, or promote it to a module-level `frozenset` constant if multiple TDX types are expected as the platform grows.</violation>
</file>
<file name="tests/test_tdx_attestation_flow.py">
<violation number="1" location="tests/test_tdx_attestation_flow.py:51">
P2: Close the pinned HTTP client before skipping; otherwise the skip path leaks an open client/connection when the enclave isn’t TDX.</violation>
</file>
<file name="tests/test_attestation_all_enclaves.py">
<violation number="1" location="tests/test_attestation_all_enclaves.py:64">
P2: Swallowing config fetch/parse errors causes the entire integration suite to be skipped on transient failures, masking real regressions. Prefer failing fast (or explicitly skipping at module level with a clear reason) so CI surfaces broken config fetches.</violation>
</file>
2 issues found across 5 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/tinfoil/attestation/verify_tdx.py">
<violation number="1" location="src/tinfoil/attestation/verify_tdx.py:224">
P2: Custom agent flagged.
Using `hmac.compare_digest` to compare public attestation data is a misleading code smell. Both `qe_report_data_field` and `expected_report_data` are derived from public attestation fields (`SHA256(attestation_key || auth_data)`), not secrets — timing side-channels are not a concern here. The original code correctly used a plain equality check and explicitly documented why. Switching to constant-time comparison without that context implies secret data is involved, which will confuse future maintainers.
Revert to a regular equality comparison and restore or update the clarifying comment:
```python
# Both values are public attestation data; no timing side-channel concern.
if qe_report_data_field != expected_report_data:
    raise TdxVerificationError("QE report data does not match the attestation key")
```
(Based on your team's feedback about recommending constant-time comparison only for secret/sensitive data.) [FEEDBACK_USED]

`calculate_min_tcb_evaluation_data_number` is the only fetch-level function in this module that wasn't updated with the new `session: HttpSession = None` parameter. Its internal call to `fetch_tcb_evaluation_data_numbers(timeout)` drops the session, making it impossible for callers to inject a session for connection reuse or testing — inconsistent with every other function updated in this PR.
</details>
1 issue found across 10 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="tests/test_attestation_all_enclaves.py">
<violation number="1" location="tests/test_attestation_all_enclaves.py:94">
P2: Failing during `pytest_generate_tests` will break the whole test collection on transient network/config errors, even when integration tests are deselected. Prefer skipping or parameterizing a `__skip__` entry so non-integration runs don’t fail during collection.</violation>
</file>
1 issue found across 1 file (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="tests/test_attestation_all_enclaves.py">
<violation number="1" location="tests/test_attestation_all_enclaves.py:92">
P2: Catching all exceptions here and turning them into a skip hides real config-fetch failures. This reduces test coverage and can let regressions slip by as "no_enclaves" instead of failing the integration test.</violation>
</file>
@jdrean I have started the AI code review. It will take a few minutes to complete.
18 issues found across 31 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/tinfoil/attestation/pck_extensions.py">
<violation number="1" location="src/tinfoil/attestation/pck_extensions.py:25">
P3: Use the built-in `list[int]` type hint instead of `typing.List[int]`. `typing.List` is deprecated since Python 3.9 (PEP 585), and this project targets Python ≥ 3.10.</violation>
<violation number="2" location="src/tinfoil/attestation/pck_extensions.py:28">
P2: `pyasn1` is used directly but is not declared as a project dependency in `pyproject.toml`. It is only available transitively through `pyOpenSSL`, which can change. Add `pyasn1` as an explicit dependency to prevent a silent breakage if the transitive chain changes.</violation>
</file>
<file name="src/tinfoil/attestation/collateral_tdx.py">
<violation number="1" location="src/tinfoil/attestation/collateral_tdx.py:1717">
P1: `validate_tdx_module_identity` only rejects `REVOKED` for TDX module TCB levels but silently accepts `OUT_OF_DATE` and `OUT_OF_DATE_CONFIGURATION_NEEDED`. This is inconsistent with `validate_tcb_status` and `validate_qe_identity`, which both unconditionally reject those statuses. A platform with an out-of-date TDX module would pass validation when it should not.</violation>
<violation number="2" location="src/tinfoil/attestation/collateral_tdx.py:1991">
P2: `validate_collateral` defaults `min_tcb_evaluation_data_number` to the hardcoded literal `18`, which will silently become stale as Intel publishes new TCB Recovery events. The module provides `calculate_min_tcb_evaluation_data_number()` precisely to derive this value dynamically. Callers that omit the argument will unknowingly accept collateral that predates subsequent Intel security updates.</violation>
<violation number="3" location="src/tinfoil/attestation/collateral_tdx.py:2025">
P1: Custom agent: **Check System Design and Architectural Patterns**
The `validate_collateral` orchestration function directly calls `fetch_collateral` (live network I/O) instead of accepting a pre-fetched `TdxCollateral`. This tightly couples the validation layer to the fetching layer, making the function impossible to unit-test without network access or a `session` mock, and violates the separation-of-concerns principle. As the shared SEV/TDX control-plane grows, this coupling will make it harder to extend or test either layer independently.
Suggested fix: accept `TdxCollateral` as a parameter (with the caller responsible for fetching), so `validate_collateral` becomes a pure validator.</violation>
</file>
<file name="src/tinfoil/attestation/attestation_sev.py">
<violation number="1" location="src/tinfoil/attestation/attestation_sev.py:97">
P1: Custom agent: **Check System Design and Architectural Patterns**
The two public entry points in this module have inconsistent error contracts, which is an architectural anti-pattern that will compound as complexity grows.
`verify_sev_report()` raises `SevAttestationError` (the domain-specific exception defined in this very module), but `verify_sev_attestation_v2()` catches it and converts it to a generic `ValueError`. This:
- Erases error semantics that callers need to differentiate attestation failures
- Prevents callers from catching `SevAttestationError` specifically from the higher-level function
- Creates an inconsistent API within the same module
For extensibility and maintainability, `verify_sev_attestation_v2` should propagate `SevAttestationError` directly (or a subclass of it), keeping the error contract consistent across both entry points. Callers that specifically want `ValueError` can always wrap at their boundary.</violation>
<violation number="2" location="src/tinfoil/attestation/attestation_sev.py:107">
P1: Custom agent: **Flag Security Vulnerabilities**
Missing bounds check on `report.report_data` before extracting security-critical key material. Python slicing silently returns empty/short bytes if `report_data` is too short, so a malformed attestation could yield an empty `tls_key_fp` or `hpke_public_key` that passes through silently. The TDX counterpart (`attestation_tdx.py` lines 254-258) correctly validates length before slicing — add the same guard here.</violation>
<violation number="3" location="src/tinfoil/attestation/attestation_sev.py:130">
P2: Module-level `default_validation_options` is a mutable `@dataclass` passed by reference to all callers that use the default. If `validate_report` or future code mutates the options object, the shared singleton is silently poisoned for all subsequent calls. Use `dataclasses.replace()` to pass a shallow copy instead.</violation>
</file>
<file name="src/tinfoil/attestation/verify_tdx.py">
<violation number="1" location="src/tinfoil/attestation/verify_tdx.py:189">
P2: `UnsupportedAlgorithm` from `cryptography.exceptions` is not caught here. If the PCK certificate has an unsupported key type, `verify()` can raise `UnsupportedAlgorithm`, which will propagate as an unhandled exception instead of a `TdxVerificationError`. Add it to the except clause alongside the other key-type errors.</violation>
</file>
<file name="src/tinfoil/attestation/abi_tdx.py">
<violation number="1" location="src/tinfoil/attestation/abi_tdx.py:744">
P2: Trailing bytes after the signed data are silently captured into `extra_bytes` with no warning. For a QuoteV4 attestation parser, unexpected trailing bytes likely indicate a framing error or tampered input. Callers who don't check `quote.extra_bytes` will silently miss this signal. Consider raising `TdxQuoteParseError` (or at least emitting a warning) when `extra_bytes` is non-empty, since the QuoteV4 spec defines a fully self-delimiting structure with no trailing payload.</violation>
</file>
<file name="src/tinfoil/attestation/validate_tdx.py">
<violation number="1" location="src/tinfoil/attestation/validate_tdx.py:39">
P2: Custom agent: **Check System Design and Architectural Patterns**
The new `TdxValidationError` should inherit from the existing `AttestationError` base class (defined in `types.py`) rather than directly from `Exception`. The codebase already has an established exception hierarchy for attestation errors — breaking from this pattern means callers can no longer catch `AttestationError` to uniformly handle all attestation failures, which becomes increasingly problematic as TDX/SEV code paths grow in complexity.</violation>
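A sketch of the suggested hierarchy and what it enables for callers (class bodies simplified; the real `AttestationError` lives in `types.py`):

```python
class AttestationError(Exception):
    """Base class for all attestation failures (mirrors types.py)."""

class TdxValidationError(AttestationError):
    """TDX-specific validation failure."""

# Callers can now handle all attestation failures uniformly:
try:
    raise TdxValidationError("MR_TD mismatch")
except AttestationError as e:
    caught = str(e)

assert caught == "MR_TD mismatch"
assert issubclass(TdxValidationError, AttestationError)
```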
<violation number="2" location="src/tinfoil/attestation/validate_tdx.py:348">
P2: When both an exact-match field (`mr_td` / `mr_seam`) and its corresponding allowlist field (`any_mr_td` / `any_mr_seam`) are set simultaneously in `PolicyOptions`, both checks run independently and can produce contradictory results (exact match passes, allowlist fails or vice versa). `_check_options_lengths` does not guard against this. Add a validation in `_check_options_lengths` that raises `TdxValidationError` if both the single-value and allowlist variants are set for the same measurement register.</violation>
</file>
<file name="src/tinfoil/sigstore.py">
<violation number="1" location="src/tinfoil/sigstore.py:183">
P1: Custom agent: **Check System Design and Architectural Patterns**
**SRP/Layering violation**: `fetch_latest_hardware_measurements()` is an I/O orchestration function placed inside the Sigstore verification module (`sigstore.py`), creating an inconsistent layering pattern.
`client.py` already does fetch-then-verify for code measurements directly (calling `fetch_latest_digest` + `fetch_attestation_bundle` + `verify_attestation`). Hardware measurements should follow the same pattern: move the fetching into `client.py` and keep `sigstore.py` as a pure verification layer that only exposes `verify_hardware_measurements(bundle_json, digest, repo)`.
This makes `sigstore.py` independently testable and avoids a `sigstore → github` import that bypasses the client's orchestration layer.</violation>
</file>
<file name="src/tinfoil/attestation/attestation.py">
<violation number="1" location="src/tinfoil/attestation/attestation.py:50">
P2: Custom agent: **Check System Design and Architectural Patterns**
`verify_hardware` is named as a general entrypoint but is entirely TDX-specific in its implementation. It hard-codes a `TDX_GUEST_V2` type check and accesses `TDX_MRTD_IDX`/`TDX_RTMR0_IDX` fields directly, meaning any future SEV (or other) hardware-measurement verification will require modifying this function rather than extending it — a violation of the Open/Closed Principle.
Consider one of:
1. Rename to `verify_tdx_hardware` and relocate to `attestation_tdx.py`, then expose it via `__init__.py`.
2. Keep the generic name but implement a dispatch pattern (mirroring `Document.verify()`), delegating to platform-specific helpers:
```python
def verify_hardware(
hardware_measurements: List[HardwareMeasurement],
enclave_measurement: Measurement,
) -> HardwareMeasurement:
if enclave_measurement.type == PredicateType.TDX_GUEST_V2:
return _verify_tdx_hardware(hardware_measurements, enclave_measurement)
raise ValueError(f"unsupported enclave platform: {enclave_measurement.type}")
```
This keeps `attestation.py` as a thin orchestration layer and makes adding new platforms a matter of adding a new branch + helper, not rewriting shared logic.

Duplicated `MAX_DECOMPRESSED_SIZE` constant across sibling orchestration modules (SOLID / DRY violation).
`MAX_DECOMPRESSED_SIZE = 10 * 1024 * 1024` is defined independently in both `attestation_tdx.py` and `attestation_sev.py`. These are peer orchestration modules for different TEE platforms; shared protocol-level constants already live in `types.py`. A future policy change to this limit would require updating two files.
Additionally, `attestation_sev.py` already factored gzip decompression into a `_safe_gzip_decompress()` helper. The TDX module inlines the same logic, breaking the internal convention.
Recommendation: Move `MAX_DECOMPRESSED_SIZE` (and optionally a shared `_safe_gzip_decompress`) into `types.py` (or a shared `constants.py`) and import from there in both attestation modules.

The `_DEFAULT_CONFIG` module-level singleton holds a mutable `accepted_mr_seams` list, which is the security-critical whitelist for approved TDX module versions. Because the list is mutable and shared by reference (not copied) whenever `config` is `None`, any code that obtains a reference to `_DEFAULT_CONFIG` can tamper with the whitelist (injecting unauthorized MR_SEAM hashes or emptying it), causing all subsequent attestations to use the corrupted whitelist.
Use a frozen dataclass (`@dataclass(frozen=True)`) with an immutable tuple instead of `List[bytes]` for `accepted_mr_seams`, or at minimum deep-copy the default config inside `verify_tdx_attestation` when `config` is `None`:
```python
import copy

if config is None:
    config = copy.deepcopy(_DEFAULT_CONFIG)
```</violation>
</file>
<file name="src/tinfoil/attestation/types.py">
<violation number="1" location="src/tinfoil/attestation/types.py:104">
P2: Custom agent: **Check System Design and Architectural Patterns**
The `fingerprint()` method returns inconsistent types: a raw register value for single-register measurements (SEV) but a SHA-256 hash for multi-register ones (TDX/multiplatform). Callers cannot rely on a uniform format for this value, which becomes a hidden contract violation as new attestation types are introduced. Consider always hashing for a consistent abstraction.</violation>
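A consistent `fingerprint()` could hash unconditionally, so every platform yields the same SHA-256 hex format regardless of register count (a sketch, not the repo's actual method):

```python
import hashlib
from typing import Sequence


def fingerprint(registers: Sequence[bytes]) -> str:
    # Hash even for a single register, so SEV and TDX measurements
    # both produce a uniform 64-char SHA-256 hex digest.
    h = hashlib.sha256()
    for reg in registers:
        h.update(reg)
    return h.hexdigest()
```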
</file>
<file name="src/tinfoil/client.py">
<violation number="1" location="src/tinfoil/client.py:250">
P2: Custom agent: **Flag Security Vulnerabilities**
URL parameter injection: `platform` is interpolated into the URL without encoding. A value like `"snp&injected=value"` would inject additional query parameters. Use `urllib.parse.urlencode` to safely build the query string.</violation>
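A sketch of the safe construction with `urllib.parse.urlencode` (the URL and helper name are illustrative):

```python
from typing import Optional
from urllib.parse import urlencode


def build_router_url(base: str, platform: Optional[str]) -> str:
    # urlencode percent-escapes '&', '=', etc., so a value like
    # "snp&injected=1" cannot smuggle extra query parameters.
    if platform is None:
        return base
    return f"{base}?{urlencode({'platform': platform})}"
```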
</file>
1 issue found across 5 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/tinfoil/attestation/attestation_tdx.py">
<violation number="1" location="src/tinfoil/attestation/attestation_tdx.py:330">
P2: TDX_GUEST_V2 measurements are expected to include all five registers. The current guard only checks for at least two entries, which allows malformed measurements (missing RTMR1–RTMR3) to be treated as valid hardware matches. Tighten the validation to require the full TDX register count before comparing MRTD/RTMR0.</violation>
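A sketch of the tightened guard (constant names and values are assumptions modeled on the review, not taken from the repo):

```python
# Hypothetical register layout: MRTD + RTMR0..RTMR3
TDX_REGISTER_COUNT = 5
TDX_MRTD_IDX = 0
TDX_RTMR0_IDX = 1


def is_valid_tdx_measurement(registers) -> bool:
    # Require the full register set before comparing MRTD/RTMR0, so a
    # measurement missing RTMR1-RTMR3 is rejected outright.
    return len(registers) == TDX_REGISTER_COUNT
```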
</file>
1 issue found across 3 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/tinfoil/attestation/types.py">
<violation number="1" location="src/tinfoil/attestation/types.py:58">
P2: safe_gzip_decompress promises ValueError on decompression failure, but gzip errors will propagate as OSError/EOFError. Wrap the read in try/except and raise ValueError to match the contract so callers can handle failures consistently.</violation>
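One way to honor the documented contract, sketched with assumed names (the 10 MiB cap echoes the bomb-protection limit mentioned elsewhere in this review):

```python
import gzip
import io

MAX_DECOMPRESSED_SIZE = 10 * 1024 * 1024


def safe_gzip_decompress(data: bytes) -> bytes:
    # Wrap gzip's OSError/EOFError so callers see the documented
    # ValueError contract, and cap output to resist decompression bombs.
    try:
        with gzip.GzipFile(fileobj=io.BytesIO(data)) as f:
            out = f.read(MAX_DECOMPRESSED_SIZE + 1)
    except (OSError, EOFError) as exc:
        raise ValueError(f"gzip decompression failed: {exc}") from exc
    if len(out) > MAX_DECOMPRESSED_SIZE:
        raise ValueError("decompressed payload exceeds size limit")
    return out
```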
</file>
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/tinfoil/attestation/collateral_tdx.py">
<violation number="1" location="src/tinfoil/attestation/collateral_tdx.py:1960">
P1: Custom agent: **Flag Security Vulnerabilities**
Fail-open revocation check for TCB Info and QE Identity signing certificates when `root_crl` is `None`.
The `elif intermediate_cert is not None` guard only enforces fail-closed for the intermediate CA certificate. If `root_crl` is `None` and `intermediate_cert` is `None` (or absent), the TCB Info signing cert and QE Identity signing cert revocation checks are silently skipped — no error is raised. An attacker who can prevent Root CRL retrieval (e.g., network block) would bypass revocation verification for these collateral signing certs entirely.
The guard should be extended to also cover the new issuer chains:
```python
elif intermediate_cert is not None or collateral.tcb_info_issuer_chain or collateral.qe_identity_issuer_chain:
raise CollateralError(
"Cannot check certificate revocation: Root CRL not available in collateral"
)
```</violation>
</file>
19 issues found across 32 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/tinfoil/attestation/collateral_tdx.py">
<violation number="1" location="src/tinfoil/attestation/collateral_tdx.py:1099">
P2: `_determine_pck_ca_type` catches its own `CollateralError` in the outer `except Exception` clause. Separate the expected crypto errors from the intentional raise to preserve the traceback chain and avoid a confusing double-wrapped error message.</violation>
<violation number="2" location="src/tinfoil/attestation/collateral_tdx.py:1365">
P2: Stale comment contradicts the code. The comment says "return the highest (most recent) one" but the code unconditionally raises `CollateralError`. Remove or update the comment to reflect the fail-closed behavior.</violation>
<violation number="3" location="src/tinfoil/attestation/collateral_tdx.py:2011">
P1: Custom agent: **Check System Design and Architectural Patterns**
`validate_collateral` is the primary orchestration entry point but omits `timeout` and `session` parameters that every other network-facing function in this module accepts. This breaks the module's own design convention and prevents callers from configuring timeouts or injecting a mock session for testing — a significant extensibility gap as the control plane grows. Add `timeout: float = 30.0` and `session: HttpSession = None` parameters and propagate them to `fetch_collateral`.</violation>
<violation number="4" location="src/tinfoil/attestation/collateral_tdx.py:2014">
P1: Custom agent: **Flag Security Vulnerabilities**
The `validate_collateral` function defaults `min_tcb_evaluation_data_number` to the hardcoded value `18`. This value is tied to a specific Intel TCB recovery event date (2024-11-12) and will become stale as Intel publishes new TCB-R events. Callers that rely on the default will silently accept collateral that should be rejected under the updated minimum.
The module already provides `calculate_min_tcb_evaluation_data_number()` to query Intel PCS for the current floor dynamically. Remove the default entirely (forcing callers to provide it explicitly) or raise a `CollateralError` at runtime if the value appears to be below what Intel is currently publishing. At minimum, document in the function signature that callers should pass a freshly-computed value rather than relying on the default.</violation>
<violation number="5" location="src/tinfoil/attestation/collateral_tdx.py:2014">
P2: Magic number `18` duplicated across `collateral_tdx.py` and `attestation_tdx.py`. Define the constant once in `collateral_tdx.py` (e.g. `DEFAULT_MIN_TCB_EVALUATION_DATA_NUMBER = 18`) and import it in `attestation_tdx.py` to avoid the two copies drifting.</violation>
</file>
<file name="src/tinfoil/attestation/abi_tdx.py">
<violation number="1" location="src/tinfoil/attestation/abi_tdx.py:54">
P2: Custom agent: **Detect Outdated or Unused State/Enum Definitions**
Two constants — `SHA256_HASH_SIZE` and `ECDSA_P256_COMPONENT_SIZE` — are defined in the constants section but are never referenced anywhere in the codebase. They appear to be dead code added preemptively or left over from an earlier draft. Per the rule, unused definitions should be removed to avoid confusion and maintenance overhead (a future reader may assume these sizes are enforced somewhere when they are not).
Remove both or, if they are intended to document the format, add a comment clarifying they are informational-only and not used in validation.</violation>
<violation number="2" location="src/tinfoil/attestation/abi_tdx.py:686">
P0: Custom agent: **Flag Security Vulnerabilities**
The `qe_vendor_id` field in the TDX quote header is parsed and stored but never validated against the expected Intel QE Vendor ID. `_validate_header()` checks version, attestation key type, and TEE type — but silently accepts any vendor ID. An attacker could present a quote with a non-Intel QE Vendor ID, bypassing the assumption that attestation is rooted in Intel's quoting infrastructure.</violation>
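A sketch of the missing check; the vendor ID below is the Intel QE Vendor ID as published in Intel's DCAP sources, but it should be confirmed against the spec before use:

```python
import hmac

# Intel QE Vendor ID 939a7233-f79c-4ca9-940a-0db3957f0607 (verify
# against the Intel TDX DCAP specification before relying on it).
INTEL_QE_VENDOR_ID = bytes.fromhex("939a7233f79c4ca9940a0db3957f0607")


def validate_qe_vendor_id(qe_vendor_id: bytes) -> None:
    # Constant-time comparison; fail closed on any mismatch.
    if not hmac.compare_digest(qe_vendor_id, INTEL_QE_VENDOR_ID):
        raise ValueError(f"unexpected QE vendor ID: {qe_vendor_id.hex()}")
```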
<violation number="3" location="src/tinfoil/attestation/abi_tdx.py:744">
P1: Trailing bytes after the signed data are silently captured rather than rejected. The PR description lists "Reject trailing bytes" as a bug fix, but `parse_quote` stores them in `extra_bytes` without raising an error. An attestation parser should fail closed on unexpected data; add a guard and remove the `extra_bytes` field from the dataclass.</violation>
</file>
<file name="src/tinfoil/client.py">
<violation number="1" location="src/tinfoil/client.py:15">
P2: Custom agent: **Detect Outdated or Unused State/Enum Definitions**
The `PredicateType` enum is imported in `client.py` but never referenced in this file. Remove it from the import to prevent maintenance confusion (stale imports can mask real usages and mislead future readers about what `client.py` depends on).</violation>
<violation number="2" location="src/tinfoil/client.py:183">
P1: Custom agent: **Check System Design and Architectural Patterns**
The `verify()` method handles TDX in the GitHub-based path (via `TDX_TYPES` check) but the pinned measurement branch only looks up `"snp_measurement"` and raises an error if absent — breaking TDX users who supply pinned measurements. Both branches of the same verification method should support the same set of platforms to maintain consistent behaviour. Consider restructuring the pinned measurement path to accept a platform-agnostic measurement dict (e.g. `{"type": "tdx", "measurement": "..."}`) and dispatching based on the attested platform type, mirroring the structure used in the GitHub-based path.</violation>
<violation number="3" location="src/tinfoil/client.py:251">
P1: `urllib.parse` is not imported as a module. `urllib.parse.urlencode(...)` will raise `AttributeError` at runtime whenever `platform` is provided. Add `urlencode` to the existing `urllib.parse` import line.</violation>
</file>
<file name="src/tinfoil/sigstore.py">
<violation number="1" location="src/tinfoil/sigstore.py:183">
P2: Custom agent: **Check System Design and Architectural Patterns**
`fetch_latest_hardware_measurements()` introduces a network I/O responsibility into `sigstore.py`, which is a cryptographic verification module. This violates the Single Responsibility Principle: the module now changes for two unrelated reasons — changes to sigstore/DSSE verification logic, and changes to how hardware measurements are fetched from GitHub.
The fetch-then-verify orchestration should live in a higher-level layer (e.g., a dedicated `hardware_measurements.py` service module or inside the client layer), keeping `sigstore.py` as a pure verification module. This is especially important as the control plane grows, since it keeps the verification primitives independently testable and the fetch strategy independently swappable.
**Suggested approach:** Move `fetch_latest_hardware_measurements` (and `HARDWARE_MEASUREMENTS_REPO`) to its own module (e.g., `hardware_measurements.py`) that imports from both `sigstore.py` and `github.py`, rather than collapsing all three concerns into `sigstore.py`.</violation>
</file>
<file name="src/tinfoil/attestation/types.py">
<violation number="1" location="src/tinfoil/attestation/types.py:42">
P2: Custom agent: **Check System Design and Architectural Patterns**
**SRP violation: utility function in a types module**
`safe_gzip_decompress` is a decompression utility with IO side-effects (`gzip`/`io`) that does not belong in `types.py`. The module docstring promises this file is 'the canonical source for types' with no intra-package dependencies, but a utility function breaks that contract.
As the control plane grows, placing utilities here sets a precedent that will cause the module to accumulate unrelated helpers, making it harder to navigate. Move `safe_gzip_decompress` to a dedicated `utils.py` (or `attestation_utils.py`) and import it from there in both `attestation_sev.py` and `attestation_tdx.py`.</violation>
</file>
<file name="src/tinfoil/attestation/validate_tdx.py">
<violation number="1" location="src/tinfoil/attestation/validate_tdx.py:284">
P0: Custom agent: **Flag Security Vulnerabilities**
Debug TDs are silently accepted when `td_attributes` is not configured in `PolicyOptions`.
`TD_ATTRIBUTES_DEBUG_BIT` is included in `TD_ATTRIBUTES_FIXED0`, so `validate_td_attributes` treats the DEBUG bit as an *allowed* (not forbidden) bit. The only place that rejects it is the optional `_byte_check("TD_ATTRIBUTES", ...)` call — which is skipped whenever the caller doesn't set `options.td_quote_body.td_attributes`. A debug-mode TD has its memory fully visible to the host, completely defeating TDX's confidentiality guarantees.
The DEBUG bit must be rejected unconditionally by `validate_td_attributes` itself, not delegated to an optional policy field:
```python
# Mandatory: reject debug TDs regardless of policy options
if value & TD_ATTRIBUTES_DEBUG_BIT:
raise TdxValidationError(
f"TD_ATTRIBUTES 0x{value:x}: DEBUG bit must not be set"
)
```
This hardening should be added before the FIXED0 check so it fails fast and is never accidentally bypassed.

P1: RTMR validation uses `zip`, which silently skips checks if `body.rtmrs` is shorter than `t.rtmrs`. A future parsing change that under-produces RTMRs would silently bypass configured RTMR policy checks. Assert lengths match before iterating.

P2: Custom agent: **Flag Security Vulnerabilities**
The report_data length check uses `<` (minimum) instead of `!=` (exact equality), silently accepting trailing bytes beyond the required 64 bytes (TLS_KEY_FP_SIZE + HPKE_KEY_SIZE). In an attestation context this is a security defect: the PR description explicitly states the intent is to "enforce exact … report_data length; reject trailing bytes." Any trailing content would be silently discarded rather than treated as a malformed report.</violation>
</file>
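The RTMR length concern can be addressed with an explicit length assertion before iterating, e.g. (a sketch with hypothetical names, not the repo's actual helper):

```python
def check_rtmrs(actual, expected) -> None:
    # zip() would silently stop at the shorter list; check lengths
    # first so an under-produced RTMR set cannot bypass the policy.
    if len(actual) != len(expected):
        raise ValueError(
            f"RTMR count mismatch: got {len(actual)}, want {len(expected)}"
        )
    for idx, (a, e) in enumerate(zip(actual, expected)):
        if e is not None and a != e:
            raise ValueError(f"RTMR{idx} mismatch")
```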
Implement QuoteV4 binary parser for Intel TDX attestation quotes. Includes header, TD quote body, QE report, signed data, and PCK certificate chain parsing with named constants for all offsets and sizes per the Intel TDX DCAP specification. Validates header fields (version, AK type, TEE type, QE vendor ID) and rejects unsupported QuoteV5 and certification data type 5.
Implement PCK certificate chain verification, ECDSA quote signature verification, QE report signature verification, and attestation key binding (AK hash in QE report data). All crypto operations use the cryptography library with proper algorithm constraints.
Implement XFAM, TD_ATTRIBUTES, SEAM attributes, and measurement policy validation with frozen dataclass options. Unconditionally rejects debug TDs per Intel DCAP spec Section 2.3.2. Supports exact match and allowlist modes for MR_TD and MR_SEAM.
Implement Intel PCS collateral fetching with disk caching for TCB Info, QE Identity, PCK CRL, and Root CA CRL. Validates TCB status, TDX module identity, QE identity, certificate revocation (including collateral signing certs per DCAP spec), and collateral freshness via tcbEvaluationDataNumber thresholds.
Parse Intel PCK certificate X.509v3 extensions using pyasn1 for proper ASN.1 decoding. Extracts FMSPC, PCE ID, TCB components, and CPUSVN with strict validation and error handling.
Implement PEM chain parsing via cryptography's load_pem_x509_certificates, Intel SGX Root CA chain verification with public key pinning, validity period checks, BasicConstraints CA=True enforcement, and chain signature verification. Embeds the Intel SGX Root CA certificate.
Implement end-to-end TDX attestation flow: quote parsing, cryptographic verification, policy validation, and collateral validation. Adds TdxVerificationConfig for policy injection, verify_tdx_hardware for Sigstore-based measurement comparison, and ACCEPTED_MR_SEAMS whitelist.
Move SEV-specific logic into attestation_sev.py, rename abi_sevsnp.py to abi_sev.py and verify.py to verify_sev.py. Add SevAttestationError, report_data bounds checking, frozen ValidationOptions, and use pyasn1 for DER decoding in VCEK URL construction.
Extract Measurement, Verification, GroundTruth, and error classes into shared types module. Add safe_gzip_decompress with bomb protection and proper ValueError wrapping, fingerprint() with consistent SHA-256 return type, and named constants for register indices and key sizes.
Extract shared DSSE verification helper, add multiplatform TDX/SEV hardware measurement fetching from GitHub via Sigstore transparency log. Supports TDX_GUEST_V2 and SEV_GUEST_V2 predicate types.
Add make_secure_http_client and make_secure_async_http_client with certificate fingerprint pinning. Integrate TdxVerificationConfig, platform-filtered router selection with URL-encoded query params, and TDX hardware measurement verification in the verify() flow.
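The fingerprint-pinning comparison at the core of such a client can be sketched as follows (the helper name and hex-digest pin format are assumptions; the real `make_secure_http_client` wires this into TLS verification):

```python
import hashlib
import hmac


def cert_matches_pin(der_cert: bytes, expected_fp_hex: str) -> bool:
    # Pin by SHA-256 over the DER-encoded certificate; compare in
    # constant time to avoid leaking prefix-match information.
    actual = hashlib.sha256(der_cert).hexdigest()
    return hmac.compare_digest(actual, expected_fp_hex.lower())
```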
Update __init__.py exports for new TDX/SEV module structure, add TDX format detection in attestation.py, remove deprecated V1 types, and handle empty release bodies in github.py.
Add parameterized all-enclaves integration test with graceful handling of transient network errors during test collection. Update existing attestation flow tests for new module structure.
Add pyasn1>=0.4.0 as explicit dependency for ASN.1 parsing. Stop ignoring all .md files in gitignore.
Remove abi_sevsnp.py (replaced by abi_sev.py), validate.py (replaced by validate_sev.py), and verify.py (replaced by verify_sev.py).
Summary by cubic
Adds full Intel TDX attestation v2: end-to-end quote parsing, crypto verification, Intel PCS collateral/TCB checks, and Sigstore-verified multiplatform hardware measurements. Also adds a TLS-pinned secure client, cleanly splits SEV and TDX with shared types/utils, and deduplicates the default TCB evaluation threshold constant.
Written for commit 90198df.