
feat(sdlc): supply chain hardening — threat models, audit steps, /security-review command#119

Merged

NeuralEmpowerment merged 3 commits into main from feat/supply-chain-hardening on Mar 26, 2026

Conversation

@NeuralEmpowerment
Contributor

Summary

  • Add threat models #9 (Transitive Dependency Poisoning — litellm 2026, Shai-Hulud 2025) and #10 (Dependency Sprawl) to security-hardening skill
  • Add audit steps 6.5 (native ecosystem audits: pip-audit, pnpm audit) and 6.6 (dependency tree review)
  • Add fix recipes with inline # comments explaining the security rationale: SHA-pin Actions, pip-audit CI, pnpm audit CI, hash verification
  • Add /security-review command that runs the full audit workflow and produces 2-6 prioritized action items
  • Update attack taxonomy table with litellm (2026) and Shai-Hulud (2025) entries
  • Update tools reference with pip-audit, pnpm audit, deptry, depcheck

Context

Supply chain attacks (litellm, Shai-Hulud, event-stream, ua-parser-js) all share the same anatomy: compromised credentials → malicious package → exfiltration from every downstream consumer. The existing skill covered 8 threat models but didn't address transitive dependency poisoning or dependency minimization as a defense strategy.

Test plan

  • /security-review command discovers and runs audit steps
  • Security-hardening skill renders correctly with new threat models
  • Fix recipe YAML blocks are valid and have inline comments

Add two new threat models to security-hardening skill:
- #9 Transitive Dependency Poisoning (litellm 2026, Shai-Hulud 2025)
- #10 Dependency Sprawl (attack surface proportional to dep count)

Add audit steps:
- 6.5: Native ecosystem audits (pip-audit, pnpm audit)
- 6.6: Dependency tree review (uv tree, pnpm why)

Add fix recipes with inline comments explaining the "why":
- SHA-pin GitHub Actions (worked example)
- pip-audit CI job
- pnpm audit CI job
- Hash verification for Python dependencies

Add /security-review command that runs the full audit workflow
and produces 2-6 prioritized action items.
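The audit steps described above shell out to ecosystem tools (pip-audit, pnpm, uv) that may not be installed on every machine running the review. A minimal sketch of a guard pattern such steps could use — the tool names come from this PR, the guard itself is illustrative:

```shell
# Probe for each audit tool before invoking it, so a missing binary is
# reported as a skipped check rather than a hard failure mid-workflow.
for tool in pip-audit pnpm uv; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: available"
  else
    echo "$tool: not installed — skipping its audit step"
  fi
done
```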
Copilot AI review requested due to automatic review settings March 25, 2026 22:30

Copilot AI left a comment


Pull request overview

This PR expands the SDLC security documentation and workflows to harden supply chain posture by adding new threat models, audit steps, fix recipes, and a new /security-review command that operationalizes the audit process.

Changes:

  • Adds a comprehensive security-hardening skill covering 10 threat models, an audit workflow, and fix recipes for CI/CD and dependency hardening.
  • Introduces a /security-review command that runs the audit workflow and outputs prioritized action items (and optionally applies fixes).
  • Updates the audit workflow to include native ecosystem audits (pip-audit, pnpm audit) and dependency tree review guidance.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 11 comments.

File Description
plugins/sdlc/skills/security-hardening/SKILL.md Adds the security hardening skill with threat models, audit workflow steps, and fix recipes.
plugins/sdlc/commands/security-review.md Adds a command wrapper that runs the skill’s audit workflow and generates a prioritized action plan.


```yaml
run: |
  for dir in apps/syn-dashboard-ui apps/syn-pulse-ui apps/syn-docs; do
    echo "=== Auditing $dir ==="
    (cd "$dir" && pnpm audit --prod --audit-level moderate) || true
```

Copilot AI Mar 25, 2026


This step uses both || true inside the loop and continue-on-error: true on the step. That’s redundant and can hide unexpected failures (e.g., pnpm not installed). Prefer one mechanism (usually continue-on-error) and let the command’s exit code propagate so it’s visible in logs.

Suggested change
(cd "$dir" && pnpm audit --prod --audit-level moderate) || true
(cd "$dir" && pnpm audit --prod --audit-level moderate)

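The masking the reviewer describes is easy to demonstrate in isolation — a sketch where the failing command is simulated by a bare exit code rather than a real pnpm invocation:

```shell
# `|| true` rewrites any failure to exit status 0, so CI marks the step green
# even when the command failed (e.g. exit 127 for "command not found").
( exit 127 ) || true
echo "with || true: $?"   # always 0 — the failure is invisible

# Without it, the status propagates; `continue-on-error: true` can then record
# the step as failed-but-non-blocking, keeping the signal in the logs.
( exit 127 )
echo "propagated: $?"     # 127 — visible in logs
```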
## Instructions

Use the **security-hardening** skill as your knowledge base. It contains the full threat
model (10 attack categories), audit workflow (12 steps), and fix recipes with inline

Copilot AI Mar 25, 2026


This says the security-hardening skill has an “audit workflow (12 steps)”, but the command later instructs running steps 1–11 plus 6.5 and 6.6 (13 total). Please reconcile the step count to avoid confusion for users following the workflow.

Suggested change
model (10 attack categories), audit workflow (12 steps), and fix recipes with inline
model (10 attack categories), audit workflow (13 steps), and fix recipes with inline

Comment on lines +10 to +14
Audit a codebase for security vulnerabilities and implement fixes. Two modes:

- **audit** — read-only scan, produces a prioritized findings report
- **fix** — implements the fixes (confirm with user before each change)
- **both** (default) — audit first, then fix

Copilot AI Mar 25, 2026


The intro says “Two modes” but lists three (audit/fix/both). Update the wording to avoid confusion for users following the prompt.

Comment on lines +16 to +19
MODE: $1 || "both"

---


Copilot AI Mar 25, 2026


This skill defines MODE: $1 || "both" without a ## Variables section, unlike the other SDLC skills (e.g. plugins/sdlc/skills/review/SKILL.md:12-16). If tooling expects variables under that heading, this may not be parsed consistently—please add ## Variables and place MODE there.

Suggested change
MODE: $1 || "both"
---
---
## Variables
MODE: $1 || "both"


## Threat Model

This skill covers five attack surface areas. Each has a concrete historical incident:

Copilot AI Mar 25, 2026


This says the skill covers “five attack surface areas”, but the threat model below enumerates 10 items (1–10). Please update the count to match the actual sections so the overview is accurate.

Suggested change
This skill covers five attack surface areas. Each has a concrete historical incident:
This skill covers ten attack surface areas. Each has a concrete historical incident:

Comment on lines +354 to +356
# Export locked deps with hashes, then audit against PyPI advisory DB.
# --disable-pip: use the PyPI JSON API directly (faster, no pip subprocess).
# --require-hashes: verify package integrity — catches tampered artifacts.

Copilot AI Mar 25, 2026


The pip-audit comments claim hash verification / --require-hashes, but the command explicitly exports --no-hashes and doesn’t pass pip-audit --require-hashes. If you want integrity checking, export hashed requirements and run pip-audit in hash-checking mode; otherwise remove the hash-related rationale to avoid misleading guidance.

Suggested change
# Export locked deps with hashes, then audit against PyPI advisory DB.
# --disable-pip: use the PyPI JSON API directly (faster, no pip subprocess).
# --require-hashes: verify package integrity — catches tampered artifacts.
# Export locked deps, then audit against PyPI advisory DB.
# --disable-pip: use the PyPI JSON API directly (faster, no pip subprocess).

Comment on lines +367 to +373
# Run per-app because each has its own lock file and dependency tree.
for dir in apps/syn-dashboard-ui apps/syn-pulse-ui apps/syn-docs; do
if [ -f "$dir/pnpm-lock.yaml" ] || [ -f "$dir/package-lock.json" ]; then
echo "--- $dir ---"
(cd "$dir" && pnpm audit --prod 2>&1) || echo "⚠️ audit issues in $dir"
fi
done

Copilot AI Mar 25, 2026


This loop hardcodes apps/syn-dashboard-ui, apps/syn-pulse-ui, apps/syn-docs, but this repository has no apps/ directory. As written, pnpm audits will silently do nothing in most repos. Prefer discovering lockfile directories dynamically (e.g., find . -name pnpm-lock.yaml -o -name package-lock.json and iterate over their parent dirs).

Suggested change
# Run per-app because each has its own lock file and dependency tree.
for dir in apps/syn-dashboard-ui apps/syn-pulse-ui apps/syn-docs; do
if [ -f "$dir/pnpm-lock.yaml" ] || [ -f "$dir/package-lock.json" ]; then
echo "--- $dir ---"
(cd "$dir" && pnpm audit --prod 2>&1) || echo "⚠️ audit issues in $dir"
fi
done
# Run per-project based on discovered lock files (monorepos and single apps).
while IFS= read -r dir; do
if [ -f "$dir/pnpm-lock.yaml" ] || [ -f "$dir/package-lock.json" ]; then
echo "--- $dir ---"
(cd "$dir" && pnpm audit --prod 2>&1) || echo "⚠️ audit issues in $dir"
fi
done < <(find . -type f \( -name pnpm-lock.yaml -o -name package-lock.json \) -print \
| xargs -n1 dirname \
| sort -u)

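The dynamic-discovery approach the reviewer suggests can be exercised against a throwaway tree — the directory names here are invented for the demo:

```shell
workdir="$(mktemp -d)"
mkdir -p "$workdir/apps/web" "$workdir/packages/cli" "$workdir/docs"
touch "$workdir/apps/web/pnpm-lock.yaml" "$workdir/packages/cli/package-lock.json"

# Discover every directory holding a lockfile instead of hardcoding paths;
# directories without a lockfile (docs/) are naturally skipped.
found="$(cd "$workdir" && find . -type f \( -name pnpm-lock.yaml -o -name package-lock.json \) \
  | xargs -n1 dirname | sort -u)"
echo "$found"

rm -rf "$workdir"
```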
Comment on lines +623 to +627
# --no-hashes here because pip-audit fetches its own hashes to verify.
# --frozen ensures we export exactly what's locked, not a fresh resolve.
- name: Audit Python dependencies
run: |
uv export --format requirements-txt --no-hashes --frozen --quiet \

Copilot AI Mar 25, 2026


These comments say pip-audit “verifies package hashes” and that --no-hashes is used because “pip-audit fetches its own hashes”. In practice, hash-checking requires hashed requirements plus pip-audit --require-hashes (or installer-side pip/uv pip install --require-hashes). Consider aligning the recipe with that, or removing the hash-verification claim here.

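To make the reviewer's distinction concrete: hash-checking mode only has something to verify when the requirements file actually carries `--hash` entries. A sketch using a fabricated digest (not a real hash):

```shell
req="$(mktemp)"
cat > "$req" <<'EOF'
requests==2.31.0 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
EOF

# A hashed export has one --hash entry per pinned artifact; an export made
# with --no-hashes has none, so there is nothing for --require-hashes to check.
grep -c -- '--hash=sha256:' "$req"

rm -f "$req"
```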
Comment on lines +661 to +662
for dir in apps/syn-dashboard-ui apps/syn-pulse-ui apps/syn-docs; do
echo "=== Auditing $dir ==="

Copilot AI Mar 25, 2026


The pnpm audit CI loop hardcodes apps/syn-... directories that don’t exist in this repo, which makes the recipe non-portable and likely a no-op. Consider iterating over all pnpm-lock.yaml / package-lock.json locations found in the repo instead.

Suggested change
for dir in apps/syn-dashboard-ui apps/syn-pulse-ui apps/syn-docs; do
echo "=== Auditing $dir ==="
find . \( -name 'pnpm-lock.yaml' -o -name 'package-lock.json' \) -print0 | while IFS= read -r -d '' lockfile; do
dir="$(dirname "$lockfile")"
echo "=== Auditing $dir (lockfile: $lockfile) ==="

Comment on lines +390 to +399
echo "=== Node.js: total package count per app ==="
for dir in apps/syn-dashboard-ui apps/syn-pulse-ui apps/syn-docs; do
if [ -f "$dir/pnpm-lock.yaml" ]; then
count=$(grep -c 'resolution:' "$dir/pnpm-lock.yaml" 2>/dev/null || echo "?")
echo " $dir: ~$count packages"
elif [ -f "$dir/package-lock.json" ]; then
count=$(grep -c '"resolved":' "$dir/package-lock.json" 2>/dev/null || echo "?")
echo " $dir: ~$count packages"
fi
done

Copilot AI Mar 25, 2026


Same hardcoded apps/... directory list appears here for dependency counting. Since apps/ doesn’t exist in this repo, the counts will never be reported. Consider deriving the list of Node project dirs from discovered lock files so this works across repos/monorepos.

Suggested change
echo "=== Node.js: total package count per app ==="
for dir in apps/syn-dashboard-ui apps/syn-pulse-ui apps/syn-docs; do
if [ -f "$dir/pnpm-lock.yaml" ]; then
count=$(grep -c 'resolution:' "$dir/pnpm-lock.yaml" 2>/dev/null || echo "?")
echo " $dir: ~$count packages"
elif [ -f "$dir/package-lock.json" ]; then
count=$(grep -c '"resolved":' "$dir/package-lock.json" 2>/dev/null || echo "?")
echo " $dir: ~$count packages"
fi
done
echo "=== Node.js: total package count per project ==="
node_projects=$(find . -maxdepth 4 \( -name "pnpm-lock.yaml" -o -name "package-lock.json" \) 2>/dev/null | sed -E 's#/[^/]+$##' | sort -u)
if [ -z "$node_projects" ]; then
echo " (no Node.js lockfiles found — looking for pnpm-lock.yaml or package-lock.json)"
else
for dir in $node_projects; do
if [ -f "$dir/pnpm-lock.yaml" ]; then
count=$(grep -c 'resolution:' "$dir/pnpm-lock.yaml" 2>/dev/null || echo "?")
echo " $dir: ~$count packages"
elif [ -f "$dir/package-lock.json" ]; then
count=$(grep -c '"resolved":' "$dir/package-lock.json" 2>/dev/null || echo "?")
echo " $dir: ~$count packages"
fi
done
fi

- Fix mode count (two → three), attack surface count (five → ten)
- Add ## Variables section per SDLC skill convention
- Fix grep \b word boundary → \< \> for GNU/BSD compatibility
- Remove misleading pip-audit hash verification claims
- Replace hardcoded apps/syn-* paths with dynamic lockfile discovery
- Remove redundant || true alongside continue-on-error
- Fix step count in security-review command (12 → 13)
Copilot AI review requested due to automatic review settings March 26, 2026 01:55
@NeuralEmpowerment NeuralEmpowerment merged commit c05acb7 into main Mar 26, 2026
18 checks passed
@NeuralEmpowerment NeuralEmpowerment deleted the feat/supply-chain-hardening branch March 26, 2026 02:00

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 9 comments.



Comment on lines +148 to +153
**Why Docker Scout over alternatives:** You already trust Docker for the container runtime —
Scout extends that existing trust rather than introducing a new third-party security
dependency. Trivy (aquasecurity) is commonly cited but aquasecurity has had multiple
security incidents; a compromised scanner is a supply chain risk that undermines the
whole point. Scout is maintained by the same team that ships the runtime, keeping the
trust chain short.

Copilot AI Mar 26, 2026


The justification for preferring Docker Scout over Trivy is phrased too broadly: “aquasecurity has had multiple security incidents” is ambiguous (sounds like repeated corporate breaches) and doesn’t specify what was compromised. Please rephrase to a precise, sourced statement (e.g., note the 2026 compromises in Trivy distribution/integration ecosystem) or make it a neutral recommendation with caveats (pin versions / verify provenance if using Trivy).

Suggested change
**Why Docker Scout over alternatives:** You already trust Docker for the container runtime —
Scout extends that existing trust rather than introducing a new third-party security
dependency. Trivy (aquasecurity) is commonly cited but aquasecurity has had multiple
security incidents; a compromised scanner is a supply chain risk that undermines the
whole point. Scout is maintained by the same team that ships the runtime, keeping the
trust chain short.
**Why Docker Scout over alternatives:** If you already rely on Docker for your container
runtime and registry, Scout integrates directly with that ecosystem and avoids adding
another vendor to your critical security toolchain. Other scanners such as Trivy
(by Aqua Security) are also widely used and capable; however, any vulnerability
scanner becomes part of your supply chain, so treat it as a sensitive dependency:
pin versions, verify binary provenance (e.g., checksums or signatures), and monitor
security advisories for whichever tool(s) you adopt.


echo ""
echo "=== CI using npm install instead of npm ci ==="
grep -rn "npm install\b" .github/workflows/ 2>/dev/null \

Copilot AI Mar 26, 2026


This grep also uses \b for word boundaries, which standard grep doesn’t support. The check may not reliably detect npm install usage. Consider grep -P "npm install\b" or a portable alternative like grep -E "npm install([[:space:]]|$)".

Suggested change
grep -rn "npm install\b" .github/workflows/ 2>/dev/null \
grep -rnE "npm install([[:space:]]|$)" .github/workflows/ 2>/dev/null \

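The portable alternative can be sanity-checked against sample workflow lines — the input lines below are made up for the demo:

```shell
# `\b` is a PCRE-ism; ([[:space:]]|$) is the POSIX-portable boundary.
printf '%s\n' \
  'run: npm install' \
  'run: npm installer-helper' \
  'run: npm install --frozen-lockfile' \
  | grep -cE 'npm install([[:space:]]|$)'
# prints 2 — the "installer-helper" line is correctly excluded
```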
Comment on lines +300 to +305
grep -rn "npm install\b\|npm ci\b\|pnpm install\b\|yarn install\b" \
.github/workflows/ 2>/dev/null

echo ""
echo "=== Missing script blocking (flag for review) ==="
grep -rn "npm ci\b\|npm install\b\|pnpm install\b" .github/workflows/ 2>/dev/null \

Copilot AI Mar 26, 2026


This follow-up check also relies on \b for word boundaries, which isn’t supported by POSIX/basic/extended grep. Consider grep -P or a portable pattern (e.g., npm (ci|install)([[:space:]]|$) with grep -E).

Suggested change
grep -rn "npm install\b\|npm ci\b\|pnpm install\b\|yarn install\b" \
.github/workflows/ 2>/dev/null
echo ""
echo "=== Missing script blocking (flag for review) ==="
grep -rn "npm ci\b\|npm install\b\|pnpm install\b" .github/workflows/ 2>/dev/null \
grep -Ern "npm (install|ci)([[:space:]]|$)|pnpm install([[:space:]]|$)|yarn install([[:space:]]|$)" \
.github/workflows/ 2>/dev/null
echo ""
echo "=== Missing script blocking (flag for review) ==="
grep -Ern "npm (ci|install)([[:space:]]|$)|pnpm install([[:space:]]|$)" .github/workflows/ 2>/dev/null \

Comment on lines +300 to +305
grep -rn "npm install\b\|npm ci\b\|pnpm install\b\|yarn install\b" \
.github/workflows/ 2>/dev/null

echo ""
echo "=== Missing script blocking (flag for review) ==="
grep -rn "npm ci\b\|npm install\b\|pnpm install\b" .github/workflows/ 2>/dev/null \

Copilot AI Mar 26, 2026


This grep uses \b to indicate word boundaries, but standard grep doesn’t support \b as a word-boundary operator. The search may miss matches or behave unexpectedly. Use grep -P if you want \b, or switch to portable boundaries like \</\> or explicit ([[:space:]]|$) patterns.

Suggested change
grep -rn "npm install\b\|npm ci\b\|pnpm install\b\|yarn install\b" \
.github/workflows/ 2>/dev/null
echo ""
echo "=== Missing script blocking (flag for review) ==="
grep -rn "npm ci\b\|npm install\b\|pnpm install\b" .github/workflows/ 2>/dev/null \
grep -rPn "npm install\b\|npm ci\b\|pnpm install\b\|yarn install\b" \
.github/workflows/ 2>/dev/null
echo ""
echo "=== Missing script blocking (flag for review) ==="
grep -rPn "npm ci\b\|npm install\b\|pnpm install\b" .github/workflows/ 2>/dev/null \

Comment on lines +463 to +470
-e "api_key\s*=\s*['\"][^'\"]\{10,\}" \
-e "secret\s*=\s*['\"][^'\"]\{10,\}" \
-e "password\s*=\s*['\"][^'\"]\{8,\}" \
-e "token\s*=\s*['\"][^'\"]\{10,\}" \
--include="*.py" --include="*.ts" --include="*.js" --include="*.yml" --include="*.yaml" \
--exclude-dir=".git" --exclude-dir="node_modules" --exclude-dir=".venv" \
. 2>/dev/null \
| grep -v "example\|placeholder\|your_\|<\|env\.\|os\.\|process\.env\|getenv\|secret_key\s*=" \

Copilot AI Mar 26, 2026


These heuristic patterns use \s* (e.g., api_key\s*=), but \s isn’t recognized by POSIX grep, so this will effectively search for literal s and miss matches. Use grep -P for \s, or replace \s* with [[:space:]]* in all expressions here.

Suggested change
-e "api_key\s*=\s*['\"][^'\"]\{10,\}" \
-e "secret\s*=\s*['\"][^'\"]\{10,\}" \
-e "password\s*=\s*['\"][^'\"]\{8,\}" \
-e "token\s*=\s*['\"][^'\"]\{10,\}" \
--include="*.py" --include="*.ts" --include="*.js" --include="*.yml" --include="*.yaml" \
--exclude-dir=".git" --exclude-dir="node_modules" --exclude-dir=".venv" \
. 2>/dev/null \
| grep -v "example\|placeholder\|your_\|<\|env\.\|os\.\|process\.env\|getenv\|secret_key\s*=" \
-e "api_key[[:space:]]*=[[:space:]]*['\"][^'\"]\{10,\}" \
-e "secret[[:space:]]*=[[:space:]]*['\"][^'\"]\{10,\}" \
-e "password[[:space:]]*=[[:space:]]*['\"][^'\"]\{8,\}" \
-e "token[[:space:]]*=[[:space:]]*['\"][^'\"]\{10,\}" \
--include="*.py" --include="*.ts" --include="*.js" --include="*.yml" --include="*.yaml" \
--exclude-dir=".git" --exclude-dir="node_modules" --exclude-dir=".venv" \
. 2>/dev/null \
| grep -v "example\|placeholder\|your_\|<\|env\.\|os\.\|process\.env\|getenv\|secret_key[[:space:]]*=" \

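A quick check of the POSIX character-class rewrite — sample lines invented for the demo:

```shell
# GNU grep happens to accept \s as an extension, but [[:space:]]* matches
# actual whitespace on any POSIX grep, so the heuristic is portable.
printf '%s\n' 'api_key = "abcdef1234"' 'api_keys_enabled' \
  | grep -c "api_key[[:space:]]*="
# prints 1 — only the real assignment matches
```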
Comment on lines +358 to +360
uv export --format requirements-txt --no-hashes --quiet \
| pip-audit --disable-pip -r /dev/stdin 2>&1 \
|| echo "⚠️ pip-audit found issues (see above)"

Copilot AI Mar 26, 2026


In the pip-audit audit step, uv export is run without --frozen. That can silently resolve/export a dependency set that differs from what’s locked, which undermines the goal of auditing the actual supply chain you ship. Prefer adding --frozen (and handle the failure as a finding if the lock is out of sync).

Suggested change
uv export --format requirements-txt --no-hashes --quiet \
| pip-audit --disable-pip -r /dev/stdin 2>&1 \
|| echo "⚠️ pip-audit found issues (see above)"
tmp_req="$(mktemp)"
if ! uv export --format requirements-txt --no-hashes --frozen --quiet >"$tmp_req"; then
echo "❌ uv export failed (lockfile may be out of sync; run 'uv lock' and retry)"
elif ! pip-audit --disable-pip -r "$tmp_req" 2>&1; then
echo "⚠️ pip-audit found issues (see above)"
fi
rm -f "$tmp_req"

```yaml
# pip-audit: checks Python dependencies against the PyPI advisory database.
# Complements OSV Scanner — pip-audit catches PyPI-specific advisories that
# OSV may not yet index, and verifies package hashes for tamper detection.
```

Copilot AI Mar 26, 2026


This comment claims pip-audit “verifies package hashes for tamper detection,” but pip-audit is primarily a vulnerability auditor; hash/integrity checking only applies if you run it in hash-checking mode (e.g., --require-hashes / hashed requirements). As shown, the recipe exports with --no-hashes, so no hash verification is happening. Please adjust the wording to avoid overstating integrity guarantees, or update the example to demonstrate hash-checking mode if that’s the intent.

Suggested change
# OSV may not yet index, and verifies package hashes for tamper detection.
# OSV may not yet index. For hash-based tamper detection, use hashed
# requirements / hash-checking mode in your install workflow; this job
# focuses on vulnerability auditing.

Comment on lines +463 to +470
-e "api_key\s*=\s*['\"][^'\"]\{10,\}" \
-e "secret\s*=\s*['\"][^'\"]\{10,\}" \
-e "password\s*=\s*['\"][^'\"]\{8,\}" \
-e "token\s*=\s*['\"][^'\"]\{10,\}" \
--include="*.py" --include="*.ts" --include="*.js" --include="*.yml" --include="*.yaml" \
--exclude-dir=".git" --exclude-dir="node_modules" --exclude-dir=".venv" \
. 2>/dev/null \
| grep -v "example\|placeholder\|your_\|<\|env\.\|os\.\|process\.env\|getenv\|secret_key\s*=" \

Copilot AI Mar 26, 2026


This negative-filter pattern also uses secret_key\s*=; with standard grep, \s won’t match whitespace. Use grep -P or rewrite the whitespace matching with [[:space:]]* so the filter behaves as intended.

Suggested change
-e "api_key\s*=\s*['\"][^'\"]\{10,\}" \
-e "secret\s*=\s*['\"][^'\"]\{10,\}" \
-e "password\s*=\s*['\"][^'\"]\{8,\}" \
-e "token\s*=\s*['\"][^'\"]\{10,\}" \
--include="*.py" --include="*.ts" --include="*.js" --include="*.yml" --include="*.yaml" \
--exclude-dir=".git" --exclude-dir="node_modules" --exclude-dir=".venv" \
. 2>/dev/null \
| grep -v "example\|placeholder\|your_\|<\|env\.\|os\.\|process\.env\|getenv\|secret_key\s*=" \
-e "api_key[[:space:]]*=[[:space:]]*['\"][^'\"]\{10,\}" \
-e "secret[[:space:]]*=[[:space:]]*['\"][^'\"]\{10,\}" \
-e "password[[:space:]]*=[[:space:]]*['\"][^'\"]\{8,\}" \
-e "token[[:space:]]*=[[:space:]]*['\"][^'\"]\{10,\}" \
--include="*.py" --include="*.ts" --include="*.js" --include="*.yml" --include="*.yaml" \
--exclude-dir=".git" --exclude-dir="node_modules" --exclude-dir=".venv" \
. 2>/dev/null \
| grep -v "example\|placeholder\|your_\|<\|env\.\|os\.\|process\.env\|getenv\|secret_key[[:space:]]*=" \

```bash
echo "=== Credential patterns in .gitignore ==="
for pattern in "*.pem" "*.key" "*.p12" "id_rsa" "id_ed25519" "*.cer" "*.crt" ".env"; do
  if grep -q "$pattern" .gitignore 2>/dev/null; then
```

Copilot AI Mar 26, 2026


The .gitignore checks use regex grep with patterns that include glob metacharacters (e.g., *.pem). Because */. are regex metacharacters, this can be unreliable and/or match unintended lines. Prefer fixed-string matching (e.g., grep -Fq -- "*.pem" .gitignore) or escape metacharacters for literal matching.

Suggested change
if grep -q "$pattern" .gitignore 2>/dev/null; then
if grep -Fq -- "$pattern" .gitignore 2>/dev/null; then

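The difference between regex and fixed-string matching on a glob pattern can be shown with a throwaway file — filenames are invented for the demo:

```shell
tmp="$(mktemp)"
printf '%s\n' '*.pem' 'notes.txt' > "$tmp"

# As a regex, "." matches any character, so an unintended line matches too:
printf '%s\n' '*Xpem' | grep -c '*.pem'

# With -F the pattern is a literal string — only an exact "*.pem" line matches:
grep -Fc -- '*.pem' "$tmp"

rm -f "$tmp"
```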
NeuralEmpowerment added a commit that referenced this pull request Mar 27, 2026
…urity-review command (#119)

* feat(sdlc): add supply chain hardening skill + /security-review command

Add two new threat models to security-hardening skill:
- #9 Transitive Dependency Poisoning (litellm 2026, Shai-Hulud 2025)
- #10 Dependency Sprawl (attack surface proportional to dep count)

Add audit steps:
- 6.5: Native ecosystem audits (pip-audit, pnpm audit)
- 6.6: Dependency tree review (uv tree, pnpm why)

Add fix recipes with inline comments explaining the "why":
- SHA-pin GitHub Actions (worked example)
- pip-audit CI job
- pnpm audit CI job
- Hash verification for Python dependencies

Add /security-review command that runs the full audit workflow
and produces 2-6 prioritized action items.

* fix(sdlc): address PR review comments on security-hardening skill

- Fix mode count (two → three), attack surface count (five → ten)
- Add ## Variables section per SDLC skill convention
- Fix grep \b word boundary → \< \> for GNU/BSD compatibility
- Remove misleading pip-audit hash verification claims
- Replace hardcoded apps/syn-* paths with dynamic lockfile discovery
- Remove redundant || true alongside continue-on-error
- Fix step count in security-review command (12 → 13)

* chore(sdlc): bump plugin version 1.3.1 → 1.3.2