fix: 4 pipeline bugs — domain knowledge passthrough, torch import guard, bare excepts#52

Open
cauchyturing wants to merge 1 commit into Lancelot39:main from cauchyturing:fix/upstream-bug-fixes

@cauchyturing


Problem

Found 4 bugs while working with the pipeline: domain knowledge never reaches the LLM prompts despite being collected (at two separate stages), import torch crashes in CPU-only environments, importing the llm package crashes when openai isn't installed, and bare except: blocks silently swallow KeyboardInterrupt / SystemExit.

Changes

1. causal_discovery/filter.py — dead [DOMAIN_KNOWLEDGE] placeholder

The prompt template algo_select_prompt.txt has a [DOMAIN_KNOWLEDGE] placeholder at line 25, but create_prompt() never included it in the replacement dict. Domain knowledge collected from the user was silently dropped during algorithm selection.

Fix: Extract knowledge_docs from global_state.user_data.knowledge_docs, format it, and add "[DOMAIN_KNOWLEDGE]": knowledge to the replacements dict.

Also: import torch at module level crashes in environments without CUDA/torch (e.g., lightweight containers, CI). Wrapped in try/except ImportError and guarded torch.cuda.is_available() with a torch is not None check.
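The two filter.py fixes can be sketched together. This is a minimal, self-contained illustration: the template is inlined as a stand-in for algo_select_prompt.txt, and create_prompt() is simplified from its real signature.

```python
# Sketch of the filter.py fixes; TEMPLATE stands in for algo_select_prompt.txt
# and create_prompt's signature is simplified from the real one.

try:
    import torch  # optional: absent in CPU-only containers / CI
except ImportError:
    torch = None

TEMPLATE = "Select an algorithm.\n[DOMAIN_KNOWLEDGE]\n[COLUMNS]"

def create_prompt(knowledge_docs, columns):
    # Format collected domain knowledge (empty string when none provided)
    knowledge = "\n".join(knowledge_docs) if knowledge_docs else ""
    replacements = {
        "[COLUMNS]": ", ".join(columns),
        "[DOMAIN_KNOWLEDGE]": knowledge,  # previously missing from the dict
    }
    prompt = TEMPLATE
    for placeholder, value in replacements.items():
        prompt = prompt.replace(placeholder, value)
    return prompt

# Guarded device check: never touches torch.cuda when torch is missing
use_gpu = torch is not None and torch.cuda.is_available()
```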

2. postprocess/judge.py — knowledge_docs never forwarded

quality_judge() receives knowledge_docs as a parameter (line 34) and it's passed correctly from forward() (line 146), but the actual call to llm_evaluation_new() at line 94 never forwarded it. The Judge's LLM edge evaluation was always running without domain knowledge, even when the user provided it.

Fix: Add knowledge_docs=knowledge_docs to the llm_evaluation_new() call.
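In isolation the fix is a single keyword argument; a minimal sketch with stand-in functions (the real signatures in judge.py differ):

```python
def llm_evaluation_new(edges, knowledge_docs=None):
    # Stand-in for the real evaluator: just reports what it received
    return {"edges": edges, "knowledge": knowledge_docs}

def quality_judge(edges, knowledge_docs=None):
    # Before the fix, the call dropped knowledge_docs entirely;
    # the fix forwards it explicitly.
    return llm_evaluation_new(edges, knowledge_docs=knowledge_docs)

result = quality_judge([("X", "Y")], knowledge_docs=["X inhibits Y"])
```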

3. postprocess/judge_functions.py — accept + inject domain knowledge, fix bare excepts

llm_evaluation_new() didn't accept a knowledge_docs parameter, so even after fixing the call site in judge.py, there was nowhere for the knowledge to go.

Fix:

  • Added knowledge_docs=None parameter to llm_evaluation_new().
  • When provided, formats it into a **Domain Knowledge** section and appends to the [RELATIONSHIP] replacement so it reaches the pruning prompt.
  • Replaced 5 bare except: with except Exception: — bare excepts catch KeyboardInterrupt, SystemExit, GeneratorExit which makes debugging painful and can prevent clean shutdown.
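The two changes can be sketched together; the template text and the parse_llm_response helper here are stand-ins, not the project's real code:

```python
def parse_llm_response(prompt):
    # Stand-in for the real LLM call + response parsing
    return prompt

def llm_evaluation_new(prompt_template, relation_text, knowledge_docs=None):
    # Guarded injection: no behavior change when knowledge_docs is None/empty
    if knowledge_docs:
        relation_text += "\n\n**Domain Knowledge**\n" + "\n".join(knowledge_docs)
    prompt = prompt_template.replace("[RELATIONSHIP]", relation_text)

    try:
        return parse_llm_response(prompt)
    except Exception:  # was a bare `except:`, which also caught KeyboardInterrupt
        return None
```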

4. llm/__init__.py — hard crash without openai package

from llm import LLMClient fails with ImportError if openai isn't installed, even in code paths that never use the LLM client (e.g., rule-based algorithm selection). Same for OllamaClient.

Fix: Wrap both imports in try/except ImportError, defaulting to None. Callers that actually need these classes will get a clear error at instantiation time rather than a cryptic import crash at module load.

Testing

Ran targeted verification on each fix:

=== Test 1: torch import guard in filter.py ===
PASS: filter.py loads without torch

=== Test 2: llm/__init__.py import guard ===
PASS: 2 try/except blocks found (expected 2)

=== Test 3: judge.py passes knowledge_docs to llm_evaluation_new ===
PASS: knowledge_docs forwarded to llm_evaluation_new()

=== Test 4: judge_functions.py signature + knowledge injection ===
PASS: llm_evaluation_new() accepts knowledge_docs parameter
PASS: knowledge_docs injected into pruning prompt

=== Test 5: no bare except: in modified functions ===
PASS: no bare except: in judge_functions.py

=== Test 6: [DOMAIN_KNOWLEDGE] placeholder connected in filter.py ===
PASS: [DOMAIN_KNOWLEDGE] placeholder in replacements dict
PASS: knowledge_docs extracted from global_state

Confirmed algo_select_prompt.txt:25 contains [DOMAIN_KNOWLEDGE] — the placeholder was always there, just never replaced.

Confirmed no bare except: remain in any of the 4 modified files.

Impact

  • Domain knowledge now flows end-to-end: user input → knowledge_docs → Filter prompt (algorithm selection) → Judge prompt (edge pruning). Previously it was collected but dropped at both stages.
  • No behavior change when knowledge_docs is None — all new code paths are guarded with if knowledge_docs checks, so existing pipelines without domain knowledge are unaffected.
  • No new dependencies. All changes are backward compatible.

fix: 4 pipeline bugs — domain knowledge passthrough, torch import guard, bare excepts

1. filter.py: Connect [DOMAIN_KNOWLEDGE] placeholder to actual knowledge_docs
   from GlobalState — was dead code, placeholder never replaced. Also wrap
   `import torch` in try/except so pipeline works without GPU packages.

2. judge.py: Pass knowledge_docs through to llm_evaluation_new() — the Judge
   had the parameter but never forwarded it, so LLM pruning ignored domain knowledge.

3. judge_functions.py: Accept knowledge_docs param, inject into pruning prompt
   so LLM edge evaluation uses domain knowledge. Also change bare `except:`
   to `except Exception:` (5 instances) to avoid masking KeyboardInterrupt/SystemExit.

4. llm/__init__.py: Wrap LLMClient/OllamaClient imports in try/except so
   `from llm import LLMClient` doesn't crash when openai package isn't installed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>