fix: 4 pipeline bugs — domain knowledge passthrough, torch import guard, bare excepts#52
Open
cauchyturing wants to merge 1 commit intoLancelot39:mainfrom
Open
fix: 4 pipeline bugs — domain knowledge passthrough, torch import guard, bare excepts#52cauchyturing wants to merge 1 commit intoLancelot39:mainfrom
cauchyturing wants to merge 1 commit intoLancelot39:mainfrom
Conversation
…rd, bare excepts 1. filter.py: Connect [DOMAIN_KNOWLEDGE] placeholder to actual knowledge_docs from GlobalState — was dead code, placeholder never replaced. Also wrap `import torch` in try/except so pipeline works without GPU packages. 2. judge.py: Pass knowledge_docs through to llm_evaluation_new() — the Judge had the parameter but never forwarded it, so LLM pruning ignored domain knowledge. 3. judge_functions.py: Accept knowledge_docs param, inject into pruning prompt so LLM edge evaluation uses domain knowledge. Also change bare `except:` to `except Exception:` (5 instances) to avoid masking KeyboardInterrupt/SystemExit. 4. llm/__init__.py: Wrap LLMClient/OllamaClient imports in try/except so `from llm import LLMClient` doesn't crash when openai package isn't installed. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
Problem
Found 4 bugs while working with the pipeline — domain knowledge never reaches the LLM prompts despite being collected,
import torchcrashes on CPU-only environments, and bareexcept:blocks silently swallowKeyboardInterrupt/SystemExit.Changes
1.
causal_discovery/filter.py— dead[DOMAIN_KNOWLEDGE]placeholderThe prompt template
algo_select_prompt.txthas a[DOMAIN_KNOWLEDGE]placeholder at line 25, butcreate_prompt()never included it in the replacement dict. Domain knowledge collected from the user was silently dropped during algorithm selection.Fix: Extract
knowledge_docsfromglobal_state.user_data.knowledge_docs, format it, and add"[DOMAIN_KNOWLEDGE]": knowledgeto the replacements dict.Also:
import torchat module level crashes in environments without CUDA/torch (e.g., lightweight containers, CI). Wrapped intry/except ImportErrorand guardedtorch.cuda.is_available()with atorch is not Nonecheck.2.
postprocess/judge.py—knowledge_docsnever forwardedquality_judge()receivesknowledge_docsas a parameter (line 34) and it's passed correctly fromforward()(line 146), but the actual call tollm_evaluation_new()at line 94 never forwarded it. The Judge's LLM edge evaluation was always running without domain knowledge, even when the user provided it.Fix: Add
knowledge_docs=knowledge_docsto thellm_evaluation_new()call.3.
postprocess/judge_functions.py— accept + inject domain knowledge, fix bare exceptsllm_evaluation_new()didn't accept aknowledge_docsparameter, so even after fixing the call site injudge.py, there was nowhere for the knowledge to go.Fix:
knowledge_docs=Noneparameter tollm_evaluation_new().**Domain Knowledge**section and appends to the[RELATIONSHIP]replacement so it reaches the pruning prompt.except:withexcept Exception:— bare excepts catchKeyboardInterrupt,SystemExit,GeneratorExitwhich makes debugging painful and can prevent clean shutdown.4.
llm/__init__.py— hard crash withoutopenaipackagefrom llm import LLMClientfails withImportErrorifopenaiisn't installed, even in code paths that never use the LLM client (e.g., rule-based algorithm selection). Same forOllamaClient.Fix: Wrap both imports in
try/except ImportError, defaulting toNone. Callers that actually need these classes will get a clear error at instantiation time rather than a cryptic import crash at module load.Testing
Ran targeted verification on each fix:
Confirmed
algo_select_prompt.txt:25contains[DOMAIN_KNOWLEDGE]— the placeholder was always there, just never replaced.Confirmed no bare
except:remain in any of the 4 modified files.Impact
knowledge_docs→ Filter prompt (algorithm selection) → Judge prompt (edge pruning). Previously it was collected but dropped at both stages.if knowledge_docschecks, so existing pipelines without domain knowledge are unaffected.