fix: use macOS M1 series to train the evaluator and run inference #37
Issue
Improve macOS setup & inference robustness (macOS requirements, MPS guidance, optional vLLM backend)
Summary
This PR improves the macOS developer experience and prevents inference from failing due to a missing `vllm` dependency. It adds a macOS-friendly requirements file, documents Apple Silicon `mps` usage, makes `vllm` optional in `CRAG_Inference.py` (with a Transformers fallback), and adds a repo-wide `.gitignore`.

Key Changes
- `.gitignore`: ignore Python caches, virtual envs, build artifacts, uv caches, and common ML outputs.
- `requirements-macos.txt`: provide a CPU-friendly dependency set for macOS users.
- Docs: recommend `requirements-macos.txt` on macOS, and using `mps` (Metal) when available for PyTorch workloads, with a Transformers fallback when `vllm` is unavailable.
- Inference (`scripts/CRAG_Inference.py`): guard the `vllm` import and add `--generator_backend {auto|vllm|transformers}` (default: `auto`).

Motivation
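The guarded import and backend flag described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: `resolve_backend` and `build_parser` are hypothetical helper names, and the real script's structure may differ.

```python
import argparse
import importlib.util


def resolve_backend(requested: str) -> str:
    """Pick a generation backend, falling back to Transformers when vLLM is absent."""
    # find_spec probes for the package without importing it, so this never
    # raises ImportError on machines (e.g. macOS) where vllm is not installed.
    have_vllm = importlib.util.find_spec("vllm") is not None
    if requested == "auto":
        return "vllm" if have_vllm else "transformers"
    if requested == "vllm" and not have_vllm:
        raise RuntimeError("--generator_backend vllm requested, but vllm is not installed")
    return requested


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--generator_backend",
        choices=["auto", "vllm", "transformers"],
        default="auto",
        help="Generation backend; 'auto' prefers vLLM when available.",
    )
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(resolve_backend(args.generator_backend))
```

With `--generator_backend auto` (the default), users on macOS transparently get the Transformers path, while Linux users with vLLM installed keep the faster backend.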
- `requirements.txt` includes packages that frequently fail to install on macOS.
- `CRAG_Inference.py` previously required `vllm`, causing macOS users to hit an ImportError after following the macOS install guidance.
- Apple Silicon users can accelerate PyTorch workloads with `mps`, which was previously undocumented here.

How to Test
- Install the macOS dependency set: `pip install -r requirements-macos.txt`.
- Run inference on a machine where `vllm` is not installed and confirm the Transformers fallback is used.
- Run with `--generator_backend transformers` explicitly.
- Run with `--generator_backend vllm` on a machine where vLLM is available.
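When testing the `mps` path on Apple Silicon, device selection can be sketched like this; `pick_device` is a hypothetical helper (the actual script may choose devices differently), and the sketch degrades gracefully when PyTorch is not installed at all.

```python
import importlib.util


def pick_device() -> str:
    """Prefer Apple's Metal backend (mps) when PyTorch reports it, else CUDA, else CPU."""
    if importlib.util.find_spec("torch") is not None:
        import torch

        # getattr-guard: very old torch builds predate the mps backend attribute.
        mps = getattr(torch.backends, "mps", None)
        if mps is not None and mps.is_available():
            return "mps"
        if torch.cuda.is_available():
            return "cuda"
    return "cpu"


if __name__ == "__main__":
    print(pick_device())
```

On an M1-series Mac with a recent PyTorch, this should print `mps`, confirming that Metal acceleration is active for the evaluator and inference runs.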