Thanks for your interest in contributing! This project thrives on community-reported hallucination patterns.
If you've found a pattern that AI models consistently get wrong:
- Open an issue titled `[Rule Request] Description of the pattern`
- Include:
  - The wrong code the AI generates
  - The correct code it should generate
  - Why it's wrong (security flaw, deprecated API, performance issue)
  - Which models you've tested it with (Claude, GPT-4, Gemini)
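As an illustration, a wrong/correct pair in a rule request might look like the sketch below. The pattern shown, Node.js's deprecated `new Buffer()` constructor (DEP0005), is a hypothetical example and not an existing rule in this repo:

```javascript
// ❌ WRONG — what a model often generates (deprecated since Node 6,
// can allocate uninitialized memory when passed a number):
// const buf = new Buffer("hello");

// ✅ CORRECT — the supported, safe API:
const buf = Buffer.from("hello", "utf8");

console.log(buf.toString()); // "hello"
```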
If you've already written a `.mdc` rule:

- Fork the repo
- Add your rule to `.cursor/rules/`
- Ensure it follows the format:
```
---
description: Brief description of what this rule prevents
globs: ["**/relevant/path/**"]
alwaysApply: false
---

# Rule Title

CLEAR INSTRUCTION about what to do and not do.

✅ CORRECT:
example code

❌ WRONG:
example code that AI typically generates
```
- Open a PR with a description of what pattern this prevents
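To make the format concrete, here is a hypothetical rule filled in end to end. The description, globs, and code snippets are illustrative placeholders, not a rule shipped in this repo:

```
---
description: Prevent using Math.random() to generate tokens or IDs
globs: ["**/*.js", "**/*.ts"]
alwaysApply: false
---

# No Math.random() for Secrets

Never generate session tokens, API keys, or unique IDs with `Math.random()`.
It is not cryptographically secure. Use the Web Crypto API instead.

✅ CORRECT:
const id = crypto.randomUUID();

❌ WRONG:
const id = Math.random().toString(36).slice(2);
```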
Found a typo? Want to improve an explanation? PRs for documentation are always welcome.
To set up a local development environment:

```
git clone https://github.com/vibestackdev/vibe-stack.git
cd vibe-stack
npm install
cp .env.example .env.local
npm run dev
```

Be helpful. Be respectful. We're all here to ship better software with AI.