# AuraAI

All-in-one fullstack AI application — your intelligent companion at the intersection of front-end, back-end, and AI models.
## Table of Contents

- About
- Features
- Architecture
- Getting Started
- Environment Variables / Configuration
- Usage
- Folder Structure
- Roadmap & Future Plans
- Contributing
- License
- Acknowledgements
## About

AuraAI is a fullstack platform built to unify front-end UI, back-end services, and AI capabilities (e.g. LLMs, embeddings, agents). It’s designed as a modular, extensible foundation so you can plug & play AI features while maintaining sanity in your codebase.
Live demo / deployment: [aura-ai-server-one.vercel.app](https://aura-ai-server-one.vercel.app) (if maintained)
## Features

Here’s what AuraAI aims to offer (or already offers):
- Modular architecture: front end + back end + AI models all in one
- API routing, authentication, user session logic
- Integration with language models (OpenAI, etc.)
- Embedding store / vector DB (optional)
- Agent / tool framework support
- Logging, error handling, monitoring hooks
- Real-time / websocket support (future)
- Extensibility: plugin your own modules or models
## Architecture

```
[ Client / Frontend ] ↔ [ Server / API Layer ] ↔ [ AI / Model / DB Layer ]
```
- **Client** (`client/`) — UI, state management, request orchestration
- **Server** (`server/`) — API endpoints, orchestration, model calls, business logic
- **AI / Model Layer** — embedding store, LLM calls, agent logic, etc.
You can swap or upgrade any part without rewriting the rest.
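To make the layering concrete, here is a minimal sketch of how a request could flow from the API layer to the model layer. All names here (`callModel`, `handlePromptRequest`, the response shape) are illustrative assumptions, not AuraAI's actual API:

```javascript
// AI / Model layer: wraps whichever provider you configure.
async function callModel(prompt) {
  // A real implementation would call OpenAI (or another provider)
  // using process.env.OPENAI_API_KEY; stubbed here for illustration.
  return `echo: ${prompt}`;
}

// Server / API layer: validates input, then orchestrates the model call.
async function handlePromptRequest(body) {
  if (!body || typeof body.prompt !== "string") {
    return { status: 400, error: "prompt is required" };
  }
  const answer = await callModel(body.prompt);
  return { status: 200, answer };
}

// The client layer would POST { prompt } to an endpoint backed by
// handlePromptRequest and render the returned answer.
```

Because each layer only touches its neighbor through a small interface like this, swapping the provider behind `callModel` leaves the server and client code untouched.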
## Getting Started

Make sure you have:
- Node.js (>= 16.x or your chosen version)
- npm or yarn
- (Optional) A cloud hosting / deployment setup
- API keys for LLM / AI provider (e.g. OpenAI)
- (If using embeddings vector DB) credentials or local instance
```bash
# Clone the repo
git clone https://github.com/SP4567/AuraAI.git
cd AuraAI

# Install dependencies in both server & client
cd server && npm install
cd ../client && npm install
```

Open two terminal tabs/windows:

```bash
# In the server folder
cd server
npm run dev   # or whatever your start script is
```

```bash
# In the client folder
cd client
npm run dev
```

Then open your browser to http://localhost:5000 (or your configured port).
## Environment Variables / Configuration

Create a `.env` (or `.env.local`) file in `server/` and/or `client/` as needed. Example variables:
```env
OPENAI_API_KEY=your_openai_key
MODEL_PROVIDER_URL=...
DB_URL=...
EMBEDDING_KEY=...
NEXT_PUBLIC_API_ENDPOINT=http://localhost:3000/api
```
Adjust according to your setup.
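Failing fast on missing configuration saves debugging time later. Here is a hypothetical startup check; the variable name mirrors the example `.env` above, but which ones are truly required depends on your setup:

```javascript
// Names of environment variables the server cannot run without.
const REQUIRED_VARS = ["OPENAI_API_KEY"];

// Throws with a clear message if any required variable is unset.
function checkEnv(env = process.env) {
  const missing = REQUIRED_VARS.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
  return true;
}

// Call checkEnv() once at server startup, before binding any routes.
```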
## Usage

Once up and running, you can:
- Authenticate / login (if built)
- Send prompts / queries to the AI
- Use embedding-based retrieval & reasoning
- Extend with new tools or modules
- Monitor logs, errors, performance
- Deploy to production (Vercel, AWS, etc.)
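As a sketch of sending a prompt from the client, the helper below builds a `fetch`-ready request. The endpoint default matches the `NEXT_PUBLIC_API_ENDPOINT` example above, and the `/chat` route name is an assumption for illustration:

```javascript
// Builds the URL and fetch options for a prompt request.
// The "/chat" route and payload shape are hypothetical.
function buildPromptRequest(prompt, endpoint = "http://localhost:3000/api") {
  return {
    url: `${endpoint}/chat`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    },
  };
}

// Usage:
//   const { url, options } = buildPromptRequest("Summarize this doc");
//   const res = await fetch(url, options);
```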
You might want to include screenshots or GIFs here in future.
## Folder Structure

Here’s a sample layout (adjust to match your actual structure):
```
AuraAI/
├── client/          # Frontend app (React / Next.js / etc.)
│   ├── pages/
│   ├── components/
│   └── ...
├── server/          # Backend / API / business logic
│   ├── api/
│   ├── services/
│   ├── models/
│   └── ...
├── README.md
└── .gitignore
```
## Roadmap & Future Plans

- 🔧 Add WebSocket / real-time features
- 🧩 Plugin system (extension marketplace)
- 🔐 Full auth & user permissions
- 📊 Dashboard / analytics
- ☁️ One-click deploy / CI/CD
- 🧠 Fine-tuning & model switching support
## Contributing

You’re welcome to contribute — code, docs, tests, ideas. Here’s how:
- Fork the repo
- Create a branch: `feature/your-thing`
- Make changes & add tests
- Submit a pull request with a clear description
- We’ll review, discuss, iterate
Please abide by the repository’s code style, tests, and commit conventions.
## License

This project is licensed under the MIT License. See the `LICENSE` file.
## Acknowledgements

- Inspiration / code snippets from various open-source AI fullstack projects
- Thanks to the open source community & model providers