Thanks for helping improve Quadtrix.cpp. This project is a transformer learning lab with several execution paths: native C++ training and inference, PyTorch experiments, a FastAPI backend, and a React + TypeScript chat UI. Contributions are easiest to review when they keep those paths clear and testable.
Useful contributions include:
- Fixing correctness bugs in the C++ transformer implementation.
- Improving training, inference, checkpoint loading, or export scripts.
- Making the FastAPI backend more reliable and easier to run locally.
- Improving the React chat UI without hiding the model/backend behavior.
- Adding focused documentation for setup, model files, datasets, or run commands.
- Tightening CI, dependency versions, packaging, or release steps.
For larger model architecture changes, open an issue first so the design can be discussed before a big patch lands.
Repository layout:

```
Quadtrix.cpp/
  main.cpp             Native C++ entry point
  include/             C++ headers
  src/                 C++ source files
  engine/              PyTorch training, inference, export, and model files
  backend/             FastAPI backend and session handling
  frontend/            React + TypeScript chat UI
  iGPU/                Integrated GPU experiments
  config/              Runtime configuration
  data/                Local datasets and helpers
  .github/workflows/   CI and release workflows
```
Create a Python virtual environment and install the backend dependencies. From the repository root:
```
python -m venv .venv
.\.venv\Scripts\python.exe -m pip install --upgrade pip
cd backend
..\.venv\Scripts\python.exe -m pip install -r requirements.txt
```

Install frontend dependencies:
```
cd frontend
npm.cmd install
```

Build the native C++ runtime from the repository root:
```
g++ -std=c++17 -O2 -I. -Iinclude -o quadtrix main.cpp
```

Run the C++ model:
```
.\Quadtrix.exe data\input.txt --chat
```

Run the C++ HTTP server:
```
.\Quadtrix.exe data\input.txt --server --port 8080
```

Run the FastAPI backend:
```
cd backend
..\.venv\Scripts\python.exe -m uvicorn main:app --host 127.0.0.1 --port 3001
```

Run the frontend:
```
cd frontend
npm.cmd run dev
```

Open the app at http://localhost:5173.
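With the C++ server, the FastAPI backend, and the Vite dev server running in separate terminals, it can help to confirm each port is actually listening before debugging the UI. A minimal stdlib probe (the helper name `is_listening` is just for illustration, not part of the project):

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused or timed out: nothing is listening.
        return False

# Example: the ports used in the commands above.
for name, port in [("c++ server", 8080), ("backend", 3001), ("frontend", 5173)]:
    print(name, "up" if is_listening("127.0.0.1", port) else "down")
```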
Run the checks that match your change.
C++:
```
g++ -std=c++17 -O2 -I. -Iinclude -o quadtrix main.cpp
```

Python:
```
.\.venv\Scripts\python.exe -m compileall backend engine iGPU
cd backend
..\.venv\Scripts\python.exe -c "from main import app; print(app.title)"
```

Frontend:
```
cd frontend
npm.cmd run build
```

If you cannot run a relevant check, mention that in the pull request and explain why.
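The `compileall` check can also be driven from a script, which is handy if you want a single pass/fail result in CI. A sketch using the stdlib `compileall` module (the helper name is chosen here, not taken from the repo):

```python
import compileall

def compile_tree(dirs=("backend", "engine", "iGPU")) -> bool:
    """Byte-compile every .py file under each directory.

    Returns False if any file fails to compile (e.g. a syntax error);
    quiet=1 still prints the errors themselves.
    """
    return all(compileall.compile_dir(d, quiet=1) for d in dirs)
```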
- Keep changes focused on one problem or feature.
- Use the existing style of the file you are editing.
- Avoid committing generated artifacts unless the project already expects them.
- Do not commit `.env` files, secrets, private datasets, or personal checkpoints.
- Update `README.md`, `run.md`, or related docs when commands or behavior change.
- Include screenshots or short notes for UI changes.
- Mention any change that affects model files, ports, CORS, service workers, or packaging.
The pull request template asks for:
- Summary and user-facing impact.
- C++ build status.
- Backend smoke-test status.
- Frontend build status.
- Documentation or screenshot updates when needed.
For C++ changes:
- Prefer clear, debuggable code over clever abstractions.
- Keep the educational value of the implementation visible.
- Be careful with tensor shapes, bounds, and ownership.
- Add comments only where the math or control flow is not obvious.
For Python changes:
- Keep backend behavior explicit and local-development friendly.
- Avoid broad exception swallowing around model loading or inference.
- Treat model paths, datasets, and request payloads as untrusted inputs.
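As a concrete illustration of the last point, a backend can normalize a client-supplied model path and refuse anything that escapes the expected directory. A sketch using only the stdlib (the directory and function names are assumptions for illustration, not the project's actual API):

```python
from pathlib import Path

MODELS_DIR = Path("engine").resolve()  # assumed location of model files

def resolve_model_path(requested: str) -> Path:
    """Resolve an untrusted model name, rejecting directory traversal."""
    candidate = (MODELS_DIR / requested).resolve()
    try:
        # Raises ValueError if candidate is outside MODELS_DIR.
        candidate.relative_to(MODELS_DIR)
    except ValueError:
        raise ValueError(f"model path escapes {MODELS_DIR}: {requested!r}")
    return candidate
```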
For frontend changes:
- Keep the chat UI practical and responsive.
- Preserve the ability to switch between the C++ backend and the PyTorch `.pt` path.
- Make loading, error, and disconnected states clear.
Use concrete commands and paths. Quadtrix.cpp has multiple runtime paths, so say exactly which path a command belongs to: C++, PyTorch, FastAPI backend, frontend, iGPU, or packaging.
When documenting training results, include the hardware, dataset, iteration count, elapsed time, and validation metric so results can be compared fairly.
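For example, a run note (or a JSON sidecar saved next to a checkpoint) might capture those fields like this; the field names are suggestions and every value shown is a placeholder, not a real result:

```python
import json

run_record = {
    "hardware": "example CPU/GPU description",  # placeholder, not a real run
    "dataset": "data/input.txt",
    "iterations": 5000,            # placeholder
    "elapsed_seconds": 1800,       # placeholder
    "val_loss": 1.52,              # placeholder validation metric
}
print(json.dumps(run_record, indent=2))
```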