Tool to manage llama.cpp model configurations.
- Load / unload a model.
- Store multiple model configurations.
- PWA, can be installed as a desktop app.
- Monitor usage of CPU, GPU, RAM and VRAM.
- Install dependencies:
npm install
- Start the server:
npm run start
- Open browser to
http://localhost:3001
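
The install and start steps above can be run from a terminal; the `curl` check is an optional way to confirm the server is up (it assumes the default port 3001 shown above):

```shell
# Install dependencies and start the server
npm install
npm run start

# In another terminal: confirm the server responds
# (assumes the default port 3001)
curl -I http://localhost:3001
```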
- Click the Settings button to open the settings dialog and set the model directory path.

- Click the Add button to create a new configuration for a model; clicking Launch auto-saves the model's settings.
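
A stored configuration typically captures the parameters used to launch the model. The fields below are illustrative assumptions only, not the tool's actual schema:

```json
{
  "name": "llama-3-8b-q4",
  "model": "/path/to/models/llama-3-8b-instruct.Q4_K_M.gguf",
  "port": 8080,
  "ctx_size": 4096,
  "n_gpu_layers": 35
}
```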

- Once you have added several configurations, you can use the presets feature of llama.cpp, which lets you load and unload models from the llama.cpp interface itself.
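
For reference, a saved configuration like the one above roughly corresponds to a `llama-server` invocation such as the following sketch; the model path and parameter values are placeholders:

```shell
# Illustrative llama-server launch matching a stored configuration.
# Path and values are placeholders, not defaults.
llama-server \
  -m /path/to/models/llama-3-8b-instruct.Q4_K_M.gguf \
  --port 8080 \
  --ctx-size 4096 \
  --n-gpu-layers 35
```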
