Description
When Ollama is installed and running but no local models have been pulled, NemoClaw onboarding still presents Ollama as a valid inference option and lists nemotron-3-nano:30b as a selectable model. After the model is selected, onboarding fails later with a "model not found" error. This is misleading: the UI implies a usable local model exists when it does not.
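The missing check could happen before the model picker is shown. A minimal sketch, assuming the output of ollama list is a header row plus one row per pulled model; the variable names are illustrative and not NemoClaw code:

```shell
# Simulated `ollama list` output on an install with no models pulled:
# only the header row is present.
OLLAMA_LIST_OUTPUT='NAME    ID    SIZE    MODIFIED'
# Count the rows after the header; zero means no local model is usable.
MODEL_COUNT=$(printf '%s\n' "$OLLAMA_LIST_OUTPUT" | tail -n +2 | wc -l)
if [ "$MODEL_COUNT" -eq 0 ]; then
  echo 'no local models pulled; do not offer Ollama as an inference option'
fi
```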
Reproduction Steps
- Install Ollama on Ubuntu.
- Ensure Ollama is running.
- Confirm ollama list is empty.
- Run curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash.
- During onboarding, choose Local Ollama.
- Observe that nemotron-3-nano:30b is shown as the available model.
- Select that model and continue.
- Observe that onboarding later fails with a "model not found" error.
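The empty-model state the onboarding UI should have detected can also be confirmed through Ollama's HTTP API, which lists pulled models at /api/tags. The response is hard-coded below as a stand-in for the live reply from an empty install (an assumption about the exact payload, so the check is self-contained):

```shell
# Stand-in for the live response of, e.g.:
#   curl -s http://localhost:11434/api/tags
TAGS_JSON='{"models":[]}'
# No "name" field in the response means no model exists locally, so
# nemotron-3-nano:30b should not have been offered during onboarding.
if ! printf '%s' "$TAGS_JSON" | grep -q '"name"'; then
  echo 'Ollama reports zero local models'
fi
```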
Environment
- OS: Ubuntu
- Container runtime: Docker CE
- GPU: Available
- Ollama: Installed
- Ollama models: None (ollama list is empty)
- NemoClaw install method: curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
Debug Output
Logs