[NeMoClaw][Ubuntu + Docker CE] Ollama is offered during onboarding even when no local models are installed, leading to a misleading default model selection and late failure #710

@JoyceChenNV

Description

When Ollama is installed and running but has no local models pulled, NemoClaw onboarding still presents Ollama as a valid inference option and lists nemotron-3-nano:30b as a selectable model. After selection, onboarding fails later with a "model not found" error. This is misleading because the UI implies a usable local model exists when it does not.

[Screenshot: onboarding model selection showing nemotron-3-nano:30b]

Reproduction Steps

  1. Install Ollama on Ubuntu.
  2. Ensure the Ollama service is running.
  3. Confirm that ollama list returns no models.
  4. Run curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash.
  5. During onboarding, choose Local Ollama.
  6. Observe that nemotron-3-nano:30b is shown as an available model even though no model is installed.
  7. Select that model and continue.
  8. Observe that onboarding fails later with a "model not found" error.
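A minimal sketch of the kind of pre-check onboarding could run before offering Ollama as an option. It queries Ollama's /api/tags endpoint for locally pulled models; the default endpoint URL and the function names here are assumptions for illustration, not NemoClaw code:

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

# Default Ollama HTTP endpoint (assumption; may differ per install)
OLLAMA_TAGS_URL = "http://localhost:11434/api/tags"

def installed_ollama_models(tags_json: str) -> list[str]:
    """Parse an /api/tags response body and return the locally pulled model names."""
    payload = json.loads(tags_json)
    return [m["name"] for m in payload.get("models", [])]

def ollama_is_usable(url: str = OLLAMA_TAGS_URL) -> bool:
    """True only if Ollama is reachable AND has at least one local model pulled."""
    try:
        with urlopen(url, timeout=2) as resp:
            return len(installed_ollama_models(resp.read().decode())) > 0
    except URLError:
        return False  # server unreachable: don't offer Ollama at all
```

In the situation described in this report, /api/tags returns an empty model list, so a check like this would let onboarding grey out or skip the Ollama option instead of failing later with "model not found".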

Environment

  • OS: Ubuntu
  • Container runtime: Docker CE
  • GPU: Available
  • Ollama: Installed
  • Ollama models: None (ollama list is empty)
  • NemoClaw install method: curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash

Debug Output

Logs

Checklist

  • I confirmed this bug is reproducible
  • I searched existing issues and this is not a duplicate

Metadata

    Labels

    • Getting Started: setup, installation, or onboarding issues
    • Local Models: Running NemoClaw with local models
    • NV QA: Bugs found by the NVIDIA QA Team
    • bug: Something isn't working
