This repository is designed to teach students how to use Ollama via the official Python API.
To get started, first make sure you have Ollama installed from their website.
⚠️ Warning: On Windows 11, Smart App Control may block the Ollama installer. If this happens, temporarily disable Smart App Control during installation, then re-enable it afterwards.
Next, install the required dependencies for your Python environment. Activate an environment using your favourite environment manager (e.g., Anaconda), then install all the dependencies by running the following command:
```
pip install -r requirements.txt
```

To run the script, first make sure you have the Ollama server running (by opening the Ollama app on your computer).
Then, pull the model you want to use. For this tutorial, we will use the llama3.2:1b model, so run the following command:

```
ollama pull llama3.2:1b
```

💡 Note: you can check out other available models in the Ollama model library.
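If you prefer to stay in Python, the official client can also pull models programmatically. The sketch below is an assumption-level example (the helper name is ours, not part of this repository) and requires the Ollama server to be running:

```python
# Models can also be pulled from Python instead of the CLI.
# This mirrors `ollama pull llama3.2:1b`; it needs a running Ollama server.

def ensure_model(name: str) -> None:
    """Pull a model by name, like the `ollama pull` command above."""
    import ollama  # lazy import: only needed once a server is available
    ollama.pull(name)

# Example (with the server running):
#   ensure_model("llama3.2:1b")
```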
Finally, use the following command to run the script:
```
python llm_client.py
```

or just run it in your favourite Python IDE.
⚠️ Warning: If you get a "command not found" (or similar) error, close and reopen your IDE so it picks up the updated PATH.
This script demonstrates the basic usage of the Ollama API for text-only LLMs.
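As a rough sketch of what such a usage looks like (the helper names and prompt below are illustrative assumptions, and the actual `llm_client.py` may differ), a minimal text-only chat call with the official `ollama` package is:

```python
# Minimal sketch of a text-only chat call with the official `ollama` package.
# Assumes the server is running and llama3.2:1b has been pulled; the prompt
# and helper names are illustrative, not taken from llm_client.py.

MODEL_NAME = "llama3.2:1b"

def build_messages(prompt: str) -> list[dict]:
    """Wrap a user prompt in the chat-message format the API expects."""
    return [{"role": "user", "content": prompt}]

def ask(prompt: str) -> str:
    """Send one prompt to the local Ollama server and return the reply text."""
    import ollama  # lazy import: only needed once a server is available
    response = ollama.chat(model=MODEL_NAME, messages=build_messages(prompt))
    return response["message"]["content"]

# Example (with the server running):
#   print(ask("Why is the sky blue?"))
```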
Next, pull the vision model you want to use. For this tutorial, we will use the qwen3-vl:2b-instruct model, so run the following command:

```
ollama pull qwen3-vl:2b-instruct
```

To run the second script, use the following command:
```
python vlm_client.py
```

This script provides an example of how to run vision+language models that are available in Ollama.
💡 Note: You can check out other vision models such as llama3.2-vision; however, they might require more system memory than is available on your machine. If you want to use another model, change the value of the `model_name` variable in `vlm_client.py` accordingly.
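For reference, a vision+language call differs from the text-only case mainly by attaching image paths to the message. The sketch below is an assumption (the image path, prompt, and helper names are ours, and `vlm_client.py` may be structured differently):

```python
# Sketch of a vision+language chat call with the official `ollama` package.
# The image path, prompt, and helper names are illustrative assumptions.

MODEL_NAME = "qwen3-vl:2b-instruct"  # change this to try another vision model

def build_vision_messages(prompt: str, image_path: str) -> list[dict]:
    """Attach a local image file to a user prompt; the client reads and
    encodes the file for you."""
    return [{"role": "user", "content": prompt, "images": [image_path]}]

def describe(image_path: str) -> str:
    """Ask the model to describe one image via the local Ollama server."""
    import ollama  # lazy import: only needed once a server is available
    response = ollama.chat(
        model=MODEL_NAME,
        messages=build_vision_messages("Describe this image.", image_path),
    )
    return response["message"]["content"]

# Example (with the server running and an image on disk):
#   print(describe("example.jpg"))
```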
You can list all pulled models by running:
```
ollama list
```

To save space on your local machine, you can delete the model you pulled earlier by running:

```
ollama rm MODEL_NAME
```
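This housekeeping can also be done from Python. A hedged sketch (the helper names are ours; the exact shape of the `ollama.list()` response varies between client versions, so inspect it on your installed version before relying on it):

```python
# Listing and deleting models from Python, mirroring `ollama list` / `ollama rm`.
# The list() response shape differs across client versions (dict vs. typed
# object); this helper handles the dict form as an illustrative assumption.

def extract_model_names(listing: dict) -> list[str]:
    """Return model names from a dict-shaped `ollama.list()` response."""
    return [m.get("model") or m.get("name", "") for m in listing.get("models", [])]

def remove_model(name: str) -> None:
    """Delete a pulled model, like `ollama rm NAME`."""
    import ollama  # lazy import: only needed once a server is available
    ollama.delete(name)

# Example (with the server running):
#   import ollama
#   listing = ollama.list()   # inspect this object's shape first
#   remove_model("llama3.2:1b")
```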