Ollama Tutorial

This repository is designed to teach students how to use Ollama via the official Python API.

Installation

To get started, first make sure you have Ollama installed from their website.

⚠️ Warning: On Windows 11, Smart App Control may block the Ollama installer. If this happens, temporarily disable Smart App Control during installation, then re-enable it afterwards.

Then, set up a Python environment for the tutorial's dependencies: activate one using your favourite environment manager (e.g., Anaconda).
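For example, with Anaconda you can create and activate a fresh environment as follows (the environment name and Python version here are illustrative, not prescribed by this repository):

conda create -n ollama-tutorial python=3.11
conda activate ollama-tutorial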

Once the environment is active, install all the dependencies by running the following command:

pip install -r requirements.txt

Running the Scripts

Interacting with an LLM: llm_client.py

To run the script, first make sure you have the Ollama server running (by opening the Ollama app on your computer).

Then, pull the model you want to use. This tutorial uses the llama3.2:1b model, so run the following command:

ollama pull llama3.2:1b

💡 Note: you can check out other available models in the Ollama model library.

Finally, use the following command to run the script:

python llm_client.py

or just run it in your favourite Python IDE.

⚠️ Warning: If you get an error ("command not found" or similar), close and reopen your IDE so that it picks up the updated PATH.

This script demonstrates the basic usage of the Ollama API for text-only LLMs.
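For a rough idea of what such a client looks like, here is a minimal sketch using the official ollama Python package (the actual llm_client.py in this repository may be structured differently, and the prompt text is illustrative):

import ollama

# Send a single chat turn to the locally running Ollama server.
response = ollama.chat(
    model="llama3.2:1b",
    messages=[
        {"role": "user", "content": "Explain what a language model is in one sentence."},
    ],
)

# The generated reply lives in the message's content field.
print(response["message"]["content"])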

Interacting with a VLM: vlm_client.py

Next, pull the vision model you want to use. This tutorial uses the qwen3-vl:2b-instruct model, so run the following command:

ollama pull qwen3-vl:2b-instruct

To run the second script, use the following command:

python vlm_client.py

This script provides an example of how to run vision+language models that are available in Ollama.

💡 Note: You can check out other vision models such as llama3.2-vision; however, they may require more system memory than is available on your machine. If you want to use another model, change the value of the model_name variable in vlm_client.py accordingly.
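For reference, a minimal vision call with the ollama package looks roughly like the following sketch (not necessarily how vlm_client.py is written; photo.jpg is a placeholder path to a local image):

import ollama

model_name = "qwen3-vl:2b-instruct"

# Attach an image to the user message; the images field accepts local file
# paths as well as raw bytes or base64-encoded strings.
response = ollama.chat(
    model=model_name,
    messages=[
        {
            "role": "user",
            "content": "Describe what you see in this image.",
            "images": ["photo.jpg"],  # placeholder path
        },
    ],
)

print(response["message"]["content"])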

Useful Commands

You can list all pulled models by running:

ollama list

To save space on your local machine, you can delete the model you pulled earlier by running:

ollama rm MODEL_NAME
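For example, to remove the text model pulled earlier in this tutorial:

ollama rm llama3.2:1b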
