Ollama

Running local models


A step-by-step guide on how to run local models using Ollama
🦙 Ollama suggests at least 8 GB of RAM to run the 7B models, 16 GB for the 13B models, and 32 GB for the 33B models.
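
To check which model sizes your machine can comfortably run, you can compare total physical RAM against these thresholds. A minimal illustrative sketch in Python (macOS/Linux, standard library only; this helper is ours, not part of Ollama or Kerlig):

    import os

    # Total physical RAM in GiB (os.sysconf is unavailable on Windows).
    total_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3

    # Thresholds mirror the guidance quoted above.
    if total_gb >= 32:
        largest = "33B"
    elif total_gb >= 16:
        largest = "13B"
    elif total_gb >= 8:
        largest = "7B"
    else:
        largest = "models smaller than 7B"

    print(f"~{total_gb:.0f} GB RAM: up to {largest} models")
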
  1. Download Ollama

    Visit the official Ollama website at ollama.com and download the app.

  2. Launch Ollama

    Open the downloaded app. Ollama needs to be running in the background for Kerlig to use it.

  3. Download and enable models

    Go to Kerlig Settings → Integrations → Ollama and enable the models you wish to use by toggling the switches.

    Please note that toggling a model initiates its download. Since models are typically several gigabytes in size, downloading will take some time. You can close the Settings window; the models will continue downloading in the background. (See the sketch after these steps for one way to check which models have finished.)
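
To confirm that Ollama is running and see which models have finished downloading, you can query its local REST API. A minimal sketch, assuming Ollama's default address http://localhost:11434 and its documented GET /api/tags endpoint:

    import json
    import urllib.request

    # List the models that are fully downloaded and available locally.
    with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
        data = json.load(resp)

    for model in data.get("models", []):
        print(f"{model['name']}  ({model['size'] / 1024**3:.1f} GB)")

A model should appear in this list only once its download is complete.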

Adding custom models

The model list in Kerlig Settings shows only the most popular and newest models, but Ollama supports 100+ models. Explore the full library at ollama.com/library. To use a model that is not listed in Settings, follow these steps:

  1. Choose a model

    Visit ollama.com/library and click on a model you wish to use.

  2. Decide which version you want to use

    Models come in multiple versions (sizes); for example, codegemma comes in 2b and 7b versions, so the model names are codegemma:2b and codegemma:7b, respectively.

    The 2b/7b/13b/33b suffix is the model's parameter count in billions. Models with more parameters generally produce better results but require more memory and compute.

  3. Add custom model

    Go to Kerlig Settings → Integrations → Ollama and, in the Add Custom Model section, fill in the Model Display Name (e.g. Code Gemma) and the actual model name (e.g. codegemma:7b). Then click Add.

  4. Download and enable models

    Once the model appears in the list, click its toggle to start the download. After the download completes, the model is ready for use. The same download can also be triggered from a script, as sketched below.
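
If you prefer the command line, ollama pull codegemma:7b downloads the same model. The download can also be scripted against Ollama's documented POST /api/pull endpoint; a minimal sketch, again assuming the default address:

    import json
    import urllib.request

    # "stream": False makes the request block until the pull finishes.
    payload = json.dumps({"model": "codegemma:7b", "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/pull",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["status"])  # prints "success" when done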
