Running local models
A step-by-step guide to running local models with Ollama
1. Download Ollama
Visit the official Ollama website at ollama.com and download the app.
2. Launch Ollama
3. Download and enable models
In Kerlig, go to Settings → Integrations → Ollama and enable the models you wish to use by toggling the switches. Please note that toggling a model starts its download. Since models are typically several gigabytes in size, downloading them will take some time. You can close the Settings window; the models will continue to download in the background.
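Under the hood, the downloaded models are served by Ollama's local HTTP server, which listens on port 11434 by default. A minimal sketch of how a client talks to a model through that server, assuming the standard /api/generate endpoint and a model that has already finished downloading:

```python
# Sketch of a request to Ollama's local HTTP API (default port 11434).
# Assumes the Ollama app is running and the model has been downloaded.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks Ollama for one complete response instead of chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a local model and return its full response text."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

For example, `generate("llama3", "Write a haiku")` would block until the whole response is generated, because `stream` is set to `False`.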
Adding custom models
The list of Ollama models in the Settings displays only the most popular and newest models. However, Ollama supports 100+ models; explore the full library at ollama.com/library. To use a model not listed in Settings, follow these steps:
1. Choose a model
Visit ollama.com/library and click on the model you wish to use.
2. Decide which version you want to use
Models come in multiple versions (sizes). For example, codegemma comes in 2b and 7b versions, so the model names are codegemma:2b and codegemma:7b respectively. 2b/7b/13b/33b etc. stands for the number of model parameters in billions. The more parameters, the more capable the model tends to be, but also the more memory and compute it requires.
3. Add a custom model
Go to Kerlig Settings → Integrations → Ollama, and in the Add Custom Model section fill in the Model Display Name (e.g. Code Gemma) and the actual model name (e.g. codegemma:7b). Then click Add.
4. Download and enable the model
Once the model appears in the list, click its toggle. After the download completes, the model is ready to use.
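You can also confirm that a custom model finished downloading by querying Ollama directly. A small sketch, assuming Ollama's standard /api/tags endpoint, which lists the models installed in the local store:

```python
# Sketch: list locally installed Ollama models via the /api/tags endpoint.
# Assumes the Ollama app is running on the default port 11434.
import json
import urllib.request

def parse_installed(tags_json: str) -> list[str]:
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

def installed_models(url: str = "http://localhost:11434") -> list[str]:
    """Return the names of all models currently in Ollama's local store."""
    with urllib.request.urlopen(f"{url}/api/tags") as resp:
        return parse_installed(resp.read().decode())
```

If `installed_models()` returns a list containing, say, `codegemma:7b`, the download is complete and the toggle in Kerlig will enable the model immediately.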
© 2024 Kerlig™. All rights reserved.