== Installing it ==
 
=== Use the Install script ===
Visit https://ollama.com/download and use the installer shell script. In other words:
 
<code>curl -fsSL <nowiki>https://ollama.com/install.sh</nowiki> | sh</code>
 
The script describes what it is doing:
<pre>
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> NVIDIA GPU installed.
</pre>
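
The script created and enabled a systemd service, so a quick way to confirm it actually came up (not part of the installer output, just a standard systemd check) is:

<code>systemctl status ollama</code>

which should report the unit as <tt>active (running)</tt>.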
 


<code>ollama -v</code> shows that it is running by printing the version:
<pre>
ollama version is 0.9.1
</pre>
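
Another sanity check is the HTTP API itself; the service listens on port 11434 by default, and the root endpoint just reports that the server is up:

<code>curl <nowiki>http://localhost:11434/</nowiki></code>

should answer with <tt>Ollama is running</tt>.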


=== Simply running it as a Docker Image ===


The important part of the container log seems to be '<tt>Auto-detected mode as legacy</tt>', '''and''' the NVML driver/library mismatch error is certainly a problem. (Is it the same problem, or two separate problems?)


Running the image from Docker Desktop (after setting options for ports and volumes) and copying the 'run' command spits out:


<code>docker run --hostname=3f50cd4183bd --mac-address=02:42:ac:11:00:02 --env=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin --env=LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 --env=NVIDIA_DRIVER_CAPABILITIES=compute,utility --env=NVIDIA_VISIBLE_DEVICES=all --env=OLLAMA_HOST=0.0.0.0:11434 --volume=ollama:/root/.ollama --network=bridge -p 11434:11434 --restart=no --label='org.opencontainers.image.ref.name=ubuntu' --label='org.opencontainers.image.version=20.04' --runtime=runc -d ollama/ollama:latest</code>
Clearly, the full ollama setup is supposed to be run as 'root'. It is not designed to be run as a regular user who has ''docker'' or ''sudo'' / ''adm'' group membership.
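
For comparison, the upstream-documented invocation for NVIDIA GPUs is much shorter. This assumes the NVIDIA Container Toolkit is installed on the host; it is the recipe from the ollama/ollama Docker Hub page, not something verified here:

<code>docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama</code>

Note that <tt>--gpus=all</tt> is what actually hands the GPU to the container; the Docker Desktop command above sets <tt>NVIDIA_VISIBLE_DEVICES=all</tt> but keeps <tt>--runtime=runc</tt>, so it presumably falls back to CPU.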


== Docs ==
The [https://github.com/ollama/ollama/blob/main/docs/linux.md docs] tell you how you can customize, update, or uninstall the environment.

Looking at the logs with <code>journalctl -e -u ollama</code> told me what my newly generated public key is, but also that it could not load a compatible GPU.
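
The customization the docs describe goes through a systemd override. A minimal sketch, assuming you want the API reachable from other machines (<tt>OLLAMA_HOST</tt> is the variable the docs name; the value is just an example):

<pre>
sudo systemctl edit ollama
# in the override file that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl daemon-reload
sudo systemctl restart ollama
</pre>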
=== Problems with GPU ===
The docs tell you to check with <code>nvidia-smi</code>, which failed:
<pre>
Failed to initialize NVML: Driver/library version mismatch
NVML library version: 550.144
</pre>
So, apparently I'm supposed to install the CUDA Toolkit and the Driver.
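
For what it's worth, that NVML message usually means the loaded kernel module and the user-space driver library have drifted apart, typically after a driver upgrade without a reboot. This is general NVIDIA troubleshooting, not something from the ollama docs:

<pre>
cat /proc/driver/nvidia/version   # version of the currently loaded kernel module
sudo reboot                       # often enough to bring module and library back in sync
</pre>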
{{References}}

[[Category:Artificial Intelligence]]