Ollama
Ollama is a tool that allows users to run large language models (LLMs) directly on their own computers, making powerful AI technology accessible without relying on cloud services. It provides a user-friendly way to manage, deploy, and integrate LLMs, offering greater control, privacy, and customization compared to traditional cloud-based solutions.
[https://www.ycombinator.com/companies/ollama Ollama] was funded by [https://www.ycombinator.com/people/jared-friedman Jared Friedman] out of Y Combinator (YC). Founders Jeffrey Morgan and [https://2024.allthingsopen.org/speakers/michael-chiang Michael Chiang] wanted an easier way to run LLMs than having to do it in the cloud. In fact, they had previously founded Kitematic, an early UI for Docker; after Docker acquired it, it became the precursor to [[Docker Desktop]].
Ollama lets you get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1, and other large language models.
== Installing it ==
I had some problems getting off the ground with Ollama. Some details are in [[Ollama/install]]. The problems were described better in [[Nvidia on Ubuntu]], where we cover one of the greatest challenges of Linux computing: graphics drivers.
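On Linux, the usual starting point (as of this writing) is the official install script; this is just a sketch of the standard route, so check the current docs before piping a script into your shell:

<pre>
# Download and run the official install script (sets up the binary and a systemd service)
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the service is running
systemctl status ollama
</pre>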
== Docs ==
The [https://github.com/ollama/ollama/blob/main/docs/linux.md docs] tell you how to customize, update, or uninstall the environment.
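Much of the customization on Linux happens through environment variables set on the systemd service. A minimal sketch, assuming a systemd-based install and the documented <code>OLLAMA_HOST</code> and <code>OLLAMA_MODELS</code> variables:

<pre>
# Create a systemd override for the ollama service
sudo systemctl edit ollama.service

# In the editor, add variables under a [Service] section, for example:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"          # listen on all interfaces instead of just localhost
#   Environment="OLLAMA_MODELS=/data/ollama"   # keep downloaded models somewhere other than the default

# Reload systemd and restart the service so the overrides take effect
sudo systemctl daemon-reload
sudo systemctl restart ollama
</pre>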
Looking at the logs with <code>journalctl -e -u ollama</code> told me what my newly generated public key is, but also that it could not load a compatible GPU, so I spent time fixing that.
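Once the driver situation is sorted out, a quick sanity check (assuming an NVIDIA card) is to load a model and see where Ollama actually put it:

<pre>
# Run a one-shot prompt, then list running models; the PROCESSOR column shows the CPU/GPU split
ollama run gemma3 "hello"
ollama ps

# On NVIDIA hardware, watch the card's memory usage while a model is loaded
nvidia-smi
</pre>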
Start with the [https://github.com/ollama/ollama/blob/main/README.md README] for an intro.
== Interface ==
Although you can instantly begin using a model from the command line with something like <code>ollama run gemma3</code>,<ref>This will download and run a 4B parameter model.</ref> there are many user interfaces or front-ends that can be coupled with Ollama, such as [https://github.com/open-webui/open-webui Open WebUI].
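These front-ends generally talk to the local REST API that Ollama exposes (on port 11434 by default), and you can hit it directly too. A minimal sketch against the documented <code>/api/generate</code> endpoint, assuming the <code>gemma3</code> model has already been pulled:

<pre>
# One-off, non-streaming generation request to the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "gemma3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
</pre>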
{{References}} | |||
[[Category:Artificial Intelligence]]