I noticed ChatGPT being pretty slow today compared to the local DeepSeek I have running, which is pretty sad since my computer is about a bajillion times less powerful.
Any recommendations for communities to learn more?
Frustratingly, their setup guide is terrible. I eventually managed to get it running. I downloaded a model, and only after the download finished did it inform me I didn't have enough RAM to run it, something it could have checked before the slow download. Then I discovered my GPU isn't supported, and running it on the CPU is painfully slow. I'm using an AMD 6700 XT, and the minimum listed is the 6800: https://github.com/ollama/ollama/blob/main/docs/gpu.md#amd-radeon
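(That same gpu.md describes an override for cards that are close to a supported architecture; I haven't tried it on the 6700 XT, but it's supposed to look something like this:)

```sh
# Untested, per ollama's gpu.md overrides section: tell ROCm to treat the
# card as gfx1030 (the supported 6800-series target) before starting ollama.
HSA_OVERRIDE_GFX_VERSION="10.3.0" ollama serve
```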
If you’re setting up from scratch, I recommend using Open WebUI. You can install it with GPU support on Docker/Podman in a single command, then quickly add any of the Ollama models through its UI.
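If I remember the Open WebUI README right, the single command is along these lines (image tag and flags from memory, so double-check their docs):

```sh
# Open WebUI with a bundled Ollama and GPU support, all in one container
docker run -d -p 3000:8080 --gpus=all \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:ollama
```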
Thanks, I did get both set up with Docker; my frustration was that neither ollama nor open-webui included instructions on how to set the two up together.
In my opinion, setup instructions should guide you to a usable setup. It’s a missed opportunity not to include a docker-compose.yml connecting the two. Is anyone really using ollama without a UI?
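Even a minimal sketch like this would do (untested; image tags, ports, and the env var name assumed from each project's docs):

```yaml
# docker-compose.yml sketch wiring Ollama and Open WebUI together
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      # point the UI at the ollama service on the compose network
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:
```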
Do they not know it works offline too?
Is it possible to download it without first signing up to their website?
You can get it from Ollama
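For example (model tag assumed; check ollama.com/library for the sizes that fit your RAM):

```sh
# download a distilled DeepSeek-R1 model and chat with it locally
ollama pull deepseek-r1:7b
ollama run deepseek-r1:7b
```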
Thanks
The link I posted has a command that sets them up together, though; then you just go to https://localhost:3000/