Just wanted to say, 'Hi'

I’m something of a noob to OpenWebUI, but I do have a Docker stack running WebUI, Ollama, LiteLLM, and associated support services. My intent is to run smaller 3-7B parameter models locally, along with some RAG capabilities, for learning purposes. I believe the real future for personal LLMs is smaller yet capable models running on edge devices. To that end, I am using a Ryzen 9 (16 cores, 32 threads) CPU-only system with 64 GB of RAM, running Ubuntu 25.04. So far I’m pleased with the performance.
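For anyone curious about the shape of the stack, here’s a minimal docker-compose sketch of roughly what I mean. The image tags, ports, and volume names are typical defaults I’ve seen, not necessarily my exact setup:

```yaml
# docker-compose.yml -- a sketch; images, ports, and volumes are assumptions
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama          # where pulled models live
    ports:
      - "11434:11434"

  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    command: ["--config", "/app/config.yaml"]
    volumes:
      - ./litellm-config.yaml:/app/config.yaml
    ports:
      - "4000:4000"
    depends_on:
      - ollama

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # point WebUI at the ollama service
    volumes:
      - open-webui:/app/backend/data
    ports:
      - "3000:8080"
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:
```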

I have much to learn about my docker stack and am hoping that I find a community here to help me.

Regards

Hi ya! You’ve set up roughly what I have. I have a slightly less beefy system, but with a 3080 video card. I try to use it instead of the hosted platforms, and I have OpenRouter credits so I can use up-to-date big models too. I love that. Sadly, the local models are a bit stupid, but they’re nice for occasional queries. BTW, I think this site is sleeping, but nice to meet you anyway.
Lucy


Oh, and I have a Cloudflare tunnel (free!). That was a game changer… so I can access my OpenWebUI anywhere. You probably know about that tho. :slight_smile:
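Rough sketch of what the setup looked like for me, in case it helps (the tunnel name and hostname here are placeholders, and the `<tunnel-id>` file is whatever `create` spits out):

```sh
# one-time setup -- tunnel name and hostname are made-up examples
cloudflared tunnel login
cloudflared tunnel create openwebui
cloudflared tunnel route dns openwebui webui.example.com

# then run it against a config like ~/.cloudflared/config.yml:
#   tunnel: openwebui
#   credentials-file: ~/.cloudflared/<tunnel-id>.json
#   ingress:
#     - hostname: webui.example.com
#       service: http://localhost:3000   # wherever OpenWebUI listens
#     - service: http_status:404
cloudflared tunnel run openwebui
```

The nice part is nothing is port-forwarded on my router; the tunnel dials out to Cloudflare, so there’s no open inbound port to worry about.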


Hi Lucy. Thanks for replying to my post. I’ve been trying to decide how I want to provide external access to my lab. I’d be interested to hear your success story. Every time I think about exposing it to the outside world, my blood runs cold. It really feels like I’d be diving into hungry, shark-infested waters. :scream:

I am a little disappointed in the low traffic volume on here. I’ve read most posts and am working my way through the videos, gleaning what I can from them. It looks like it started with enthusiasm but didn’t catch on. That is too bad. I think that there is a need. With the popularity of OWUI one would hope that there is an active community of users. Have you found a more vibrant landing spot of kindred spirits?

I remain hopeful that smaller models will get a little smarter, perhaps not generally but in more task-specific ways. One of the primary goals of my little lab is to play with multiple small models, trying to leverage them off each other and call in the big guns only when necessary. Having the LiteLLM proxy service running overwhelms me with models to choose from.
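If I’m reading the LiteLLM docs right, its fallback routing might be one way to wire up the “call in the big guns” idea. A sketch of a proxy config, where every model name and key here is a placeholder I made up:

```yaml
# litellm-config.yaml -- a sketch; model names and keys are placeholders
model_list:
  - model_name: local-small
    litellm_params:
      model: ollama/llama3.2:3b            # any small model served by Ollama
      api_base: http://ollama:11434
  - model_name: big-gun
    litellm_params:
      model: openrouter/anthropic/claude-3.5-sonnet
      api_key: os.environ/OPENROUTER_API_KEY

litellm_settings:
  # if local-small fails, retry the same request against big-gun
  fallbacks:
    - local-small:
        - big-gun
```

From what I can tell, fallbacks only kick in on errors or timeouts, not on low-quality answers, so the “only when necessary” part would still need something smarter sitting on top.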