?
At the moment?
4090
I find my 4090 is good for DIY art; not sure about AI.
Stable Diffusion looks good. I also want to run LLMs locally. Not really sure about the GPU market, though.
My eyes are bad; where is the link to download it without registration?
Same for LLMs, never heard of them.
It's a help forum, please.
We can download them here
If you're looking to get into SD (Stable Diffusion), the most popular versions are 1.5 and SDXL (512x512 vs. 1024x1024, respectively). For SDXL you'll need about 8 GB of VRAM, though. You can get away with far less for 1.5.
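To put the ~8 GB figure in context, here is a back-of-the-envelope estimate. The ~2.6B parameter count for the SDXL UNet is an approximate public figure, not something from this thread:

```python
# Rough VRAM needed just to hold the SDXL UNet weights in half precision.
unet_params = 2.6e9        # ~2.6B parameters (approximate public figure)
bytes_per_param = 2        # fp16 stores each weight in 2 bytes
unet_gb = unet_params * bytes_per_param / 1e9
print(f"UNet weights: ~{unet_gb:.1f} GB")
# Text encoders, the VAE, and activations during sampling push the
# total toward the ~8 GB mentioned above.
```

This is only the weights; actual peak usage depends on resolution, batch size, and attention optimizations.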
However, the AI community is huge. There are plenty of other fine-tuned models, and "mini" models (LoRAs/Embeds/ControlNet) that work with checkpoints. You can find many of them on Civitai, or Hugging Face as linked above. Civitai was made solely for SD. Please keep in mind that there is plenty of NSFW content to be found; however, I believe that content is locked behind filters and account settings on Civitai.
As for UI software to work with these models, sd-webui and ComfyUI are among the most popular. If you want to use SDXL, I'd suggest Comfy. You can find workflows (example) that are made to use SDXL properly with the refiner model. You don't need to use the refiner, though… and can get away with using just the base model. Also keep in mind that the refiner doesn't really play nicely with fine-tuned models.
There are things like ollama models. A while back I was searching for a no-fuss co-pilot that I could integrate with my text editor. I know Codeium, but the problem is privacy issues.
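On the privacy angle: ollama serves a local HTTP API (port 11434 by default), so an editor integration can talk to it without anything leaving the machine. A minimal sketch, assuming a running ollama server; the model name and prompt are placeholders:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default endpoint

def build_payload(model: str, prompt: str) -> bytes:
    # stream=False asks the server to return one complete JSON object
    # instead of a stream of partial responses.
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")

def ask(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (with a running server):
#   print(ask("llama3", "Write a one-line docstring for a sort function."))
```

Everything stays on localhost, which is the whole point versus a cloud co-pilot.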
Currently I don't have any reason to use AI, but maybe in the future I would like to run these. And I would basically have a private AI on my PC.
I only know "lmaa", not LLM, so …
If I start the "sample":
ollama run llama3
Error: could not connect to ollama app, is it running?
Ok, using
ollama serve
I can't input the command above.
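`ollama serve` keeps the foreground of whatever terminal it runs in, which is why nothing else can be typed there. A second terminal (as below) works; backgrounding the server is the one-terminal alternative. The log path here is illustrative:

```shell
# Start the server in the background, sending its output to a log file
ollama serve > ~/ollama.log 2>&1 &
# Now the same terminal is free for the client command
ollama run llama3
```

On systems where ollama was installed as a service, the server may already be running and neither step is needed.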
New terminal window
ollama run llama3
pulling manifest
pulling 00e1317cbf74... 100% ▕████████████████▏ 4.7 GB
pulling 4fa551d4f938... 100% ▕████████████████▏  12 KB
pulling 8ab4849b038c... 100% ▕████████████████▏  254 B
pulling c0aac7c7f00d... 100% ▕████████████████▏  128 B
pulling db46ef36ef0b... 100% ▕████████████████▏  483 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> /?
Available Commands:
/set Set session variables
/show Show model information
/load <model> Load a session or model
/save <model> Save your current session
/bye Exit
/?, /help Help for a command
/? shortcuts Help for keyboard shortcuts
Use """ to begin a multi-line message.
>>> ollama run llama3
It seems like you're having some fun with the word "llama"!
Let me play along... 🦙
Llama run llama4?
>>> what means llama3
I think I can take a guess!
Are you saying that "llama3" is like a code or a secret message, and it doesn't actually mean anything?
Or maybe it's a playful way of representing the sound a llama might make: "Llama-llama-llama"?
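What happened above: `ollama run llama3` is a shell command. Once the `>>>` prompt appears, the chat REPL is already open, so typing the command again just sends it to the model as a message, and the model riffs on it. The intended flow:

```shell
# In the shell — downloads the model (first time) and opens the chat REPL:
ollama run llama3
# At the >>> prompt, type questions directly, e.g.:
#   >>> What does "LLM" stand for?
# Slash commands (/set, /show, /bye, ...) are handled by the REPL itself;
# /bye exits back to the shell.
```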
I am too old for this stuff, or the llama3 library is shit.
### CLI
Open the terminal and run `ollama run llama3`
I don't have much time left in life to waste on this.
Those are wonderful!
Interesting to see the Statue of Liberty has been relocated to the top of this random building for some reason.
Then when I tried to remove it, it actually centered it.
I like this one, but it has two U's instead of two A's!
GRUUDA
Cool image though.