Check out Stable Diffusion and the LLaMA model family. You can run those offline on your local hardware and won't have to worry about sharing private details with some cloud service that openly says it will look at your conversations and data and use them for training.
That sounds awesome! What kinda hardware would we need for that? Our machines are a 9900k/3070ti and a 12600k/3070; I would assume they should suffice?
Yes, that should work. Check out stable-diffusion-webui (AUTOMATIC1111) and text-generation-webui (oobabooga). And grab the models from Civitai (Stable Diffusion) and Hugging Face (LLMs like LLaMA, Vicuna, GPT-J, WizardLM, etc.).
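For a rough sense of why an 8 GB card like a 3070 or 3070 Ti can handle this: quantized LLMs shrink the weights dramatically. Here's a back-of-the-envelope sketch (weights only; the function name and numbers are my own illustration, and real usage adds KV-cache and activation overhead on top):

```python
# Rough VRAM estimate for holding a quantized LLM's weights.
# Approximation only: ignores KV-cache, activations, and framework overhead.

def weight_vram_gib(n_params_billion: float, bits_per_weight: int) -> float:
    """Approximate GiB needed just to store the quantized weights."""
    total_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1024**3

# A 7B llama-family model at 4-bit quantization fits an 8 GB card:
print(f"7B @ 4-bit:  ~{weight_vram_gib(7, 4):.1f} GiB")
# The same model at fp16 would not:
print(f"7B @ 16-bit: ~{weight_vram_gib(7, 16):.1f} GiB")
```

So with 4-bit quantization (e.g. the GGUF/GPTQ builds commonly posted on Hugging Face) a 7B model leaves comfortable headroom on those cards, while fp16 would need a much bigger GPU.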
It's impressive how well you can generate pictures with AI, making a 4-panel comic in minutes.
I really need to get around to playing with Midjourney and ChatGPT.