i have been testing this on my Framework Desktop. ComfyUI reliably triggers an amdgpu kernel fault after about 40 steps (across multiple prompts), so i spent a few hours building a workaround, here: https://github.com/comfyanonymous/ComfyUI/pull/11143
overall it's fun and impressive. decent results using LoRA. you can get good-looking results with as few as 8 inference steps, which takes 15-20 seconds on a Strix Halo. i also created a llama.cpp inference custom node for prompt enhancement, which has been helping with overall output quality.
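for the curious, the prompt-enhancement idea is roughly this: send the user's prompt to a local llama.cpp server and ask it to expand the prompt with visual detail before it hits the diffusion model. a minimal sketch below, assuming llama.cpp's HTTP server is running on its default port; the instruction wording, function names, and sampling defaults are my own illustration, not the actual custom node:

```python
# Sketch of prompt enhancement via a local llama.cpp server.
# The /completion endpoint and payload fields follow llama.cpp's
# server API; everything else (names, instruction text, defaults)
# is an assumption for illustration.
import json
import urllib.request


def build_enhancement_prompt(user_prompt: str) -> str:
    """Wrap the user's prompt in an instruction asking the LLM to expand it."""
    return (
        "Rewrite the following image prompt with more visual detail "
        "(lighting, composition, style). Reply with the rewritten prompt only.\n\n"
        f"Prompt: {user_prompt}"
    )


def enhance_prompt(
    user_prompt: str,
    url: str = "http://127.0.0.1:8080/completion",
) -> str:
    """POST to a llama.cpp server and return the generated text."""
    payload = json.dumps({
        "prompt": build_enhancement_prompt(user_prompt),
        "n_predict": 128,     # cap the rewrite length
        "temperature": 0.7,   # some variety, but stay on-topic
    }).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"].strip()
```

the enhanced string then just replaces the raw prompt in the text-encoder input of the workflow.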
I don't know about online forums, but all my IRL friends have much more balanced takes on AI than this forum does. And honestly it extends beyond this forum to the wider internet. Online, the discourse seems extremely polarized: either it's all a pyramid scheme, or it's stories about how development jobs are already defunct and AI can supervise AI, etc.