The free tier now supports connecting to local AI models running on LM Studio or Ollama, but it still doesn't actually function without an internet connection.
If you block access to the internet or to their AI API servers [1], it refuses to start a new chat. If you block access halfway through a conversation, the conversation continues just fine, so there's no technical barrier to it running offline; they just don't allow it.
Their settings page also admits that they can't guarantee the offline toggle is implemented properly, even though such a flag should be the easiest thing in the world to enforce:
>Prevents most remote calls, prioritizing local models. Despite these safeguards, rare instances of cloud usage may still occur.
So you can't even block access to the very servers that they say their faulty offline toggle would leak data to.
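If you want to confirm that a block is actually in effect, here's a minimal sketch; the hostname is a placeholder, not JetBrains' real endpoint (the actual hosts are in the doc linked at [1]):

    # Check whether an endpoint is still reachable after adding a
    # firewall or hosts-file block. The hostname is a placeholder --
    # substitute the endpoints from the doc linked at [1].
    import socket

    HOST = "ai.example.com"  # placeholder hostname

    try:
        socket.create_connection((HOST, 443), timeout=5).close()
        print(f"{HOST} is still reachable -- the block isn't working")
    except OSError as exc:
        print(f"{HOST} is blocked or unreachable: {exc}")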
I sometimes disconnect from the internet, and this morning I noticed that the previous night's chat was invisible. I could only see it again once I reconnected.
This puts me off a bit from finally trying local models. Does anyone know what kind of data is collected in those rare instances of cloud usage?
[1] https://www.jetbrains.com/help/ai-assistant/disable-ai-assis...
Hi, here are our data collection policies for the cloud-based LLMs. We've worked out agreements that heavily restrict how third-party companies can use your data, including not storing it or using it for model training: https://www.jetbrains.com/help/ai/data-collection-and-use-po...
When running inference workloads via something like llama.cpp, only one GPU is ever active at a time, so you'd have one busy GPU and four idle ones. That should make the power draw less insane in practice than you'd expect.
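If you want to see that behavior for yourself, here's a minimal sketch (assuming NVIDIA GPUs and the nvidia-ml-py package, which provides the pynvml module) that polls per-GPU utilization while a model is generating; with layer-split inference you'd expect the busy GPU to rotate rather than all of them pegging at once:

    # Poll per-GPU utilization (0-100%) twice a second while llama.cpp
    # runs in another terminal. With layer splitting, activity should
    # hop between GPUs rather than hitting all of them simultaneously.
    import time
    import pynvml

    pynvml.nvmlInit()
    handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
               for i in range(pynvml.nvmlDeviceGetCount())]

    try:
        while True:
            utils = [pynvml.nvmlDeviceGetUtilizationRates(h).gpu
                     for h in handles]
            print(" | ".join(f"GPU{i}: {u:3d}%"
                             for i, u in enumerate(utils)))
            time.sleep(0.5)
    except KeyboardInterrupt:
        pynvml.nvmlShutdown()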
I think the last time any of my computers had a case was back when I realized the pair of 900gx2 cards I was running was turning my computer into an Easy-Bake oven.
A sandbox service identifies it as Lumma Stealer, so it'll at least steal all your passwords, cookies, and cryptocurrency, and anything else after that is fair game too:
You can also use the Apple AAC encoder through ffmpeg (on a Mac only) with the argument "-c:a aac_at". It's handy because it lets you encode the audio track of videos, in addition to pure audio container formats:
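For example, a sketch driving ffmpeg from Python (the filenames and bitrate are placeholders):

    # Re-encode only the audio track of a video with Apple's
    # AudioToolbox AAC encoder (macOS only); the video stream is
    # copied untouched. Filenames and bitrate are placeholders.
    import subprocess

    subprocess.run([
        "ffmpeg",
        "-i", "input.mov",   # source file
        "-c:v", "copy",      # pass the video stream through unchanged
        "-c:a", "aac_at",    # Apple AudioToolbox AAC encoder
        "-b:a", "256k",      # target audio bitrate
        "output.mp4",
    ], check=True)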
They reply to new issues with the message, which makes their notification email look like a direct solution to the submitter's issue, so they'll probably catch out a lot of people...