thenickdude's comments | Hacker News

The free tier now supports connecting to local AI models running on LM Studio or Ollama, but it still doesn't actually function without an internet connection.

If you block access to the internet or to their AI API servers [1], it refuses to start a new chat. If you cut access halfway through a conversation, the conversation continues just fine, so there's no technical barrier to it running offline; they just don't allow it.
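
One quick way to test this yourself is a hosts-file override (the hostname below is illustrative; [1] lists the actual endpoints):

    # /etc/hosts: black-hole the AI endpoint so the IDE can't reach it
    # (example hostname, substitute the ones from [1])
    0.0.0.0 api.jetbrains.ai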

Their settings page also says that they can't even guarantee that they implemented the offline toggle properly, a flag that should be the easiest thing in the world to enforce:

>Prevents most remote calls, prioritizing local models. Despite these safeguards, rare instances of cloud usage may still occur.

So you can't even block access to the very servers that they say their faulty offline toggle would leak data to.

[1] https://www.jetbrains.com/help/ai-assistant/disable-ai-assis...


I disconnect from the internet sometimes and noticed this morning that my previous night's chat was invisible. I could only see it once I connected again.

This puts me off a bit from finally trying local models. Anyone know what kind of data is collected in those rare instances of cloud usage?


Hi, here are our data collection policies for the cloud-based LLMs. We've worked out agreements that heavily restrict how third party companies can use your data, including not storing it or using it for model training: https://www.jetbrains.com/help/ai/data-collection-and-use-po...


1x 3090 (350W power limit) already makes it feel like I'm running a fan heater under my desk; 5x would be nuts.
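
For anyone wanting to set the same cap, nvidia-smi can do it (assuming an NVIDIA card; the GPU index and wattage are just my setup):

    # Cap GPU 0 at 350 W; needs root, resets on driver reload/reboot
    sudo nvidia-smi -i 0 -pl 350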


Place and time your use right, and you'll save a bit on heating in winter and/or at night.


When running inference workloads via something like llama.cpp (with its default layer split), only one GPU is actively computing at a time, since each card holds a slice of the layers and they execute in sequence. So you'd have 1 active GPU and 4 mostly-idle ones, which should make the power draw less insane in practice than you'd expect.
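
As a rough sketch, the launch looks something like this (model path and layer count are illustrative; -sm layer is the default split mode, which assigns whole layers to each GPU):

    # Offload all layers, spread layer-by-layer across the visible GPUs
    ./llama-cli -m model.gguf -ngl 99 -sm layer -p "hello"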


I think the last time any of my computers had a case was back when I realized the pair of 900gx2 cards I was running was turning my computer into an Easy-Bake Oven.



Except 100 yen stores, which are actually 110 yen stores once the 10% consumption tax is added.


You can buy crypto on PayPal?? Our New Zealand version doesn't mention it either.


Not in New Zealand. The USA, the UK, and Luxembourg only.


Only two MIPI ports in total (DSI/CSI, for displays and cameras respectively), compared to 4 on the CM4.



A sandbox service identifies it as Lumma Stealer, so it'll at least steal all your passwords, cookies and cryptocurrency, and then anything else after that is fair game too:

https://socradar.io/malware-analysis-lummac2-stealer/


You can also use the Apple AAC encoder through ffmpeg (on a Mac only), with the argument "-c:a aac_at". Handy, as that lets you use it to encode the audio track of videos, in addition to pure audio container formats:

https://trac.ffmpeg.org/wiki/Encode/AAC#aac_at
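
For example, to re-encode just the audio track of a video with it (bitrate and filenames are illustrative):

    # Copy the video stream untouched, encode audio with Apple's AudioToolbox AAC
    ffmpeg -i input.mov -c:v copy -c:a aac_at -b:a 256k output.mp4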


Same here on my issue tracker; it seems to be GitHub-wide.

https://i.imgur.com/jJljJp8.png

They reply to new issues with the message, which makes their notification email look like a direct solution to the submitter's issue, so they'll probably catch out a lot of people...

