winrid's comments | Hacker News

I'm working on Watch.ly - a remote human-in-the-loop networking and FS sandbox for AI agents like openclaw: https://watch.ly/

Also this week launching https://dirtforever.net/ which is an open alternative to RaceNet Clubs for Dirt Rally 2, since EA is shutting that down.

I'm also expanding the SDK and plugin space for https://fastcomments.com and planning to add AI agents, because everyone expects that now :) A big challenge is building it in a way that doesn't make half the users mad. So I'm planning to mark any comments left by AI mods with a "bot" tag, and to have the system email users explaining why it made certain decisions, with an option to contest that loops in a real person. I'm hoping this provides value to site owners without angering real people. The agents could also just do non-invasive things, like notifying certain moderators when comments violate community standards in some way, or giving out awards. I'm also hoping that at some point I can run my own hardware for the LLMs so I don't have to share people's data with third parties.
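To make the flow concrete, here is a minimal sketch of what such a moderation decision record might look like. All names here (`ModerationDecision`, `contest`, the field names) are hypothetical illustrations, not the actual FastComments API:

```python
# Hypothetical sketch: every AI-mod action is tagged as coming from a bot,
# carries a plain-language reason (emailed to the user), and can be
# contested, which loops in a real person.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModerationDecision:
    comment_id: str
    action: str                            # e.g. "hidden", "award", "notify_mods"
    reason: str                            # explanation emailed to the user
    made_by_bot: bool = True               # drives the visible "bot" tag
    contested: bool = False
    human_reviewer: Optional[str] = None   # set once a real person is looped in

    def contest(self, reviewer: str) -> None:
        """User disputes the decision; escalate to a human moderator."""
        self.contested = True
        self.human_reviewer = reviewer

d = ModerationDecision("c123", "hidden", "Violates community standards: spam")
d.contest("mod_alice")
print(d.contested, d.human_reviewer)  # True mod_alice
```

The key design point is that the bot never gets the last word: the `contested` flag hands the final decision to a human.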


You also get a much better query execution engine, so if you need to run reports or analytics, they will be faster.

Queues aside, mixing these loads will probably always be a bad idea unless your database gives you really fine control over cache/buffer pools, so the tables you run analytics on can't dirty the entire cache.

The Wii UI looked fine and was very easy to understand, too.

Related - I'm working on launching Watch.ly[0] (human-in-the-loop for remotely approving network and file system access for agents) in the next week or so. It works similarly, via eBPF (although we can also fall back to NFQUEUE). It supports Linux kernels 5.x and newer[1], macOS, and Windows.

Did not know about Little Snitch, will definitely check it out.

[0] https://watch.ly/

[1] https://app.watch.ly/status/


They're terrible. A $200 SLAM-equipped vacuum (like open box, or something off eBay) will do in 15 minutes what those took an hour to do.

JimTV on YT is great too

Higher res icons probably add a couple hundred megs alone

Well, if you have a 512x512 icon uncompressed at 4 bytes per pixel, it is an even megabyte, so that makes the calculations fairly easy.
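The arithmetic, assuming the usual 32-bit RGBA (4 bytes per pixel) layout:

```python
# Back-of-the-envelope math for uncompressed icon memory.
# Assumes 4 bytes per pixel (32-bit RGBA), typical for UI icons.

def raw_icon_bytes(width: int, height: int, bytes_per_pixel: int = 4) -> int:
    """Uncompressed in-memory size of a bitmap, in bytes."""
    return width * height * bytes_per_pixel

# A single 512x512 RGBA icon is exactly 1 MiB uncompressed:
print(raw_icon_bytes(512, 512))              # 1048576 bytes = 1 MiB

# Doubling the resolution quadruples the pixel count:
print(raw_icon_bytes(1024, 1024) // 2**20)   # 4 MiB per 1024x1024 icon
```

This is also why a couple hundred high-res icons held raw, rather than compressed, can quietly account for hundreds of megabytes.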

But raw imagery is one of the few cases where you can legitimately require large amounts of RAM, because area scales quadratically with resolution. You only need that raw state in the limited number of situations where you are actually manipulating the pixel data, though. If you are dealing with images without descending to pixels, there's pretty much no reason to keep it all floating around in that form. You generally don't have more than a hundred icons onscreen, and once you start fetching data from the slowest RAM in your machine, you get pretty decent speed gains from decompressing on the fly rather than moving the uncompressed form around.


Aren't they usually all preloaded to prevent pop-in (or using some sort of heuristic)?

Anyway, I bet there's like a million little buffers all over the place in the graphics stack. It would be neat to go through all of that and see just how slim you could get it, even if it broke a bunch of stuff.


Win11 IoT runs great on 4GB, if that matters :) I have a few machines in the field running it and my Java app, usually with over a gig still free.

(and yes, I tried Linux; I don't think my users would have the patience to deal with the sleep/wake issues :) )

Probably because it reads a little like an LLM and also contains an em dash.

I’ve seen a lot of anti-AI people use ChatGPT to write about why the AI bubble is about to pop.

Ironic, isn’t it?


  > I ship code every day. I use Claude, I use GPT, I run llama locally.
An "Anti AI" person...
