Hacker News | past | comments | ask | show | jobs | submit | anabis's comments

Maybe. People have run wildly insecure phpBB and WordPress plugins, so maybe it's the same cycle again.

Those usually didn't have keys to all your data. Worst case, you lost your server, and perhaps you hosted your emails there too? Very bad, but nothing compared to the access these clawdbot instances get.

> Those usually didn't have keys to all your data.

As a former (bespoke) WP hosting provider, I'd counter that those usually did. I'm not sure I ever met a prospective "online" business customer whose build didn't. They'd put their entire business into WP installs, with plugins for everything.

Our step one was to turn WP into a static site generator and get WP itself behind a firewall and VPN, and even then single-tenant only, on isolated networks per tenant.

To be fair, that data wasn't ALL everyone's PII, until around 2008 when the BuddyPress craze was hot. That was much more difficult to keep safe.


> are running

The vocabulary has long been poisoned, but the original definition of CSAM had the necessary condition that actual children were harmed in its production. I agree that it is not worse than murder, and Claude's constitution is using the term to mean explicit material in general.

I wonder if later challenges would be cheaper if summaries of the earlier challenges and their solutions were also provided, building up difficulty.
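A minimal sketch of that idea, where `solve()` and `summarize()` are hypothetical stand-ins for LLM calls (none of these names come from a real API):

```python
# Sketch: feed summaries of earlier solved challenges into later prompts,
# so the model builds up from lesser challenges to harder ones.
# solve() and summarize() are hypothetical LLM calls, stubbed out here.

def build_prompt(challenge: str, prior_summaries: list[str]) -> str:
    """Prepend compact summaries of earlier solutions to the new challenge."""
    context = "\n".join(f"- {s}" for s in prior_summaries)
    return f"Previously solved (summaries):\n{context}\n\nNew challenge:\n{challenge}"

summaries: list[str] = []
for challenge in ["easy", "medium", "hard"]:
    prompt = build_prompt(challenge, summaries)
    # solution = solve(prompt)               # the expensive call
    # summaries.append(summarize(solution))  # cheap distilled context
    summaries.append(f"summary of {challenge}")

print(summaries)
```

The point is that each later prompt carries only short summaries, not full transcripts, which is what would make the harder challenges cheaper.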

OpenCode has 11 options for installation.

I knew curl, npm, and docker, but asked Gemini about the rest:

bun, pnpm, yarn, brew, scoop, chocolatey, paru, mise.

Although it's wonderful that people are building and creating, I also hope it calms down somewhat so I can choose from a few well-tested options in the future.


> One surprising thing that codex helped with is procrastination.

The Roomba effect is real. The AI models do all the heavy implementation work, and when one asks me to set up and execute tests, I feel obliged to get to it ASAP.


I wonder whether sparks will fly when easy decompilation of MS Office and Photoshop becomes available.


This is where I would guess the world-destroying AGI/ASI will come about: the never-ending cat-and-mouse game of ads versus blockers, driven by the profit motive. LLMs will be used by both sides in an escalating game, with humans, their attention and wallets, stuck in the middle.


> delivering 97% of the performance at 10% of the cost is a distraction.

Not if you are running RL on that model, and need to do many roll-outs.
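RL fine-tuning multiplies per-call price by thousands of sampled roll-outs per training run, so a 10x cheaper model cuts the whole sampling bill by 10x. A back-of-envelope sketch (all numbers are hypothetical, not real model prices):

```python
# Back-of-envelope: why per-rollout cost dominates in RL fine-tuning.
# All numbers below are illustrative, not real prices.

def rl_sampling_cost(steps: int, rollouts_per_step: int, cost_per_rollout: float) -> float:
    """Total inference spend for policy-gradient-style training."""
    return steps * rollouts_per_step * cost_per_rollout

big = rl_sampling_cost(steps=10_000, rollouts_per_step=64, cost_per_rollout=0.02)
small = rl_sampling_cost(steps=10_000, rollouts_per_step=64, cost_per_rollout=0.002)

print(big, small)  # the 10%-cost model is 10x cheaper over the whole run
```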


>The ideal case would be something that can be run locally, or at least on a modest/inexpensive cluster.

It's obviously valuable, so it should be coming. I expect 2 trends:

- Local GPUs/NPUs will get a for-LLM variant with 50-100 GB of VRAM that runs MXFP4 etc.

- Distillation will come for reasoning coding agents, probably one for each tech stack (LAMP, Android app, AWS, etc.) × business domain (gaming, social, finance, etc.)
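As a rough sanity check on the first trend (my arithmetic, not a vendor spec): MXFP4 stores roughly 4 bits per weight, so 50-100 GB of VRAM holds on the order of 100-200B parameters, before activations and KV cache:

```python
# Rough capacity estimate for a 4-bit (MXFP4-style) quantized model.
# Ignores activations, KV cache, and block-scale overhead.

BITS_PER_PARAM = 4

def params_that_fit(vram_gb: float) -> float:
    """Parameters (in billions) that fit in vram_gb at 4 bits per weight."""
    vram_bits = vram_gb * 8 * 1e9
    return vram_bits / BITS_PER_PARAM / 1e9

print(params_that_fit(50))   # ~100B params
print(params_that_fit(100))  # ~200B params
```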


Not complaining too loudly, because the improvement is magical, but trying to stay on top of model cards and knowing which one to use for specific cases is a bit tedious.

I think the end game is a decent local model that does 80% of the work, and that also knows when to call the cloud, and which models to call.
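A minimal sketch of that routing idea, with hypothetical `local_model`/`cloud_model` callables and a naive self-reported confidence threshold (all names are made up, not a real API):

```python
# Hypothetical router: a local model handles most requests and escalates
# to a cloud model when its own confidence is low.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Answer:
    text: str
    confidence: float  # 0.0..1.0, self-reported by the local model

def route(prompt: str,
          local_model: Callable[[str], Answer],
          cloud_model: Callable[[str], str],
          threshold: float = 0.8) -> str:
    ans = local_model(prompt)
    if ans.confidence >= threshold:
        return ans.text         # the ~80% of work handled locally
    return cloud_model(prompt)  # escalate the hard cases

# Stub models for demonstration: the local stub is "confident" on short prompts.
local = lambda p: Answer("local answer", 0.9 if len(p) < 40 else 0.3)
cloud = lambda p: "cloud answer"

print(route("short question", local, cloud))
print(route("a much longer, trickier question " * 3, local, cloud))
```

In practice the hard part is the confidence signal itself; a real router would also need to pick *which* cloud model to escalate to.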

