>One key point is to use their initial output as a draft, as a starting point that still needs to be checked and iterated, often through pair programming with the same tool.

This matches my experience. It's not useful for producing something that you wouldn't have been able to produce yourself, because you still need to verify the output itself and not just the behavior of the output when executed.

I'd peg this as the most fundamental difference in use between LLMs and deterministic compilers/transpilers/codegens.


I think AI is completely optional for software development.

There seems to be a new kind of anxiety wherein devs feel that they aren't leveraging AI to the fullest in order to make them more productive at developing software, and that since they aren't doing so, they should just give up writing software completely.

This anxiety is unfounded. The difference between using AI vs not using AI is not like using a physical spreadsheet vs Microsoft Excel. It's more like using a text editor versus an IDE. If you're happy producing software the way that you always were sans AI, you should just keep doing it that way and don't even pay mind to AI tooling.

Those guys who have super sophisticated MCP/tool use setups, and a roster of agents, and test-drive all the latest tools and plugins, and curate a personal library of carefully tuned prompts that make them the "LLM Whisperer" — they're not actually doing that much better than you at producing software. Or at least not to the extent that it renders you totally obsolete.


I think it’s exactly like going from paper spreadsheet to Excel in some very important aspects of engineering (but not all).

I really encourage you to update your priors since capabilities are very different than even 6 months ago.


I like to think I'm very much abreast of the bleeding edge because I feel this anxiety myself. At this point I can't code without LLMs because I just notice things that I could hand off to LLMs and they will do it faster and there's no reason for me to do it myself (although I still could).

But the overall gain in efficiency is still a low single-digit speedup. It's not a multi-OOM speedup like, say, doing 1000 long divisions by hand over many days versus letting a computer do them in a split second. The "wall" that is irreducible complexity was never OOMs away from how modern pre-AI software development was done.


For me the speed-up has not been in doing things I was already an expert at doing quickly with high quality. It has been in skipping the learning curve for adjacent things.

Does it make the curve easier or do you skip learning it entirely and just trust the LLM? I wouldn't do the latter.

So far I've skipped learning it entirely. For things I want to learn, I learn the old school way--maybe with an LLM as an unreliable thesaurus and/or second search engine (where I distrust its output, but read its links). For things I want to just get done, I use an LLM. It's something close to blind trust, but not completely.

For example, I've used LLMs to write ~1600 lines of Rust in the past few days. I'm having it make Ratatui bindings for Ruby. I haven't ever learned Rust, but I can read C-like languages so I kinda understand what's happening. I could tell when it needed to be modularized. I have a sneaking suspicion most of the Rust tests it's written are testing Ratatui, rather than testing its own bindings. But I've had the LLM cover the functionality in Ruby tests, a language I do know. So I've felt comfortable enough to ship it.


Will you remember it if you don't "break your teeth" on it though? At the same level as the things you're already an expert on?

I'm a big believer in desirable difficulty for learning. But I'm a big believer in reduced difficulty for non-learning-oriented getting-things-done.

The "6 months ago" line is a meme at this point.

Don't worry, these "LLM Whisperers" are too busy on social media to get anything done.

Mostly FUD from grifters and accelerationists. Coding AI isn't useful for producing things that you couldn't have produced yourself, which means you're still important. Fundamentally it's still "just" an autocomplete, whether it's snippets at your cursor or whole files inside your directory. I actually quite enjoy LLMs as a programmer. Contrast this with compilers, which produce machine code that you couldn't have possibly written yourself.

This is true: you only grow when you have nothing to do. At least, nothing that other people are telling you to do. If there's something you want to learn really badly, I highly recommend taking a sabbatical and just spending the whole year learning that topic deeply. You can get to the bleeding edge of most topics in one year of study, especially ones adjacent to what you already know. I did this in my 20s and can't wait for the stars to align to be able to do this again.

> It is a truism, too, in workplaces, that faster employees get assigned more work.

And why exactly is this desirable?


AI slop blogvert. The first example is disingenuous, btw. Everyone these days uses request IDs to be able to query all log lines emitted by a single request, usually set by the first backend service to receive the request and then propagated via headers (and also set in the server response).
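
Roughly, the pattern looks like this (a minimal Go sketch; the X-Request-ID header name, the context key, and the downstream URL are illustrative conventions, not something from the article):

    package main

    import (
        "context"
        "crypto/rand"
        "encoding/hex"
        "log"
        "net/http"
    )

    type requestIDKey struct{}

    // newRequestID returns a random 128-bit hex ID; a UUID would work just as well.
    func newRequestID() string {
        b := make([]byte, 16)
        rand.Read(b)
        return hex.EncodeToString(b)
    }

    // withRequestID reuses an incoming X-Request-ID or mints one, echoes it on
    // the response, and stashes it in the request context for logging.
    func withRequestID(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            id := r.Header.Get("X-Request-ID")
            if id == "" {
                id = newRequestID() // we are the first backend service to see this request
            }
            w.Header().Set("X-Request-ID", id) // also return it to the client
            next.ServeHTTP(w, r.WithContext(context.WithValue(r.Context(), requestIDKey{}, id)))
        })
    }

    func handle(w http.ResponseWriter, r *http.Request) {
        id, _ := r.Context().Value(requestIDKey{}).(string)
        log.Printf("request_id=%s msg=%q", id, "handling request")

        // Forward the same header on any downstream call (URL is made up).
        req, _ := http.NewRequestWithContext(r.Context(), http.MethodGet, "http://downstream.internal/thing", nil)
        req.Header.Set("X-Request-ID", id)
        // http.DefaultClient.Do(req) in real code

        w.Write([]byte("ok"))
    }

    func main() {
        http.ListenAndServe(":8080", withRequestID(http.HandlerFunc(handle)))
    }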

There isn't anything radical about his proposed solutions either. Most log storage systems can be configured with a rule that retains all logs at warning level or above, but only a sample of info and debug logs.
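
In practice the retention rule lives in whatever log storage you use, but the same idea applied at the application level would look something like this (an illustrative Go sketch; the 1% rate and level names are arbitrary):

    package main

    import (
        "log"
        "math/rand"
    )

    // sampledLog keeps every warning-or-above line and only a fraction of
    // info/debug lines. Real setups usually do this in the log storage's
    // retention rules rather than in the application; this just shows the idea.
    type sampledLog struct {
        infoSampleRate float64
    }

    func (s sampledLog) Log(level, msg string) {
        switch level {
        case "warn", "error", "fatal":
            log.Printf("level=%s msg=%q", level, msg) // always retained
        default: // "info", "debug"
            if rand.Float64() < s.infoSampleRate {
                log.Printf("level=%s sampled=true msg=%q", level, msg)
            }
        }
    }

    func main() {
        l := sampledLog{infoSampleRate: 0.01}
        l.Log("info", "cache hit")       // ~1% of these survive
        l.Log("error", "payment failed") // always survives
    }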

The "key insight" is also flawed. The reason why we log at every step is because sometimes your request never completes and it could be for 1000 reasons but you really need to know how far it got in your system. Logging only a summary at the end is happy path thinking.


>A web site logs traffic in a sort of defacto way, but no one actually reviews the traffic, and it's not sent to 3rd parties.

If data exists, it can be subpoenaed by the government.

Personally, I don't understand people's reflexive aversion to being profiled by ad companies, as if the worst thing in the world is... being served more relevant ads? In fact I love targeted ads; I often get recommended useful things that genuinely improve my life and save me hours of shopping research.

It's the government getting that data that's the problem. Because one day you might do something that pisses off someone in the government, and someone goes on a power trip and decides to ruin your life by misusing the absolute power of the state.


The private sector - banks, insurers, your e-mail provider, your cloud storage provider... - can mess with you pretty badly, too.

Adtech sells that to creeps, governments, police, insurers, banks, criminals, lawyers, and data brokers. There absolutely IS a case for defending vehemently against ads and tracking.

And that's even before malvertising comes into picture.


If a corporation has the data, it will sell it to anyone, including the government.

If a government has the data, there's at least a chance it will stay within the government.

You either

1) don’t want it stored

2) are happy for government to have it but not companies

3) are happy for everyone to have it


The government would need to know what to subpoena, and what to prioritize as well. In principle, could the government subpoena my ISP, learn I'd used a VPN, subpoena the VPN provider, learn I'd visited Wikipedia, and then subpoena Wikipedia to finally learn what articles I'd written? Yes, but in practice this will never happen. There's no interest in doing so, and it's unclear a judge would be convinced that useful information could be obtained from such a path.

On the other hand, if I'm making death threats on Facebook, there's a much more realistic path: view the threats from a public source --> subpoena Facebook for private data.

Treating the two risks as similar is madness.



I'd argue forget about Postgres completely. If you can shell out $90/month, the only database you should use is GCP Spanner (yes, this also means forgetting about any mega cloud other than GCP, unless you're fine paying ingress and egress fees).

And for small projects, SQLite, rqlite, or etcd.

My logic: either the project is important enough that data durability matters to you and sees enough scale that a loss of durability would be a major pain in the ass to fix, or the project is small enough that you can tolerate some lost committed transactions.

A non-embedded database without consensus replication has no place in 2025.

This is assuming you have relational needs. For non-relational just use the native NoSQL in your cloud, e.g. DynamoDB in AWS.


You seem insanely miscalibrated. $90 gets you a dedicated server that covers most projects' needs. Data durability isn't some magic that only cloud providers can get you.

If you can lose committed transactions when a single node's storage fails, you don't have durability. Then it comes down to whether you really care about durability.

Jonathan Greenblatt (CEO of the ADL) was on record saying "TikTok is Al Jazeera on steroids" before the ban bill picked up steam.

What's ironic is that ultimately their suspicion that TikTok was influenced by the PRC to push an anti-Israel agenda was most probably incorrect. Israel lost the narrative in the West because it simply did a lot of shitty things in the war, and everyone from homeless people to war refugees now carries an HD camcorder in their pocket. I still see shocking videos of what the IDF is doing in Gaza on a monthly basis, on Instagram of all places.


Scaremongering about the PRC was just the public-facing justification for the ban bill. It's not like they can just come out and say "The Israelis have me on tape violating children and so I need to pass this bill to let them take over the biggest social media platform they don't control already so they can face less criticism for their genocide".

But I don't want SSR, period. My backend is an HTTP API that speaks JSON. My frontend is whatever thingamajig can talk to an HTTP API in JSON. That's it. I love it this way and see no reason to blur the lines between frontend and backend.
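
Concretely, the entire backend contract can be as small as this (an illustrative Go sketch; the /api/greet route and types are made up):

    package main

    import (
        "encoding/json"
        "net/http"
    )

    // Request/response types and the /api/greet route are made up for illustration.
    type greetRequest struct {
        Name string `json:"name"`
    }

    type greetResponse struct {
        Message string `json:"message"`
    }

    // greet takes JSON in and gives JSON out; any frontend that speaks HTTP+JSON
    // (SPA, mobile app, curl) can consume it, and the backend never renders HTML.
    func greet(w http.ResponseWriter, r *http.Request) {
        var req greetRequest
        if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
            http.Error(w, `{"error":"bad request"}`, http.StatusBadRequest)
            return
        }
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(greetResponse{Message: "hello " + req.Name})
    }

    func main() {
        http.HandleFunc("/api/greet", greet)
        http.ListenAndServe(":8080", nil)
    }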
