Hacker News | Bukhmanizer's comments

I think a lot of the discourse around LLMs fails because of organizational differences.

I work in science, and I’ve recently worked on a couple of projects where they generated >20,000 LOC before even understanding what the project was supposed to be doing. All the scientists hated it, and it didn’t do anything it was supposed to. But I still felt like I was being “anti-AI” when criticizing it.

I understand that it’s way better when you deeply understand the problem and field though.


I'm starting to see this. It's starting to seem like a lot of the people making the most specious yet wild AI SDLC claims are:

* Hobbyists or people engaged in hobby and personal projects

* Startup bros; often pre-funding and pre-team

* Consultancies selling an AI SDLC that wasn't even possible 6 months ago as "the way; proven, facts!"

It's getting to the point where I'd like people to disclose the size of the team and org they are applying these processes at, LOL.


The rule of thumb I have in my head right now is that AI will benefit people with deep specialized knowledge a lot, but people with shallow knowledge or skills can’t build anything that your average SWE with a Claude code subscription can’t replicate in a few hours.

Most LinkedIn influencers, startup bros and consultancies kind of fall into the latter.


You don’t use pots or pans?

Ovens are a special occasion thing in my house because our oven is huge and I can usually do the same thing in the air fryer, which is just a small convection oven.


For me it’s that usually I can figure out if I’m going to like something way more easily if I’m just clicking through and watching samples of a show. I don’t want to be constrained to a predetermined algorithmic category.

This essay somehow sounds worse than AI slop, like ChatGPT did a line of coke before writing this out.

I use AI every day for coding. But if someone so obviously puts this little effort into work they put out into the world, I don’t think I trust them to do it properly when they’re writing code.


I wrote it myself. But the irony isn't lost on me. "Who did what" is kind of the whole point of the article. Appreciate the feedback.


FWIW I reported your post to the mods because it reads as completely AI-generated to me. My judgement was that it might have been slightly edited, but is largely verbatim LLM output.

Some tells you might wanna look at in your writing, if you truly did write it yourself without any LLM input, are these contrarian/pivoting statements. Your post is full of them, and it is IMO the most classic LLM writing tell at the moment. These are mostly variants of the "It's not X but Y" theme:

- "Not whether they've adopted every tool, but whether they're curious"

- "I still drive the intuition. The agents just execute at a speed I never could alone."

- "The model doesn't save you from bad decisions. It just helps you make them faster."

- "That foundation isn't decoration. It's the reason the AI is useful to me in the first place."

- "That's not prompting. That's engineering"

It is also telling that the reader basically can't take a breather: most of the sentences try to emphasize harder than the last one. There is no fluff, though, no getting sidetracked. It reads unnaturally; humans do not usually think like this.


The LLMs are training "us" now.

First we develop the machines, then we contort the entire social and psychic order to serve their rhythms and facilitate their operation.


FWIW I thought it read fine and enjoyed the take. As I'm exploring more AI tooling I'm asking myself some of the same questions.


Yours is maybe the first good post on managing a team of AIs that I've read. There is no spoon.

I've been shifting from being the know-it-all coder who fixes all of the problems to a middle manager of AIs over the past few months. I'm realizing that most of what I've been doing for the last 25 years of my career has largely been a waste of time, due to how the web went from being an academic pursuit to a profit-driven one. We stopped caring about how the sausage was made, and just rewarded profit under a results-driven economic model. And those results have been self-evidently disastrous for anyone who cares about process or leverage IMHO. So I ended up being a custodian cleaning up other people's mistakes, which I would never have made, rather than architecting elegant greenfield solutions.

For example, we went from HTML being a declarative markup language to something imperative. Now rather than designing websites like we were writing them in Microsoft Word and exporting them to HTML, we write C-like code directly in the build product and pretend that's as easy as WYSIWYG. We have React where we once had content management systems (CMSs). We have service-oriented architectures rather than solving scalability issues at the runtime level. I could go on... forever. And I have, in countless comments on HN.

None of that matters now, because AI handles the implementation details. Now it's about executive function to orchestrate the work. An area I'm finding that I'm exceptionally weak in, due to a lifetime of skirting burnout as I endlessly put out fires without the option to rest.

So I think the challenge now is to unlearn everything we've learned. Somehow, we must remember why we started down this road in the first place. I'm hopeful that AI will facilitate that.

Anyway, I'm sure there was a point I was making somewhere in this, but I forgot what it was. So this is more of a "you're not alone in this" comment I guess.

Edit: I remembered my point. For kids these days immersed in this tech matrix we let consume our psyche, it's hard to realize that other paradigms exist. Much easier to label thinking outside the box as slop. In the age of tweets, I mean x's or whatever the heck they are now, long-form writing looks sus! Man I feel old.


Yeah, I came here to ask if you're Vibe Writing as well ;)

I wasn't quite sure though. Sometimes it's clearly GPT, sometimes clearly Claude, and this article was like a blend.


I generally don’t have faith that many consumer boycotts will work, but boy is it ever easy to switch away from OpenAI.


I’m not sure this is a good thing...


Not needing to charge as much due to much better battery capacity and/or usage efficiency is objectively a good thing, full stop.

How that additional time is actually spent is a whole separate story, but that's entirely tangential to assessing the impact of battery life improving.


Having lived in Vancouver and NYC and now LA I think I’ve seen both sides of things, and I don’t think these things are quite as insurmountable as you think.

I don’t think public transit is ever that pleasant, but I rarely felt unsafe in Vancouver or even NYC compared to LA.

One thing that I disagree with is the timing. In a lot of cases I’d rather spend 20 minutes more on the bus than driving. It’s much easier to hop on a bus, listen to music and walk to my destination than deal with traffic or parking. Also, in cities that have properly invested in transit, there are things to do around the transit points. Grocery stores, coffee shops, general stores etc, so I’m often doing 2-3 things in a single trip. Whereas in LA, each of those things is a separate car journey away for me, so overall things are less efficient.


I'm from the East Coast. I lived a bit in Vancouver. The bus is the place to be. Everybody from all walks of life is on the bus.

I went to Seattle for one weekend and experienced the sad view of only the poorest people taking the bus. It was enlightening and changed my outlook on life.


As with most contentious arguments in SWE, it really depends. I used to do a lot of debugging of random (i.e. not written by me) bioinformatics tools, and being able to just fire up gdb and get an immediate landscape of the issues in the program was invaluable.

At times when I was more familiar with the program, or there were fewer variables to track, it was less helpful.


I agree that’s kind of what should happen. What seems to have happened is that people have figured out it’s easier to game the system than produce more complicated or technical projects.


You’re right, best to reserve such observations for small nations like China.

