Hacker News

And the irony is that those of us using AI to amplify our output to produce at exponential speeds feel like your comments are gaslighting us instead! I've never seen such an outright divide in practitioners of a technology in terms of perception and outcomes. I got into LLMs super early, using them daily since 2022, so that may have bolstered the way I've augmented my approaches and tooling. Now almost everything I build uses AI at runtime to generate better tools for my AI to use at runtime.
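For readers wondering what "AI generating tools for AI at runtime" could even look like mechanically, here is a minimal, hedged sketch: an agent asks a model for a helper function, `exec()`s the returned source, and registers it in its own tool registry. The model call is stubbed out below; the function names and registry are illustrative assumptions, not the commenter's actual setup.

```python
# Sketch of runtime tool generation: the "LLM" returns source code,
# which the agent compiles into a callable and registers for later use.
TOOL_REGISTRY = {}

def llm_generate_code(task: str) -> str:
    # Stub standing in for a real LLM API call (assumption).
    return (
        "def word_count(text):\n"
        "    return len(text.split())\n"
    )

def register_generated_tool(task: str, name: str) -> None:
    namespace = {}
    # Trust boundary: in a real system this would run sandboxed.
    exec(llm_generate_code(task), namespace)
    TOOL_REGISTRY[name] = namespace[name]

register_generated_tool("count words in a string", "word_count")
print(TOOL_REGISTRY["word_count"]("one two three"))  # → 3
```

Whether this pattern actually yields "exponential" output is exactly what the rest of the thread disputes; the sketch only shows the mechanism being described.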


Can we use this micro-moment to try to bridge the gap? I was sold on cocaine, but all I've gotten so far is corn starch. Is there a definitive tutorial on this? I mean, look, I'm proud of my work, but if I can drop $200-1000/month for the "blue stuff" I'm not going to turn my nose up at it.

I've been pretty deep into LLMs myself since 2023; I've built several small models from scratch and SFT-trained many more, so it's not like I'm ignorant of how this works. I'm just not getting the workflow results.


It's going to depend heavily on what you're doing. If you're doing common tasks in popular languages, and not using cutting-edge library features, the tools are pretty good at automating a large amount of the code production. Just make sure the context/instruction file (e.g. claude.md) and the codebase are set up to properly constrain the bot, and you can get away with a lot.
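To make "set up the context file to constrain the bot" concrete: the usual advice is to spell out the stack and a few hard rules up front so the model doesn't improvise. A minimal sketch of what a claude.md might contain; the stack, paths, and rules below are made-up assumptions for illustration, not a canonical template:

```markdown
# Project instructions

## Stack
- Python 3.12, FastAPI, SQLAlchemy 2.x

## Rules
- Never add a new dependency without asking first.
- All database access goes through `app/repositories/`; no raw SQL in handlers.
- Run `pytest -q` after every change and fix failures before finishing.
- Match the existing code style; do not reformat files you didn't touch.
```

The point is less the specific rules than that explicit, checkable constraints in context reduce how far the model can wander.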

If you're not doing tasks that are statistically common in the training data, however, you're not going to have a great experience. That said, very little in software is "novel" anymore, so you might be surprised.


Just because it's not strictly novel doesn't mean the LLM is outputting the right thing.

We used to caution people not to copy and paste from Stack Overflow without understanding the snippets; now we have people generating "vibe code" from nothing with AI, never reading it once, and pushing it straight to master?

It feels like an insane fever dream


> amplify our output to produce at exponential speeds

I think I blacked out when my brain tried to process this phrase.

Nothing personal, but I automatically discount all claims like this (something something extraordinary claims require extraordinary evidence…).


Maybe I need to watch some videos on YouTube to understand what other people are seeing.

I couldn't even get Zed hooked up to GitHub Copilot. I use ChatGPT for snippets and search, and it's okay, but I don't want to bother checking its work at a large scale.


> And the irony is that those of us using AI to amplify our output

I'm guessing you don't care much about quality, since you're focusing on output volume.



