The problem is, once you start down that path of the AI telling you what you want to hear, it disables your normal critical reasoning. It's the "yes man" problem: you end up even less able to solve the problem effectively than with no information at all. I really enjoy LLMs, but it is a bit of a trap.




I hit that too. If I asked it about the O2 sensor, it was the O2 sensor. IIRC I had to ask it what PIDs to monitor, give all of that to it at once, then try a few experiments it suggested. It also helped that it told me how to self-confirm by watching that the fuel trim didn't go too high, which was also my cue to shut off the engine if it did.
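For anyone curious, this is roughly the kind of monitoring loop it had me doing, sketched here with the python-OBD library. The 20% trim cutoff is just my own rough safety margin, not an official spec, and your adapter/port setup may differ:

    import obd

    # Connect to the car's OBD-II adapter (auto-detects the port).
    connection = obd.OBD()

    # PIDs worth watching for a suspected O2 sensor / mixture issue.
    pids = [
        obd.commands.SHORT_FUEL_TRIM_1,
        obd.commands.LONG_FUEL_TRIM_1,
        obd.commands.O2_B1S1,   # upstream O2 sensor voltage, bank 1
    ]

    TRIM_LIMIT = 20.0  # percent; my own rough cutoff, not a manufacturer figure

    for cmd in pids:
        response = connection.query(cmd)
        if response.is_null():
            continue  # PID not supported or no data yet
        print(cmd.name, response.value)
        # Fuel trim comes back as a percentage; treat a big correction as the
        # signal to stop and shut the engine off rather than keep experimenting.
        if "FUEL_TRIM" in cmd.name and abs(response.value.magnitude) > TRIM_LIMIT:
            print("Fuel trim running high -- stopping here to be safe.")
            break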

At no point was I just going to commit to some irreversible decision it suggested without confirming it myself or elsewhere, like blindly replacing a part. At the same time, it really helped me because I'm too much of a noob to even know what to Google (every term above was new to me).



