
You didn't list the most important reason:

- Assume LLMs will keep getting more intelligent and cheaper, and that the cost of switching to a newer model is essentially zero. How does investing in a custom heuristic compare in that future?



That's kind of what I was getting at in point 2, about "new use cases" opening up, but yeah, you stated it more directly. It's hard to argue with. With a heuristic-driven approach we know we'll need expertise, dev hours, etc. to improve the feature. With LLMs, some lab out there is basically doing all the hard work for us: all we need to do is sit back, wait a year or two, and then change one line of code, say model="gpt-4o" to model="gpt-5o" or whatever.
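
To make the "one line of code" point concrete, here's a minimal sketch assuming the OpenAI Python SDK and a hypothetical summarize() helper; the model name constant is the only thing that would change when a newer model ships ("gpt-5o" here is just a placeholder, not a real model):

  # Minimal sketch using the OpenAI Python SDK (assumed dependency).
  # Upgrading to a future model is a one-line change to MODEL.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  MODEL = "gpt-4o"   # swap to e.g. "gpt-5o" (placeholder name) when available

  def summarize(text: str) -> str:
      """Hypothetical feature: ask the model for a one-sentence summary."""
      response = client.chat.completions.create(
          model=MODEL,
          messages=[
              {"role": "system", "content": "Summarize the user's text in one sentence."},
              {"role": "user", "content": text},
          ],
      )
      return response.choices[0].message.content

Everything else in the feature (prompts, plumbing, evaluation) stays put; the heuristic equivalent would need real engineering work to see a comparable quality jump.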



