Hacker News

>It used to rightfully be something we looked forward to

Science fiction has always been mixed. In Star Trek, the cool technology and AGI-like computer are accompanied by a post-scarcity society where fundamental needs are taken care of. There are countless other stories where technology and AI are used as tools to enrich some at the expense of others.

>We let pearl-clutching loom smashers hijack the narrative to the point where a computer making a drawing based on natural language is "slop" and you're a bad person if you like it

I don't strongly hold one opinion or the other, but I think the root of people's backlash is that this technology jeopardizes their livelihood. Not in some abstract "now the beauty and humanity of art is lost" sort of way, but much more concretely: because of LLM adoption (or at least hype), they are out of a job and cannot make money, which hurts their quality of life far more than access to LLMs improves it. Then those people see the "easy money" pouring into this bubble, and it would be hard not to get demoralized. You can claim that people just need to find a different job, but that ignores the reality that over the past century the skill floor has steadily risen and the ladder been pulled up behind it; and perhaps worse, reaching for that higher bar still leaves one "treading water" without any commensurate growth in earnings.





> In Star Trek the cool technology and AGI like computer is accompanied by a post-scarcity society where fundamental needs are taken care of.

The Star Trek computer doesn't even attempt to show AGI, and Commander Data is the exception, not the rule. Star Trek has largely been anti-AGI for its entire run, for a variety of reasons: dehumanization, AIs becoming unsafe or unstable, etc.


I think you're confusing AGI with ASI or sentience. The Enterprise's computer clearly meets the definition of AGI, in that it can do basically any task the humans require of it (limited only by data, which the humans need to go out and gather). Consider especially that it also runs the holodeck.

Unlike modern LLMs, it also handles uncertainty correctly, stating when there is insufficient information. However, the writers seem to have made a deliberate effort to restrict the extent of its use for planning and command (no "long-running agentic tasks", in modern parlance), keeping a human in the loop. This is likely because, as you mentioned, there is a theme of "losing humanity when you entrust too much to the machine".



