Hacker News | sheeshe's comments

Well, that’s because if you look at the structure of the brain, there’s a lot more going on than what goes on inside an LLM.

It’s the same reason why great ideas almost appear to come randomly - something is happening in the background. Underneath the skin.


In essence it is a thing that is actually prompting your own brain… seems counterintuitive, but that’s how I believe this technology should be used.


This technology (which I had a small part in inventing) was not based on intelligently navigating the information space; it’s fundamentally based on forecasting your own thoughts by weighting your pre-linguistic vectors and feeding them back to you. Attention layers, in conjunction with RLHF, later allowed that to be grouped at a higher order and to scan a wider beam space, rewarding higher-complexity answers.
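To make the “wider beam space” part concrete, here is a toy sketch of beam-search decoding over next-token probabilities. Everything in it is invented for illustration (the bigram table, the token names, the beam_search helper); a real LLM scores next tokens with a neural network rather than a lookup table, but the search over candidate continuations works the same way.

    import math

    # Hypothetical next-token probabilities, keyed by the previous token.
    # Invented for illustration only.
    TOY_MODEL = {
        "<s>":     {"the": 0.6, "a": 0.4},
        "the":     {"cat": 0.5, "idea": 0.5},
        "a":       {"cat": 0.7, "thought": 0.3},
        "cat":     {"sat": 0.9, "</s>": 0.1},
        "idea":    {"emerged": 0.8, "</s>": 0.2},
        "thought": {"emerged": 0.6, "</s>": 0.4},
        "sat":     {"</s>": 1.0},
        "emerged": {"</s>": 1.0},
    }

    def beam_search(beam_width=2, max_len=6):
        # Each hypothesis is (tokens, cumulative log-probability).
        beams = [(["<s>"], 0.0)]
        for _ in range(max_len):
            candidates = []
            for tokens, score in beams:
                last = tokens[-1]
                if last == "</s>":  # finished sequence, carry it forward
                    candidates.append((tokens, score))
                    continue
                for nxt, p in TOY_MODEL[last].items():
                    candidates.append((tokens + [nxt], score + math.log(p)))
            # A wider beam keeps more partial continuations alive per step.
            beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        return beams

    for tokens, score in beam_search(beam_width=3):
        print(" ".join(tokens), round(score, 3))

Widening beam_width is the knob that lets the decoder hold more candidate continuations before committing, which is roughly what “scan a wider beam space” gestures at above.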

When trained on chatting (a reflection system on your own thoughts) it mostly just uses a false mental model to pretend to be a separate intelligence.

Thus the term stochastic parrot (which for many of us is actually pretty useful).


Thanks for your input - great to hear from someone involved that this is the direction of travel.

I remain highly skeptical of the idea that it will replace anyone; the biggest danger I see is people falling for the illusion that the thing is intrinsically smart when it’s not. It can be highly useful in the hands of disciplined people who know a particular area well, and it will no doubt augment their productivity. But the way we humans come up with ideas is highly complex. Personally, my ideas come out of nowhere and are mostly derived from intuition that can only be expressed in logical statements ex post.


Is intuition really that different from an LLM having little knowledge about something? It's just responding with the most likely sequence of tokens using the information most adjacent to the topic... just like your intuition.


With all due respect, I’m not even going to give a proper response to this… intuition that yields great ideas is based on deep understanding. LLMs exhibit no such thing.

These comparisons are becoming really annoying to read.


I think you need to first understand what the word intuition means, before writing such a condescending reply.


Meant to say prompting*


It’s not that.

It’s ego and desperation for one last hurrah. Disney has a history of being a corporate governance nightmare - which Iger ironically contributed toward fixing. He’s undoing all that now.


Lmao your user name.

I like the phrase “vulture capital”


Which is no surprise, since data for web development exists in large amounts on the web that the models feed off.


Ok, so why aren’t mass layoffs ensuing right now?


Because, from my experience using Codex in a decently complex C++ environment at work, it works REALLY well when it has things to copy. Refactoring, documentation, code review, etc. all work great. But those things only help actual humans, and they also take time. I estimate that in a good case I save ~50% of the time; in a bad case it's negative and costs time.

But what I generally found is that it's not that great at writing new code. Obviously an LLM can't think, and you notice that quite quickly: it doesn't create abstractions, use abstractions, or try to find general solutions to problems.
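For what it's worth, here is a tiny made-up illustration of that gap, with hypothetical file-loading functions (none of this is from the parent comment): the first two functions are the copy-the-existing-shape style that pattern completion produces readily, the rest is the shared abstraction a human would usually extract instead.

    # Hypothetical example; names and file format are invented for illustration.

    # Pattern-copying: repeat the existing shape with the names swapped.
    def load_users(path):
        with open(path) as f:
            return [line.strip().split(",") for line in f if line.strip()]

    def load_orders(path):
        with open(path) as f:
            return [line.strip().split(",") for line in f if line.strip()]

    # The generalisation a human would usually reach for: one shared helper,
    # with the entity-specific loaders reduced to thin wrappers.
    def load_rows(path):
        with open(path) as f:
            return [line.strip().split(",") for line in f if line.strip()]

    def load_users_v2(path):
        return load_rows(path)

    def load_orders_v2(path):
        return load_rows(path)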

People who get replaced by Codex are those who do repetitive tasks in a well-understood field. For example, making basic websites, very simple CRUD applications, etc.

I think it's also not layoffs, but rather that companies will hire fewer freelancers or people to manage small IT projects.

