Yes, you may be. But you still have an internal world model - built through conditioning or otherwise - that you're playing off against.
An LLM doesn't have that. It's a very impressive parlour trick (and of course a lot more), but its use is hence limited (albeit massive) to that.
Chaining and context help resolve that to some extent, but only to a limited extent.
That's the argument, anyway. It doesn't mean it's not incredibly impressive, but comparing it to human self-awareness, however small, isn't a fair comparison.
It's next-token prediction, which is why it does classification so well.
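To make that concrete, here's a minimal sketch (not any real API) of how classification falls out of next-token prediction: frame the task as a prompt and ask which label token the model rates as most likely to come next. `nextTokenLogProb` is a hypothetical stand-in for whatever scoring an actual model backend exposes.

```js
// Hypothetical sketch: classification framed as next-token prediction.
// `nextTokenLogProb(prompt, token)` is an assumed stand-in for a model's
// score of how likely `token` is to follow `prompt`.
function classifySentiment(review, nextTokenLogProb) {
  const prompt = `Review: "${review}"\nSentiment:`;
  const labels = [" positive", " negative", " neutral"];
  // The predicted class is simply whichever label the model thinks comes next.
  return labels.reduce((best, label) =>
    nextTokenLogProb(prompt, label) > nextTokenLogProb(prompt, best) ? label : best
  ).trim();
}
```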
At this point, does it even matter, or does Nvidia just run away with the hype train, the valuation and the momentum? Tough to catch. Apple is the only real counter-player.
- My view on GPTs is that, rather than worrying about AGI for now, they're the greatest lateral-thinking machines ever made
- I asked ChatGPT to create 50 dancing dots in vanilla JS and HTML
- I then asked it to interpret Dylan, Vivaldi and Hip-hop
- Creating methods to control the dancing dots for each one
- The output is some bizarre interpretation of each of the styles; sure, it's random, but it's also beautiful
- Amazingly, through a quirk, it can also combine each of the styles.
Feel free to try:
- Click start
- Each of the buttons acts as on/off
- So if you click each one on, you get all three styles combined
I know the argument about creativity rages on, but it feels as if, by definition, the system is acting laterally and therefore creatively. There's a lot wrong with that last statement, but the exploration continues.
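For anyone curious what that setup roughly looks like, here's a minimal sketch in vanilla JS and HTML in the same spirit (not ChatGPT's actual output): 50 dots on a canvas, a Start button, and three toggle buttons whose motion offsets are simply summed, which is why the styles combine.

```html
<!-- Minimal sketch of the setup described above: 50 dots, three on/off style
     buttons, and the active styles' motions summed so they can be combined. -->
<!DOCTYPE html>
<html>
<body>
  <button id="start">Start</button>
  <button data-style="dylan">Dylan</button>
  <button data-style="vivaldi">Vivaldi</button>
  <button data-style="hiphop">Hip-hop</button>
  <canvas id="stage" width="600" height="400"></canvas>
  <script>
    const canvas = document.getElementById('stage');
    const ctx = canvas.getContext('2d');
    const dots = Array.from({ length: 50 }, () => ({
      x: Math.random() * canvas.width,
      y: Math.random() * canvas.height,
      phase: Math.random() * Math.PI * 2
    }));
    const active = { dylan: false, vivaldi: false, hiphop: false };
    let running = false;

    // Each "style" is just a different motion function; these are guesses at
    // what an interpretation of each genre might look like.
    const styles = {
      // loose, wandering drift
      dylan:   (d, t) => ({ dx: Math.sin(t / 800 + d.phase) * 1.5, dy: Math.cos(t / 1100 + d.phase) }),
      // fast, ordered oscillation
      vivaldi: (d, t) => ({ dx: Math.sin(t / 200 + d.phase) * 2, dy: Math.sin(t / 200 + d.phase * 2) * 2 }),
      // sharp, on-the-beat bounce
      hiphop:  (d, t) => ({ dx: 0, dy: (Math.floor(t / 500) % 2 ? 3 : -3) })
    };

    // Each style button acts as an on/off toggle.
    document.querySelectorAll('[data-style]').forEach(btn =>
      btn.addEventListener('click', () => { active[btn.dataset.style] = !active[btn.dataset.style]; })
    );
    document.getElementById('start').addEventListener('click', () => { running = true; });

    function frame(t) {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      for (const d of dots) {
        if (running) {
          // Sum the offsets of every active style so styles combine.
          for (const name in styles) {
            if (!active[name]) continue;
            const { dx, dy } = styles[name](d, t);
            d.x = (d.x + dx + canvas.width) % canvas.width;
            d.y = (d.y + dy + canvas.height) % canvas.height;
          }
        }
        ctx.beginPath();
        ctx.arc(d.x, d.y, 4, 0, Math.PI * 2);
        ctx.fill();
      }
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);
  </script>
</body>
</html>
```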