Hacker News

It’s not. People in academia are not using this term anymore because it’s utterly clear it can output knowledge that doesn’t exist.


> output knowledge that doesn’t exist

So can parrots. They'll gladly generate neologisms. I'm interested in how academics define "knowledge that doesn't exist".


Of course parrots can output knowledge that doesn’t exist. Stochastic parrot is a different term.

> "knowledge that doesn't exist".

I’m the one who used that phrase, so there’s no official definition, but you already know that.

Basically, it’s clear to everyone, academics included, that LLMs can rudimentarily do what humans do: compose knowledge and work things out to form new knowledge that doesn’t exist.


"Stochastic Parrot" implies that the thing producing the noise doesn't understand it. I'm not sure how that's currently disproven. Even acting as an agent, it is my understanding that it's just acting on its own messages in exactly the same way it'd act on one of ours.

> That means composing knowledge and working things out to form new knowledge that doesn’t exist.

That's not a terribly useful criterion, though. A Markov chain can produce novel sentences; hell, a bingo machine can if you write words on the balls. "Knowledge" is kind of meaningless but also seemingly profound.
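The Markov-chain point is easy to demonstrate concretely. Here is a minimal sketch (the tiny corpus is invented for illustration): a word-level bigram chain trained on only three sentences can still assemble sentences that appear in none of them.

```python
import random

# Toy training corpus (invented for illustration).
corpus = [
    "the parrot repeats the phrase",
    "the model repeats the pattern",
    "the parrot learns the pattern",
]

# Build a bigram transition table: word -> list of observed next words.
transitions = {}
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        transitions.setdefault(a, []).append(b)

def generate(start="the", max_words=6, seed=0):
    """Walk the chain from `start`, picking a random observed successor."""
    random.seed(seed)
    out = [start]
    while len(out) < max_words and out[-1] in transitions:
        out.append(random.choice(transitions[out[-1]]))
    return " ".join(out)
```

Sampling with a few different seeds quickly yields sentences such as "the model repeats the phrase" that occur nowhere in the corpus: novel output, with no understanding anywhere in sight.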


> That's not a terribly useful criterion, though. A Markov chain can produce novel sentences; hell, a bingo machine can if you write words on the balls. "Knowledge" is kind of meaningless but also seemingly profound.

I don’t know why you came up with this pedantic example. Perhaps you’re autistic? If so, then I apologize for assuming you aren’t.

Everyone knows that we are talking about more than just knowledge consisting of a random string of letters. We are talking about actual, useful knowledge.


I don't know if you appreciate how this argument has gone from what "everyone academic" understands about LLMs to things that "everyone knows", and now you're othering me because I don't find your increasingly tenuous arguments tenable.

A certain amount of pedantry is required for these discussions; otherwise we're left in a place where we can't define "actual useful knowledge". At this moment I assume you're defining "actual useful knowledge" as simply anything you find convincing, which is a criterion that could be easily gamed. How are you determining that knowledge is actually novel?


I’m not willing to go into that level of “pedantry”. I like to assume I’m talking with people who have the relevant context, so we don’t have to go into stupid detail and pretend that random data generated by a freaking random number generator constitutes knowledge.


Are you unwilling or unable? This really feels like a vibes-based definition. Is "knowledge" like pornography, you know it when you see it? How does it differ from information? How do we know it's novel? The term implies that the AI "knows" something, which is a big claim to make, and I don't think it can in any way be considered self-evident.

I get it, you like AI, so much so that you're willing to throw out personal attacks to defend it, but it's important to be critical or it's easy to be suckered.


Totally able. But totally unwilling. Let’s be clear. I’m not engaging with the rest of your pedantry.

I don’t love AI. I hate it. But I’m not deluded about what it is.



