
A reminder that as long as it demands training and reinforcement it's almost certainly low on induction and production of new things.

Very artificial. Not very intelligent.



Humans need training and reinforcement.


Yes, undeniably true. But what they acquire is inductive reasoning skills, and the production of new things.


Will we see this in a publicly facing AI? Creativity relies on avoiding existing truths, and embracing/testing “hallucinations”, something that’s being actively stomped out for “safety”.


That's one take on 'creativity', but I don't think it's the only one.

I have been an AI skeptic these last 40 years. Personally, I don't think we will see it in my lifetime, if ever. I think what we have now is at best a predictive model which can expose inferences and aid people, humans, to reach inductive reasoning outcomes. It's a decision-support mechanism.

The false data is a huge problem. It's very easy to make disastrous decisions on apparently reasonable inductive reasoning, and that's what I think GPT does at BEST. At worst, more normally? It's "regurgitating".

AGI is not in this. Sorry if that's a downer, but I don't think even OpenAI thinks there is any evidence of a pathway to AGI from what they're doing.

They are pretty overtly riding the hype wave.


Do you have examples of human-created "new things" that aren't essentially novel combinations of old things? Because I come up blank. And this current crop of AI generators are very good at combining old things in novel ways.

I do agree with your general point that these generators aren't really "intelligent", however. Will have to ponder if I agree about the induction bit.


RSA, and the GCHQ equivalent from the 1970s, were really remarkably new. Crypto systems before then were symmetric. Inventing a form of encryption which was asymmetric was new.

One-time cipher streams were new.

The invention of packet-switched networks (Louis Pouzin, Len Kleinrock) was new. It wasn't inherent in prior methods; it's an inductive consequence of time-division multiplexing, but with addressing and routing.

There is no good analogue in nature to either the internal combustion engine or the steam engine: the conversion of linear force to rotary force and vice versa was a really novel thing. I would argue the Wankel engine, as a departure from pistons, was pretty good reasoning.

But in the same way Kurt Vonnegut says there is a small fixed number of plot models for a novel, almost all late-stage human endeavour is derivative. It's in the nature of the beast. To claim GPT is therefore 'meeting the mark' because the bulk of human existence involves less discovery and more inductive reasoning simply comes back to my first point: where's the evidence of GPT doing inductive reasoning with discrimination, beyond the syllogistic?


Water wheels have existed for thousands of years. They convert linear force generated by flowing water to rotary force.


That's a fascinating list and far beyond my capacity to argue, so thanks for that.

> To claim GPT is therefore 'meeting the mark'

Pretty sure the vast majority of people who are attributing some kind of personhood to GPT aren't doing so from an analytical perspective, but because the conversational generation exceeds whatever inbuilt human-detection threshold they have. Asking for evidence of genuine inductive reasoning won't make a dent in those feels. The systems involved lack any reasoning, deductive or inductive. It's all statistics.

The rest of the positive camp is claiming that "the mark" is the production of useful work, I think. At most, this is seen as a step towards AGI, not the finish line.


Well said. I think we are in agreement that most of the "it's alive" talk is feeling-based hype, because people see a step function (I want to avoid saying a quantum leap) in quality compared to, e.g., Markov chain games on a corpus.

I would dispute that this is a step toward AGI. I agree that's what proponents are saying; I just think they're wrong. We are no closer to understanding what underpins intelligence, and this statistical model isn't informing us of the basis of it, or of a purported AGI in particular.



