Also, in 2009 someone suggested re-implementing Eurisko[1], and Yudkowsky cautioned against it:
> This is a road that does not lead to Friendly AI, only to AGI. I doubt this has anything to do with Lenat's motives - but I'm glad the source code isn't published and I don't think you'd be doing a service to the human species by trying to reimplement it.
To my mind -- and maybe this is just the benefit of hindsight -- this seems way overcautious on Yudkowsky's part.
Machinery can be a lot simpler than biology. Birds are incredibly complex systems: wing structure, musculature, feathers, etc. An airplane can be a vaguely wing-shaped piece of metal and a pulse jet. It doesn’t seem super implausible that there is some algorithm that is to human consciousness what a pulse jet with wings is to a bird. Maybe LLMs are that, but maybe they’re far more than is really needed because we don’t yet know what we are doing.
I would bet against it being possible to implement consciousness on a PDP, but I wouldn’t be very confident about it.
[1]: https://www.lesswrong.com/posts/t47TeAbBYxYgqDGQT/let-s-reim...