Another instance of confused language. One cannot converse with a program, because conversation implies some notion of semantics and understanding. Software, represented as a binary sequence, has no semantics other than encoding some numbers and operations on those numbers. So at best, what we can say is that this post is about some arithmetic that looks like a linguistic interaction. Any semantics attributed to the arithmetic performed by GPT-3 is simply self-deception.
GPT-3 and all language models are just search engines that respond to queries by uncompressing the content encoded in the matrices of the model.
His book "The Act of Creation" is also very good. Somewhat related to prison experiences, Jean Leray invented sheaves while in a prison camp:
> Jean Leray (November 7, 1906–November 10, 1998) was confined to an officers’ prison camp (“Oflag”) in Austria for the whole of World War II. There he took up algebraic topology, and the result was a spectacular flowering of highly original ideas, ideas which have, through the usual metamorphism of history, shaped the course of mathematics in the sixty years since then.
It seems that solitude is generally conducive to creative activity, at least for those who are already somewhat inclined towards it.
When freedom is drastically restricted, and you carry a serious fear for your life each day, certain kinds of inner climbing become vivid and accessible. Be kind with this knowledge.
One cannot be loyal to an entity that has no conception of what loyalty means. Language like this is why people are constantly confused about where their loyalties should actually lie. One can be loyal to people; one cannot be loyal to a workplace.
It's definitely in a gray area, because these AI models are essentially compression engines: they encode the code samples/data into the weights of the matrices that represent the ML model and then "uncompress" them to serve queries. I think it would be easy to argue that a compressed data set, no matter how illegible, needs to conform to the same license as the data set it encodes, but I don't think any lawyer is smart enough to make that case. So for now this remains a very convenient loophole for large companies with enough compute: encode whatever data/code they want to use into some neural network, mangle it beyond recognition, sidestep the licensing restrictions, and sell the result as AI.
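To make the "data set encoded into weights, uncompressed to serve queries" analogy concrete, here is a toy sketch of my own (nothing to do with Copilot's actual internals): a classical Hopfield-style associative memory where the entire "data set" lives inside a weight matrix and a query is answered by reconstructing a stored pattern from a partial cue.

```python
import numpy as np

# Toy associative memory: the "training data" is baked into a weight matrix,
# and a query is served by iterating the network until a stored pattern is
# reconstructed. Purely illustrative; real language models are vastly larger
# and the encoding is far more tangled.

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))        # the "data set": 3 bipolar patterns

# Hebbian storage: sum of outer products (classical Hopfield learning rule)
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Serve a 'query': start from a corrupted cue and settle onto a stored pattern."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Query with a noisy version of the first pattern (~20% of bits flipped)
cue = patterns[0] * np.where(rng.random(64) < 0.2, -1, 1)
print(np.array_equal(recall(cue), patterns[0]))      # usually True
```

The point of the toy: nothing resembling the original patterns is visible in W, yet the whole data set is recoverable from it on demand.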
For why these things are essentially mangled compression engines, take a look at "Hopfield Networks is All You Need": https://arxiv.org/abs/2008.02217. It shows that modern transformer networks (which is what Copilot is built on) can be viewed as a bunch of Hopfield networks, which are essentially memory modules connected in some complicated topology to encode a data set.
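A rough sketch of the connection the paper draws (my own toy code, not taken from the paper): one update step of a modern continuous Hopfield network has exactly the shape of transformer attention, with the stored patterns acting as keys/values and the state acting as the query.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Stored patterns (columns of X) play the role of keys/values; xi is the query/state.
d, n = 16, 5                                   # pattern dimension, number of stored patterns
rng = np.random.default_rng(1)
X = rng.standard_normal((d, n))
xi = X[:, 2] + 0.1 * rng.standard_normal(d)    # noisy cue for the third pattern
beta = 8.0                                     # inverse temperature

# One step of the modern Hopfield update -- same form as attention(Q, K, V):
# softmax(beta * K^T q) weights the stored patterns, which are then mixed as values.
xi_new = X @ softmax(beta * X.T @ xi)

print(np.argmax(X.T @ xi_new))                 # typically 2: the update retrieves the cued pattern
```

In other words, an attention layer can be read as one retrieval step of an associative memory over whatever it has stored.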
This is the inevitable endgame of financialization. People think financial instruments have meaning, when the reality is that they're just a bunch of numbers in databases and fluffy narratives about GDP and growth.
The marketing hype has overtaken reality. The current AI technology is not ready for real-world deployment, and it will not be ready in the foreseeable future: https://rodneybrooks.com/my-dated-predictions/.
Probably never will. ML is not a golden hammer. It's an analog information encoder and decoder. It's excellent at parsing images, but it can't make decisions; that's not how thought processes work. On AI Day, when they explained that they were training their car by throwing all sorts of random garbage scenarios at it, because if they didn't it would run stuff over, I facepalmed. Like, they spend all this time avoiding the actual solution because they want an AI breakthrough... GUYS, you already have an amazing 3D parser, just write a fucking if-else to stop when something's ahead. You don't need to add a moose in flip-flops to your regression testing sets...
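A deliberately naive sketch of the "just write an if-else" point (the function and class names here are made up for illustration, not anything from Tesla's actual stack): the perception net hands you obstacles and distances, and a plain rule decides whether to brake.

```python
# Naive sketch only. `Obstacle` stands in for whatever the perception stack outputs.
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float      # distance ahead along the planned path
    in_path: bool          # does it intersect our lane/trajectory?

def should_brake(obstacles: list[Obstacle], speed_mps: float) -> bool:
    # Very rough stopping distance: 1 s reaction + braking at ~6 m/s^2, plus a margin.
    stopping_m = speed_mps * 1.0 + (speed_mps ** 2) / (2 * 6.0) + 2.0
    return any(o.in_path and o.distance_m < stopping_m for o in obstacles)

# The point: no matter how exotic the object is, if the 3D parser says something
# is in the path and closer than the stopping distance, stop.
print(should_brake([Obstacle(distance_m=15.0, in_path=True)], speed_mps=14.0))  # True
```

Obviously real planning stacks are more involved than this, but that's the shape of the rule I mean.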
It’s one thing to sit there making predictions, saying all along “See, I said so”. It’s another thing to go out there and fail and learn and move the needle step by step.
There’s Churchill. There’s Semmelweis. There’s JFK. There are the pioneers of computing. They have all been ridiculed by many. Luckily for the world, they have also persisted in their folly. And have inspired many others.
I gotta apologize. My post was a knee-jerk reaction to the headline of the original post, and to the notion that Tesla's FSD program was "marketing hype". The blog post by Brooks is actually very interesting to read. Thank you for sharing it - I just upvoted.
Regarding the "marketing hype":
Phrasing it this way implies, to me, that Tesla wasn't really seriously working on the problem, or that FSD by way of cameras + neural nets is an approach that has already been demonstrated to be a dead end. Neither of which I think is true.
It's true that Tesla has been promising a coast-to-coast autonomous drive since 2016. They have, as they say themselves, "egg on their face". Also, it's true that one can argue over several of their design decisions. But I think there is a difference between a group of people working towards a hard goal, and a group of people knowingly creating unrealistic expectations for marketing reasons.
> The infamous thought experiment, flawed as it is, does demonstrate one thing: physics alone can’t explain consciousness
Neither can math, by the way, and by extension computation. There is no such thing as a computational theory of consciousness unless consciousness is redefined to be whatever is done by a Turing machine.