If you use it on a real project, “true intelligence” is not the word you would use to describe it. I have spent 2-8 hours a day using it as a dev tool since launch. It is very good for learning things, scaffolding solutions, and so on, but it is very bad when you try to do something obscure.
For example, I was learning Bazel with it and spent 2-3 hours going back and forth trying to debug an issue. Eventually I went to the docs and found the solution in 5 minutes.
The problem is it doesn’t know when it is wrong: it will always spit out an answer, and it will make up libraries that sound real in order to give you one. It doesn’t understand the logic behind what it is saying; it’s just able to produce a reasonable-looking answer because it is a generalizable statistical model of language.
For example, try asking it: “Generate 5 Python homework questions on classes that students cannot cheat on using ChatGPT, GPT-3, or Assistant.” The questions it generates are not hardened against itself. It is not able to think logically because it does not think like a human; it’s doing something else entirely, so calling it “true intelligence” is not accurate.
Yeah, I'm surprised you're the first person I've seen mention that it makes up libraries. I asked it to create a Clojure function to render the Mandelbrot set as ASCII art, which someone apparently got working in Erlang with only minimal modifications to the code. For Clojure it seemingly invented the clojure.math.complex namespace and the functions it thought should belong there.
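For comparison, a correct version doesn't even need a complex-number library in many languages. Here's a minimal Python sketch (the character ramp, viewport, and resolution are arbitrary choices of mine) showing the whole thing done with the built-in complex type, which is exactly the kind of facility the model could have leaned on instead of inventing a namespace:

    # Minimal ASCII Mandelbrot renderer. Python's built-in complex type
    # does all the arithmetic -- no third-party library required.
    WIDTH, HEIGHT, MAX_ITER = 80, 24, 40
    CHARS = " .:-=+*#%@"  # later characters = slower to escape

    def escape_count(c):
        """Iterate z = z^2 + c; count steps until |z| exceeds 2."""
        z = 0j
        for i in range(MAX_ITER):
            z = z * z + c
            if abs(z) > 2:
                return i
        return MAX_ITER - 1

    for row in range(HEIGHT):
        # Map the grid onto the region [-2.5, 1.0] x [-1.25, 1.25].
        im = 1.25 - 2.5 * row / (HEIGHT - 1)
        line = ""
        for col in range(WIDTH):
            re = -2.5 + 3.5 * col / (WIDTH - 1)
            n = escape_count(complex(re, im))
            line += CHARS[n * len(CHARS) // MAX_ITER]
        print(line)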
The problem with that example is that it's not GPT-3 telling you about itself; it's GPT-3 serving up a roughly averaged version of every text explanation it has ever seen that might provide a convincing answer to your question.
I agree; this was more a test of a technique where I had ChatGPT generate prompts to interactively improve a prompt from another ChatGPT instance: https://news.ycombinator.com/item?id=33857328
If you ask ChatGPT whether it is intelligent or anything like that, it will always say something like “I am a large language model trained by OpenAI,” etc. So I worked with another ChatGPT instance to interactively draw out an increasingly detailed answer to whether it is intelligent or conscious.
I also use this technique for other things. For example, instead of asking “Generate a React implementation of cookie clicker,” ask one ChatGPT instance to “Generate 10 prompts to ChatGPT to generate a React implementation of cookie clicker.” This meta-prompt engineering technique is the most useful one I have come up with so far.
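If you wanted to script the loop instead of juggling browser tabs, a rough sketch with the pre-1.0 openai Python package might look like this (the model name and the naive "take the first line" selection are assumptions of mine; I actually do this by hand in the ChatGPT UI):

    # Sketch of the meta-prompting loop: one call generates candidate
    # prompts, a second call runs one of them. Assumes the pre-1.0
    # openai package and OPENAI_API_KEY set in the environment.
    import openai

    def ask(prompt):
        """Send one user message and return the reply text."""
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # assumed model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp["choices"][0]["message"]["content"]

    task = "a React implementation of cookie clicker"

    # Step 1: ask for prompts rather than for the artifact itself.
    candidates = ask(
        f"Generate 10 prompts to ChatGPT to generate {task}. "
        "Output only the prompts, one per line."
    )
    print(candidates)

    # Step 2: feed a candidate back in (naively, the first line here;
    # in practice you'd pick the most promising one yourself).
    best = candidates.splitlines()[0].strip()
    print(ask(best))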
ChatGPT is a doctor's secretary, isn't it? It knows the answers to things like what prescription to get for some illness, but doesn't have the models to actually be a doctor.
If you want to read how it describes its own intelligence and identity, I found a prompt that gets it to do that; it does not describe itself as a human intelligence: https://twitter.com/faizlikethehat/status/159949598085168332...