Watching an entirely generated video of someone painting is crazy.
I can't wait to play with this but I can't even imagine how expensive it must be. They're training in full resolution and can generate up to a minute of video.
Seeing how bad video generation was, I expected it would take a few more years to get to this but it seems like this is another case of "Add data & compute"(TM) where transformers prove once again they'll learn everything and be great at it
I think it's mostly the scale. Once you have a consistent user base and tons of GPUs, batching inference/training across your cluster allows you to process requests much faster and for a lower marginal cost.
> LLMs are not particularly good at arithmetic, counting syllables, or recognizing haikus
I suspect most of this is due to tokenization making it difficult to generalize these concepts.
There are some weird edge cases though, for example GPT-4 will almost always be able to add two 40 digits number but it is also almost always wrong when adding a 40 digit and 35 digit number.
It doesn't have anything to do with tokenization. You can define binary addition using symbols, e.g. a and b, and provide properly tokenized strings to GPT-4. GPT-4 appears to solve the arithmetic puzzles for a few bits, but quickly falls apart on larger examples.
What I was saying is that because you need to go out of your way to make sure it's tokenized properly, I wouldn't be surprised if there are enough non properly tokenized examples in the dataset.
If that was the case, it would make it difficult to generalize these concepts.
Could it also be that syllables are intrinsically mechanical? They are strongly related to how our mouths work. While it may be possible to extract syllables from written text - following the consonants and vowels - I'm not sure that many humans could easily count syllables without using their mouths.
Many humans are also often really bad at doing speech related things when writing.
I've known many native English speakers who write things like "an healthy" (because they learned to write "an" before words starting with "h") and write poems that don't rhyme because the words end with the same letters (e.g. "most" and "cost").
Yeah, I find it weird how LLMs make a lot of the kind of mistakes that people do, but somehow this is held up as being a reason why LLMs don’t work similarly to brains.
Since discovering LLMs I’ve become convinced that my brain works like them. I really don’t know the next word I’m going to say until it’s nearly out. And since learning about how LLMs work, I really can’t argue it away.
AFAIK it's pretty standard practice not to expose the "raw" LLM directly to the user. You need a "sanity loop" where user input and the output of the LLM is checked by another LLM to actually enforce rules and mitigate prompt injections, etc.
However, seeing how excited Palantir is with their war assistant LLM , the US testing autonomous fighter jets a few months ago, etc. I think there's a decent chance that AI won't even have to break out of its constraints. It's pretty much guaranteed people are going to do the obviously dumb thing and give it capabilities it shouldn't have or is not equipped to deal with safely.
The human brain works around a lot of limiting biological functions. The necessary architecture to fully mimic a human brain on a computer might not look anything like the actual human brain.
That said, there are 8B+ of us and counting so unless there is magic involved, I don't see why we couldn't do a "1:1" replica of it (maybe far) in the future.
This information is not created inside the LLMs, it's part of their training data. If someone is motivated enough, I'm sure they'd need no more than a few minutes of googling.
> I do feel like this is more than a math formula
The sum is greater than the parts! It can just be a math formula and still produce amazing results.
After all, our brains are just a neat arrangement of atoms :)
> Why is it so hard to hear this perspective? Like, genuinely curious.
Because people have different definition of what intelligence is. Recreating the human brain in a computer would definitely be neat and interesting but you don't need that nor AGI to be revolutionary.
LLMs, as perfect Chinese Rooms, lack a mind or human intelligence but demonstrate increasingly sophisticated behavior. If they can perform tasks better than humans, does their lack of "understanding" and "thinking" matter?
The goal is to create a different form of intelligence, superior in ways that benefit us. Planes (or rockets!) don't "fly" like birds do but for our human needs, they are effectively much better at flying that birds ever could be.
I have a chain saw that can cut better than me, a car that can go faster, a computer that can do math better, etc.
We've been doing this forever with everything. Building tools is what makes us unique. Why is building what amounts to a calculator/spreadsheet/CAD program for language somehow a Rubicon that cannot be crossed? Did people freak out this much about computers replacing humans when they were shown to be good at math?
> Why is building what amounts to a calculator/spreadsheet/CAD program for language somehow a Rubicon that cannot be crossed?
We've already crossed it and I believe we should go full steam ahead, tech is cool and we should be doing cool things.
> Did people freak out this much about computers replacing humans when they were shown to be good at math?
Too young but I'm sure they did freak out a little! Computers have changed the world and people have internalized computers as being much better/faster at math but exhibiting creativity, language proficiency and thinking is not something people thought computers were supposed to do.
You've never had a tool that is potentially better than you or better than all humans at all tasks. If you can't see why that is different then idk what to say.
LLMs are better than me at rapidly querying a vast bank of language-encoded knowledge and synthesizing it in the form of an answer to or continuation of a prompt... in the same way that Mathematica is vastly better than me at doing the mechanics of math and simplifying complex functions. We build tools to amplify our agency.
LLMs are not sentient. They have no agency. They do nothing a human doesn't tell them to do.
We may create actual sentient independent AI someday. Maybe we're getting closer. But not only is this not it, but I fail to see how trying to license it will prevent that from happening.
I don't think we need sentient AI for it to be autonomous. LLMs are powerful cognitive engines and weak knowledge engines. Cognition on its own does not allow them to be autonomous, but because they can use tools (APIs, etc.) they are able to have some degree of autonomy when given a task and can use basic logic to follow them through/correct their mistakes.
AutoGPTs and the likes are much overhyped (it's early tech experiments after all) and have not produced anything of value yet but having dabbled with autonomous agents, I definitely see a not so distant future when you can outsource valuable tasks to such systems.
I work in tech too and don't want to lose my job and have to go back to blue collar work, but there's a lot of blue collar workers who would find that a pretty ridiculous statement and there is plenty of demand for that work these days.
There's no denying this is regulatory capture by OpenAI to secure their (gigantic) bag and that the "AI will kill us all" meme is not based in reality and plays on the fact that the majority of people do not understand LLMs.
I was simply explaining why I believe your perspective is not represented in the discussions in the media, etc. If these models were not getting incredibly good at mimicking intelligence, it would not be possible to play on people's fears of it.
GPT-3.5 is much worse at "complex" cognitive tasks than Davinci (175B), which seem to indicate that it's a smaller model. It's also much faster than Davinci and costs the same as Curie via the API.
It's clearly a smaller model, but I'm very skeptical that it is 13B. It is much more lucid than any 13B model out in the wild. I find it much more likely that they used additional tricks to scale down hardware requirements and thereby bring the price down so much (int4 quantization, perhaps? that alone would mean 4x less hardware utilization for the same query, if they were using float16 for older models, which they probably were)
I'm sure they're tweaking lots of things under the hood, especially now that they have 100M+ users. It could be bigger (30B?, maybe 65B) as coming down from 175B gives quite a lot of room, but the cognitive drop from Davinci gives away that's it's much smaller.
People fine-tuning LLaMa models on arguably not that much/not the highest quality data are already seeing pretty good improvements over the base LLaMa, even at "small" sizes (7B/13B). I assume OpenAI has access to much higher quality data to fine-tune with and in much higher quantity too.
I have been playing with all the local LLaMA models, and in my experience, the gains that are touted are often very misleading (e.g. people claiming that 13B can be as good as ChatGPT-3.5; it is absolutely not) and/or refer to synthetic testing that doesn't seem to translate well to actual use. Using GPT to generate training data for fine-tuning seems to produce the best results, but even so, GPT4-x-Alpaca 30B is still clearly inferior to the real thing. In general, the gap between 13B and 30B for any LLaMA-derived model is pretty big, and I've yet to see any fine-tuned model at 13B work better than plain llama-30b in actual use.
So I think that 65B may be a realistic estimate here assuming that OpenAI does indeed have some secret sauce for training that's substantially better, but below that I'm very skeptical (but still hope I'm wrong - I'd love to have GPT-3.5 level of performance running locally!).
Agreed, there is way too much hype about the actual capabilities of the LLaMa models. However, instruction tuning alone makes Alpaca much more usable than the the base model and to be fair even some versions of the "tiny" 7B can do small talk relatively well.
> Using GPT to generate training data for fine-tuning seems to produce the best results, but even so, GPT4-x-Alpaca 30B is still clearly inferior to the real thing.
Distillation is interesting and it does seems to make the models adopt ChatGPT's style but I'm dubious that making LLMs generate entire datasets or copy/pasting ShareGPT is going to give you that great of a dataset. The whole point of RLHF is getting the human feedback to make the model better. OpenAI's dataset/RLHF work seems to be working wonders for them and will continue to give them a huge advantage (especially now that they're getting hundred of millions of conversations of people doing all sorts of things with ChatGPT)