Says who? And more importantly, is this the boulder? All I (and many others here) see is people getting others to sponsor pushing some boulder, while screaming promises that aren't even consistent with the intermediate results that come out. This particular boulder may be on the wrong mountain, and likely is.
It all feels like doubling down on astrology because good telescopes aren't there yet. I'm pretty sure that when GPT-5 comes out, it will show some amazing benchmarks but shit itself in the third paragraph as usual in a real task. Because that has been constant throughout GPT's evolution, in my experience.
even if it kills us
Full-on sci-fi. In reality it will get stuck on a shell error message and either run out of money to exist or corrupt the system into losing connectivity.
The buzzkill when you fire up the latest, most powerful model only for it to tell you that peanut is not typically found in peanut butter and jelly sandwiches.
I don't think anyone is seriously working on making them provide accurate answers to context-free questions. Using them that way is just the wrong use case.
People are working -very- seriously on trying to kill hallucinations. I'm not sure how you surmised the use case here, as nothing was given other than an example of a hallucination.
There's a difference between trying to get it to answer accurately based on the input you provide (useful) and trying to get it to answer accurately based on whatever may have been in the training data (not so useful).