Absolutely not. One thing happens because of a set of physical laws that govern the universe. Those laws were discovered through an enormous number of observations of many phenomena, made by a huge number of people over (literally) thousands of years, and they have given us a standard model that is broadly comprehensive and extremely robust, correctly predicting millions or perhaps billions of separate events every day.
The other thing we have only a small number of observations of, spanning the last 50 or 60 years but mostly the last 5 years or so. We know some of the mathematical features of the phenomena we are observing, but not all of them, and there is a great deal going on that we don't understand (emergence in particular). What we are seeing contradicts much of the academic field of linguistics, so we don't have a theoretical basis for it outside of the maths. The maths (linear algebra) we understand well, but we don't really understand why this particular formulation works so well on language-related problems.
The models will probably improve, but we can't naively assume that will simply continue. One very strong result we have seen time and time again is that the computation and training-set size required grow much faster than capability does. For every increase of delta x we want in capability, we seem to pay at least x^n (n > 1) in computation and training data. That means at some point further increases in capability become infeasible unless much better architectures are discovered, and it's not clear where that point lies.
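As a rough illustration of why that bites: the published scaling-law results report that loss falls only as a small power of compute, so each fixed-size improvement costs a multiplicatively larger amount of compute. The constants below (a, b, the loss targets) are invented purely for illustration, not measured values; this is a sketch of the shape of the relationship, not anyone's actual scaling curve.

    # Sketch only: assume loss follows a power law in compute, L(C) = a * C**(-b).
    # The constants a and b here are made up for illustration.

    def compute_needed(target_loss, a=10.0, b=0.05):
        """Compute C required to reach target_loss under L(C) = a * C**(-b)."""
        return (a / target_loss) ** (1.0 / b)

    base = compute_needed(2.0)    # compute to reach a loss of 2.0
    better = compute_needed(1.9)  # compute to reach a modest 5% improvement
    print(f"extra compute for a 5% better loss: {better / base:.1f}x")
    # With b = 0.05, each 5% reduction in loss costs roughly (1/0.95)**20 ~ 2.8x
    # more compute, so constant-sized capability gains get rapidly more expensive.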