They regularly downgrade their GPTs.
GPT-4 now is about as good as GPT-3.5 was at launch.
Around 6-7 months ago there was a GPT-4 version that was really good; it could understand context and nuance extremely well, but it just went downhill from there. I won't pay for the current GPT-4 anymore.
While I agree that GPT-4 (in the web app) is not as good as it used to be, I don't think it's anywhere near GPT-3.5 level. There are many things web-app GPT-4 can do that GPT-3.5 couldn't do at ChatGPT's release (or now, AFAIK).
One thing I really dislike about hosted models is how opaque that behavior is. As a user I should never have to guess whether they've quietly reduced a model's capabilities to save on compute, for example.
This is why I'm excited for the growth of local model capabilities. I can much more reasonably expect that the model has not degraded and that it is using the full hardware capabilities it has been granted.
That's not an issue with open source, but with trust and structure. It can also happen in closed source, and there it would be even harder to spot, since you have no clue what is going on; maybe 1-2 people in the whole world could identify such an issue.
With open source, anyone can. The irony is that he found it because SSH logins felt too slow. This dude deserves a medal.
What needs to be controlled are system-relevant libraries that open up such possibilities, and builds should not include non-code data. Any binary blob is bad; anything that can't simply be read is bad.
If you put in plain code, people will review it and most of the time see what it does.
But you can hide the real payload in a binary blob, so such files should not be included in builds but shipped separately.
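The "ssh login felt too slow" detection mentioned above boils down to comparing wall-clock timings against an expected baseline. Here is a minimal sketch of that idea; the timed command, the baseline value, and the 2x threshold are all illustrative assumptions, not the discoverer's actual methodology (which also involved profiling CPU usage):

```python
import statistics
import subprocess
import sys
import time

def time_command(cmd, runs=5):
    """Run cmd several times and return the mean wall-clock seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)

# Illustrative stand-in for an ssh login; in practice you would time
# something like ["ssh", "user@host", "true"] against a known baseline.
mean = time_command([sys.executable, "-c", "pass"])
baseline = 0.05  # hypothetical "normal" duration in seconds
if mean > 2 * baseline:
    print(f"suspiciously slow: {mean:.3f}s vs baseline {baseline:.3f}s")
```

The point is less the script itself than the habit: a regression in something as mundane as login latency can be the only visible symptom of injected code doing extra work.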