It's pretty amazing how much we've lowered our collective standards for article quality since the advent of AI generation. (Not just here, but everywhere.) This isn't a rote spelling mistake buried deep in the article; the spelling mistake is the very first thing you see.
Why would a serious author go with this image? Just a few years ago, misspelling "climate" and having a nonsensical political cartoon headline your article would have been disqualifying.
Whereas before, the air of sophistication conned you into thinking the authors knew what they were talking about; it took AI slop for you to see how bad things really are.
No, things are worse now. Our standards have lowered. Before now, there was no way to quickly produce low-effort, vacuous text without writing it yourself or copying it from another source.
Of course people could feign intelligence before, but it's much easier now and our standards are lower. This is a double whammy.
It’s the only thing I saw before I closed the browser tab. If you’re going to use AI to generate the very first thing a reader sees, proofread the damned thing so it doesn’t come off as amateurish.
Wow, that is a terrible image (yellow tinge indicative of gpt-image-1, spelling errors). I don't mind generative images being used in articles, provided that they:
A. Have some relevance to the actual content.
B. Don't exhibit glaringly obvious AI flaws (polydactyly, faces like melted wax candles, etc.).
It's amazing how little time people take to vet images that are intended to be the first thing viewers will see.
Reminds me of the image attached to the Twitter post by Karpathy (one of the founding members of OpenAI) announcing his education AI lab: