As a new college grad I might be able to add some insight.
We're stuck in a stalemate: the sheer volume of applications that employers have to handle and applicants have to send makes both sides take shortcuts, leaving each side wondering why the other isn't trying.
If somebody has to send in 300-500 applications (which is not unheard of) and answer the same questions till they go blind, it's not surprising that certain things are missing or people don't care. Applicants don't have any reason to believe their info isn't thrown in the trash by an LLM as soon as it is sent.
Lazy people will always be a problem but until there is transparency or trust developed I doubt we will see meaningful change.
Look at the power dynamics, then. Who has more power in this situation: people with rent and mortgages, or companies with more money than God? Companies could simply stop using LLMs tomorrow and be fine. They brag about laying off thousands while turning record profits; they can turn off the slop machines.
Let's not blame the people with no power in this situation.
I agree that is excessive. But I would hate it equally, if not more, to learn from magic rules delivered by the professor in the sky. The info doesn't stick for me unless I understand the intuition behind the reasoning.
Yeah, I had a magic-rules-first experience in my EE program and it really didn't work for me at all. I just couldn't internalize the pretty basic "rules" because I kept overthinking the more abstract ways of conceptualizing everything, which only confused me more. I'm thankful, though, because it let me learn quickly that I was a lot better at code than circuits; I probably would've been screwed if it had taken me longer to reach that point in my education. But the magic rules just did not work for me personally as a way to understand things. I'm sure others do a lot better just jumping right in.
UConn had a Racket programming course for maybe a decade up until last year. Enough people complained that it was too hard and a weed-out course and the administration dropped it. Yet another blunder by the CSE department.
I appreciate your faith in humanity. However, you would be surprised at the lengths people will go to avoid thinking for themselves. Ex: a person I sit next to in class types every single group discussion question into ChatGPT. When the teacher calls on him, he reads the answer word for word. When the teacher follows up with another question, you hear "erh, uhm, I don't know" as he fumbles out an answer. Especially in the context of learning, people who have self control and use AI deliberately will benefit, but those who use AI as a crutch to keep up with everyone else are ill prepared. The difference now is that shoddy work/understanding from AI is passable enough that somebody who doesn't put in the effort to understand can still get a degree like everybody else.
I'd suggest this is a sign that most "education" or "work" is basically pointless busy work with no recognizable value.
Perpetuating a broken system isn't an argument about the threat of AI. It's just highlighting a system that needs revitalization (and AI/LLMs are not that tool).