We talk a lot about the risks of AI in schools, but those same risks apply in any learning environment.
I recently started a new job and I find that AI is making it so much harder for me to onboard. I am adjusting to my role much more slowly than my peers, who are using AI less. I am coding in a language I am unfamiliar with, which makes the lure of vibe coding stronger. I am at least skilled enough to recognize when Claude gives me an answer that either makes no sense or is unnecessarily verbose. But the more time I spend asking Claude to write code, the less I feel like I'm developing the skills that the job requires. Plus, when I submit a PR, I lack confidence in my own work, which just feels bad.
Honestly, another part of this is that I'm asking Claude to search through Slack and docs for answers to questions when I should just ask another person. The AI is feeding my social anxiety, luring me into avoiding human contact that I know will be good for my understanding as well as my general need for social interaction.
That all sounds like I am absolving myself of responsibility, but I think it's important to point out how a given technology can be especially addictive for a certain type of person and trap them in a negative behavioral cycle. If I hold off on relying on AI now, I suspect I can grow my skills to the point that I can delegate to AI the tasks that are rote and whose results are easy for me to verify. It feels challenging, but it's necessary.
I'd suggest going the route of having Claude teach you what you need to know. How do I uppercase this string? What's the best way to tackle this problem? Is there a standard way to do this thing? Then you learn along the way. You don't have to use it as a search engine; just ask it what you need to know in the moment. It will shake its token chains and give you something that's useful, especially for a beginner in the language. This way you can implement your plan of growing your skills and then starting to delegate to it later.
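(To illustrate the kind of bite-sized answer I mean: a question like "how do I uppercase this string" is usually a one-liner. Python shown here purely as an example, not assuming that's the language in question.)

```python
# Most languages have this built in; in Python it's the str.upper() method.
s = "hello, world"
print(s.upper())  # HELLO, WORLD
```

The point is you get the answer *and* the idiom, so next time you don't need to ask.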
I've been doing this, and it's a nice balance for me. Having Claude code things when you don't know how to evaluate its code seems like madness to me, but I guess I'm in the minority on this.
> How do I uppercase this string? What's the best way to tackle this problem? Is there a standard way to do this thing?
I strongly believe that it's better to just take a couple of slow weeks and read a good book about the technical matter you're dealing with. Bite-sized answers aren't a good way to become proficient.
No, and if you aren't exerting mental effort, your brain isn't learning; it's just being briefly entertained by the information. You need practice and association building to actually build knowledge and skill.
It is the worst time for the apprenticeship system (internships). Everyone expects you to ship fast and well with AI, but you barely have time to pick up any skills during the fast iteration.
I survived two rounds of layoffs at a company. Each time there was a message from the CEO, and a drawn out process of learning who had kept their job and who hadn't. It was supposed to be more humane but it ended up making things much worse.
When I was finally laid off, there was no notice at all. Ripping off the band-aid was better (though it still sucked).
Have you considered that different people have different beliefs, or do you think everyone who disagrees with you is just one person who thinks the same thing?
When challenged by the dictum that people often confuse nature with history, the response is to abandon arguments from human nature and replace them with a fantasy caveman land.
Caveman land implies human nature without needing to make an argument for it. It is so far in the past that there is limited evidence, and most people you encounter aren't anthropologists. So you can justify all your unexamined assumptions about present society with an appeal to the caveman land.
Ironically, all you need to craft a fantasy caveman land is an imagination. "Picture hunter gatherers, sitting around a campfire, carving rocks into Pokemon cards and trading them." What a great story! Anything is possible in caveman land.
But what if the only real way to break through avoidance patterns is to stop avoiding? What if the tradeoffs of LLMs are instant gratification and further atrophying of your executive functions?
I have gained a paranoid suspicion that our capacity to decrease immediate distress with technology has become so great that we are creating a world where people with certain temperaments can have their personalities become more and more extreme through the assistance of technologies which, for example, decrease the amount of interpersonal interaction required or prevent the need for deep focus.
> (Myself, I'm in a strange position with doppelgängers, because I simultaneously want real human connection and keep getting disappointed with many of the real humans).
I don't think you're in a strange position at all. You are just being honest about the undercurrent of much of modern technological development. AI is simply the apex of a trend to replace human interaction with a simulacrum, which, without your awareness, reduces your tolerance for the frustrations of real human interaction, thus making you more dependent on the simulacrum.
They've been rendered neurotic, or their neurotic tendencies have become dominant through habituation, or they have developed anti-social behaviors in response to a combination of temperament and a society that is unable to promote pro-sociality.