Eh, this just feels like "software engineering simulator." I don't have autism, but a good bit of this feels familiar (am I on the spectrum?). I'm an introvert and have struggled to cope with corporate work for a while.
What helps:
- Challenging the idea that you need to mask to be successful. If masking is a recipe for burnout, then it actually seems like a strategy that will lower your chance of success. How much of the need here is self-imposed?
- Owning your calendar and timing for meetings to better suit your energy.
- Regular therapy and reflection, honestly.
- Regular exercise; it doesn't matter who you are or what form it takes, this is essential.
I can respect that this "simulation" fosters empathy, but worry that it also awfulizes/catastrophizes solvable problems. Figuring out functional routines and managing burnout is just as big a part of the job as writing code. It's very much a personal responsibility, maybe not in the job description, maybe harder for some than others, but it is our responsibility.
Heck, this isn’t even specific to software engineering. It’s basically just a “getting through the workday” simulator. I think there are a great many people who find working in an office exhausting. Personally, I was so much happier once I switched to remote work.
> Challenging the idea that you need to mask to be successful... How much of the need here is self-imposed?
Autistic people don't come into the world as fully formed adults with irrational ideas about the need to mask. They start off as children and attempt to socialize with other children. The autistic child in a neurotypical world, just "being themselves," finds themselves repeatedly kicked out of friend groups and rejected by everyone, sometimes including their parents. This is deeply traumatic to a young child's psyche. Unloved and rejected, a solution appears! I'll just pretend to be like the other kids, even though they're stupid and wrong. They may actually, objectively, be stupid, but apparently they don't like being told that to their face. Pile on another decade or two of this, and hey, this child, now older and wiser, has autistic masking tendencies that cause them to burn out. Blame the now-adult person with autism all you want to absolve yourself of any need to concern yourself with other people's problems, but that's not actually helpful for those suffering from autistic burnout.
I mask as a coping mechanism for ADHD and Social Anxiety. This masking causes me harm. I learned it in the way you describe.
The most helpful learning I've gotten through years of therapy has been to: (1) recognize what I'm doing (2) not beat myself up about it (3) try small steps to change my behavior so that I can feel good about it.
I'm the only person who can unlearn this for myself. I don't blame anyone who masks, and have nothing but empathy for the experience, but I'm proposing they can find a different way.
We'd need a rigorous definition of what it means to mask, and to agree on which behaviors should be considered masking and which are simply being socialized, and on what's necessary to exist in a society and what's not, before we could have a detailed, productive conversation.
Therapy absolutely helps. Unlearning maladaptive behaviors rooted in childhood trauma is part of being a well-adjusted adult.
It takes energy to not do every impulsive thing that comes to mind. Fine, don't call it masking to not give in to them. Whatever you want to call it, though, it's exhausting.
> Challenging the idea that you need to mask to be successful. If masking is a recipe for burnout, then it actually seems like a strategy that will lower your chance of success. How much of the need here is self-imposed?
Masking is not always conscious; in fact, it's largely unconscious. So many autistic people will go through their day around neurotypical people, feel burnt out by lunch, and have no idea why. They don't necessarily realize they're burning tons of mental effort just talking to people or dealing with stimuli.
Autistic people learn to mask just to get by day to day. It's not like they got issued a "How to be Autistic: Masking for Success" guide book when they were born.
I still think it's important to (1) notice what's causing the problem and bring it into consciousness, and (2) understand the behavior (in this case: masking) and reckon with it if it's causing a bad outcome (burnout).
Easier said than done. For me, therapy has been life-changing for helping me notice and understand unintentional behaviors.
Isn't that just being introverted? Also, if it's unconscious then a "simulator" shouldn't present an option. The PC should simply react automatically to the detriment of some stat. It sounds like for something to qualify as "masking" it must be a conscious choice, otherwise it's some other thing.
> It's very much a personal responsibility, maybe not in the job description, maybe harder for some than others, but it is our responsibility.
You might as well be telling a wheelchair-bound person that it's their responsibility to find a way up a flight of stairs or maneuver a cramped bathroom stall.
Eh. No, not really. There is a threshold to even be considered on the spectrum.
Most people have 2 legs and 2 arms. Some people don't (birth defects, injuries, accidents, disease, etc). There is a spectrum of missing appendages, but to say everyone is missing at least part of an appendage is not correct.
Ok, I'll bite. What's that threshold to be considered "on the spectrum"? Is there a threshold on the other end? If so, what is it? My point is that everybody exhibits some of the symptoms typically associated with autism or Asperger's. For example: getting exhausted from being around people; sensory overload; pattern-finding in everything. It differs for each person. I frequently look for visual patterns around me, and it's satisfying to find one. Does that put me "on the spectrum"? Some sounds make me cringe. What about that? How many do there have to be? The whole reason it's called a "spectrum" is that there is no one thing that can define it.
DSM-5 is the current standard for diagnosing and classifying mental health conditions. I don't have the direct quote from the book handy, but I believe this guide from Stanford is accurate: https://med.stanford.edu/content/dam/sm/neonatology/document...
Essentially, there's a collection of behaviors you need to exhibit to be considered autistic. Then, the "spectrum" part is the severity of those behaviors.
"Visible light" is just a moniker assigned to a subset of the electromagnetic spectrum. Gamma waves are on the spectrum, orange is on the spectrum, infrared is on the spectrum.
The definition of autism has changed over the years to pull in many more people, so if you're an older software engineer, you may be autistic under the up-to-date definition.
With the DSM-5 and its removal of Asperger's as a separate diagnosis, the diagnostic criteria have been made stricter. People who would formerly have been diagnosed with Asperger's could theoretically no longer be diagnosed under ASD.
The percentage of people with autism in a population is very stable and we know there is a huge genetic component to it.
People are getting diagnosed more, but the number of people with autism has likely stayed stable.
Which is a really, really good thing. A diagnosis is life-changing. The earlier you get diagnosed and the more supportive your network is, the better the outcome.
Hot take: Junior devs are going to be the ones who "know how to build with AI" better than current seniors.
They are entering the job market with sensibilities for a higher level of abstraction. They will be the first generation of devs that went through high school + college building with AI.
Where did they learn sensibility for a higher level of abstraction? AI is the opposite: it will do what you prompt and never stop to tell you it's a terrible idea. You'll have to dig all the way down into the details yourself to learn that the big picture it chose for you was faulty from the start. Convert some convoluted bash script to run on Windows because that's what the office people run? Get strapped in for the AI PowerShell ride of your life.
The self-taught programmer's idea was coded by someone who is no smarter than they are. It will never confuse them, because they understand how it was written. They will develop along with the projects they attempt.
The junior dev who has agents write a program for them may not understand the code well enough to really touch it at all. They will make the wrong suggestions to fix problems caused by inexperienced assumptions, and will make the problems worse.
i.e. it's because they're junior and not qualified to manage anybody yet.
The LLMs are being thought of as something to replace juniors, not to assist them. It makes sense to me.
> AI is the opposite: it will do what you prompt and never stop to tell you it's a terrible idea
That's not true at all, and hasn't been for a while. When using LLMs to tackle an unfamiliar problem, I always start by asking for a comparative review of possible strategies.
In other words, I don't tell it, "Provide a C++ class that implements a 12-layer ABC model that does XYZ," I ask it, "What ML techniques are considered most effective for tasks similar to XYZ?" and drill down from there. I very frequently see answers like, "That's not a good fit for your requirements for reasons 1, 2, and 3. Consider UVW instead." Usually it's good advice.
At the same time I will typically carry on the same conversation with other competing models, and that can really help avoid wasting time on faulty assumptions and terrible ideas.
Do you think that kids growing up now will be better artists than people who spent time learning how to paint because they can prompt an LLM to create a painting for them?
Do you think humanity will be better off because we'll have humans who don't know how to do anything themselves, but they're really good at asking the magical AI to do it for them?
I think the argument is that growing up with something doesn't necessarily make you good at it. I think it rings especially true for higher-level abstractions. The upcoming generation is bad with tech because tech has become more abstract, more of a product and less something to tinker with and learn about. Tech just works now and requires little assistance from the user, so little is learned.
Yeah, I have a particular rant about this with respect to older generations believing "kids these days know computers." (In this context, probably people under 18.)
The short version is that they mistake confidence for competence, and the younger consumers are more confident poking around because they grew up with superior idiot-proofing. The better results are because they dare to fiddle until it works, not because they know what's wrong.
I think this disregards the costs associated with using AI.
It used to be that you could learn to program on a cheap old computer that a majority of families could afford. It might have run slower, but you still had all the same tooling found on a professional's computer.
To use LLMs for coding, you either have to pay a third party for compute power (and access to models), or you have to provide it yourself (and use freely available ones). Both are (and IMO will remain) expensive.
I'm afraid this builds a moat around programming that will make it less accessible as a discipline. Kids won't just tinker their way into a programming career as they used to, if it takes asking for mom's credit card from minute 0.
As for HS + college providing a CS education using LLMs, spare me. They already don't do that when all it takes is a computer room with free software on it. And I'm not advocating for public funds to be diverted to LLM providers either.
I'm curious about the "environmentally friendly" promise. How meaningful/measurable would removing these tags be?
I think there's something to be said for efficiency/no wasted bytes, but better for the environment feels like a major stretch/beside the point...
It's like saying "if we all cut out unnecessary adjectives from our speech, it would save 100k gallons of water per day." Could that be true, maybe? But that's not the thing that makes brevity meaningful/important.
(or: Tongue-in-cheek joke that went over my <head>?)
Feels reasonable in the first few paragraphs, then quickly starts reading like science fiction.
Would love to read a perspective examining "what is the slowest reasonable pace of development we could expect." This feels to me like the fastest (unreasonable) trajectory we could expect.
Like an exponentially growing compute requirement for negligible performance gains, on the scale of the energy consumption of small countries? Because that is where we are, right now.
Even if this were true, it's not quite the end of the story is it? The hype itself creates lots of compute and to some extent the power needed to feed that compute, even if approximately zero of the hype pans out. So an interesting question becomes.. what happens with all the excess? Sure it probably gets gobbled up in crypto ponzi schemes, but I guess we can try to be optimistic. IDK, maybe we get to solve cancer and climate change anyway, not with fancy new AGI, but merely with some new ability to cheaply crunch numbers for boring old school ODEs.
> Important Corollary Two. If you show a nonprogrammer a screen which has a user interface which is 100% beautiful, they will think the program is almost done.
> People who aren’t programmers are just looking at the screen and seeing some pixels. And if the pixels look like they make up a program which does something, they think “oh, gosh, how much harder could it be to make it actually work?”
> Extremely conservatively, if this saves a developer ~30 minutes per day, it pays for itself.
And when it sends the same developer down a rabbit hole of "something weird is going on" for a couple of days (using your math: 2 days × 8 hrs × $50/hr = $800), does it still pay for itself?
If you lose 8 hrs to weirdness, but save 10 hrs somewhere else, then yeah it does pay for itself.
My argument is that it doesn't need to improve overall productivity much to have positive ROI. Whether you agree that these tools improve productivity or not is another question.
> If you lose 8 hrs to weirdness, but save 10 hrs somewhere else, then yeah it does pay for itself.
> My argument is that it doesn't need to improve overall productivity much to have positive ROI.
That makes sense.
Problem is, there exists a paradox in relying upon LLMs to make software solutions:
- People use them to "save time."
- Saving time by outsourcing understanding to LLMs circumvents learning.
- Without learning, the tool becomes a crutch.
- When the crutch breaks, there is nothing to fall back onto.