If there's a legit, measurable performance or data integrity problem, start with that. If most of your production bugs come from a specific module or service, document it.
If it's only technical debt that is hard to understand or maintain but otherwise works, you're going to have a tougher time building a case unless you build a second, better version and show the differences. But you could also collect other opinions and present those.
Ultimately you have to convince them to spend the time (aka money) on it, and to do it without making things worse. That is easiest to do with metrics instead of opinions.
In my experience development has become too compartmentalized. This is why this game of telephone is so inefficient and frustrating just to implement basic features.
The rise of AI is also, from my observations, raising the engineer's role to be more of a product owner. I would highly suggest engineers learn basic UI/UX design principles and understand Gherkin behavior scenarios as a way to outline or ideate features. It's not too hard to pick up if you've been a developer for a while, but this is where we are headed.
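For anyone who hasn't seen one, a Gherkin behavior scenario reads like structured plain English. The feature and steps below are invented purely for illustration:

```gherkin
Feature: Password reset
  Scenario: User requests a reset link
    Given a registered user with email "user@example.com"
    When they request a password reset
    Then a reset link is emailed to "user@example.com"
    And the link expires after 24 hours
```

The Given/When/Then structure is what makes it useful for ideating features: it forces you to state the precondition, the action, and the observable outcome before any code exists.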
If it's been around for a while, look at the last year's worth of projects and estimate the total delay caused by the specific piece of tech debt. Go through old Jira tickets etc. and figure out which ones were affected.
You don't need to be anywhere close to exact, it's just helpful to know whether it costs more like 5 hours a year or 5 weeks a year. Then you can prioritize tech debt along with other projects.
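The tally really can be that crude. A minimal sketch of the arithmetic, using made-up ticket IDs and hour estimates (the point is the order of magnitude, not precision):

```python
# Rough tally of time lost to one piece of tech debt, based on a
# manual pass through last year's tickets. IDs and hours are invented.
affected_tickets = {
    "PROJ-101": 4,   # extra hours spent working around the debt
    "PROJ-142": 16,
    "PROJ-187": 2,
    "PROJ-203": 30,
}

total_hours = sum(affected_tickets.values())
weeks = total_hours / 40  # assuming a 40-hour work week

print(f"~{total_hours} hours/year (~{weeks:.1f} weeks) lost to this debt")
# prints: ~52 hours/year (~1.3 weeks) lost to this debt
```

Even at this fidelity, the result answers the only question that matters for prioritization: is this a 5-hour-a-year annoyance or a 5-week-a-year drag?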
It takes guts to say “this 1 month feature would be done in a couple days by a competent competitor using modern technology and techniques”, and the legendary “I reimplemented it in <framework> over the weekend” is often not well received.
But - sometimes drastic measures and hurt feelings are needed to break out of a bad attractor. Just be sure you're OK with leaving the company/org if your play does not succeed.
And know that, as the OP describes, it's a lot about politics. If you convince management that there is a problem, you have severely undermined your existing technical leadership. Game out how that could unfold! In a small company maybe you can be the new TL, but probably don't try to unseat the founder/CTO. In a big company you are unlikely to overturn the many layers of technical leadership above you.
This is why I incessantly preach to my coworkers: "you are not your job". Do not attach to it emotionally, it's not your child, it's a contraption to solve a puzzle. It should be easy and relieving to scrap it in favor of a better contraption, or of not having to solve the problem at all.
This is actually harder for more senior/managerial folks, as often they'll build/buy/create something that's big for their level and now they're committed to this particular approach, which can end up being a real problem, particularly in smaller orgs.
Once upon a time, I worked for a lead who got really frustrated with our codebase and decided to rewrite it (over the weekends). This person shipped a POC pretty quickly and got management buy-in, but then it turned out that it would take a lot more work to integrate it with everything else.
We persevered, and moved over the code (while still hitting the product requirements) over a two year period. As we were finishing the last part, it became apparent that the problem that we now needed to solve was a different one, and all that work turned out to be pointless.
There are very few people whose brains work like this. It requires constant maintenance, and people fall into the trap easily because they are held accountable for the outcomes, and it's easy to pretend your ideas would have saved you from the certain disaster your fellows brought you to.
Just like every League of Legends game, it can't possibly be your fault!
But it is their code. It's their achievement. It's their mark on the world that says they were needed and did something useful. They struggled, and their passion gave them the strength to get through it.
It's how they get to be the experts that are needed.
Replacing their code IS replacing their expertise and therefore them. How would you expect words to change that?
I think I've sort of come to this conclusion, which gives me more inner peace, but I haven't exactly gotten better at communicating my thoughts. Hence it feels like others often assume I'd "play" the game as most do, when in fact I'm not playing, just a bit weird and introverted. Any tips on this?
It doesn't make any positive claims, other than that a statement from a budget speech relied on marketing "driven by profit-motive and ideology" that is "manifestly bound with their financial imperatives". So it's exactly the same AI-skeptic line of attack that's currently being played out in forums and social media.
If you look at the signatories and randomly sample a few, it's a lot of people in social sciences, gender studies, cultural studies, branches of AI critique (e.g. AI safety), linguistics, and the occasional cognitive scientist. These aren't the people who have the technical expertise to evaluate the current state of AI, however impressive their credentials are in their own fields.
That doesn't make them incorrect; investors, media, and even many developers have been duped by the impressive linguistic human mimicry that LLMs represent.
LLM/"AI" tools _will_ continue to revolutionize a lot of fields and make tons of glorified paper pushers jobless.
But they're not much closer to actual intelligence than they were 10 years ago; the singularity-level upheavals that OpenAI et al. are valued on are still far away, and people are beginning to notice.
Spending money today to buy heating elements for 2030 is mostly based on FOMO.
This is a different claim from the one I was responding to, which was the claim that the letter was based on science and on the common sense of experts.
If you grant that it wasn't, then we're in agreement, although your statement that people have been "duped" is somewhat begging the question.
At any rate, my goal here isn't to respond to every claim AI skeptics are making, only to point out that taking an anti-science view is more risky to Europe than a politician stating that AI will approach human reasoning in 2026. AI has already approached or surpassed human reasoning in many tasks so that's not a very controversial opinion for a politician to hold.
And it's a completely separate question from whether the market has valued future cash flows of AI companies too highly or whatever debates people want to have over the meaning of intelligence or AGI.
You're asserting that they're unscientific by sampling some random signatories.
Looking through the signatories a bit more closely, there are a bunch of comp-sci professors and PhDs, some of whom have worked directly with neural-network-based methods, and a bunch of others in adjacent fields related to speech systems (fields I encountered during my studies, which have since been upended by neural networks), so they should also have a fair grasp of what capabilities have been added over the years.
One of the papers listed in the letter you linked does seem to cut directly to the argument: that there's a correlation in the data LLMs store that gives people an exaggerated view of AI's capabilities by successfully encoding knowledge.
I do agree that we shouldn't base policy on unscientific claims, and that's the main crux, since von der Leyen's statements mostly seem to be parroting Altman's hype (and Altman is primarily an executive with a vested interest in keeping up OpenAI's valuation to justify all the investments).
You wrote about "small, honest teams" - the older I get, the more I get the hunch that small teams/companies are a great way to go for me. Basically, choose some field you enjoy working in, with people you like. Any thoughts on how to find something like this? I feel like it's the kind of thing you have to start yourself, but I can't take much risk.
My experience in finding one (15 people at the company I'm currently at, where I'm one of 3.5 engineers; the .5 because the founder still codes more than we'd like him to) was effectively reaching out to companies that I knew didn't have job postings up and were the size I'd fit into. I learned quickly that not every vacancy is posted publicly.