Hacker News | _def's comments

Woah, this time I even caught it before the status page reported anything - I thought they were rate-limiting me.

If GitHub Actions breaks, I now assume it's them and not me. GitHub needs to prioritize stability ahead of AI features.

It seems to have started slowly. For me, GitHub releases have been failing to serve requests for hours already.

I don't get emails for my calendar events though (which is kinda important for my workflow, as my inbox is my task backlog).

Anyone got a link to some community work on the open source side? Sounds like useful devices, if you fix the issues mentioned.

> Build out what that tech debt is costing the company and the risk it creates

How do you do that? Genuine question.


If there's a legit, measurable performance or data integrity problem, start with that. If most of your production bugs come from a specific module or service, document it.

If it is only technical debt that is hard to understand or maintain, but otherwise works, you're going to have a tougher time building a case unless you build a second, better version and show the differences. But you could collect other opinions and present those.

Ultimately you have to convince them to spend the time (aka money) on it, and to do it without making things worse. That is easiest to do with metrics instead of opinions.


In my experience, development has become too compartmentalized. This is why implementing even basic features turns into an inefficient, frustrating game of telephone.

The rise of AI is also raising (from my observations) the engineer's role to be more of a product owner. I would highly suggest engineers learn basic UI/UX design principles and understand Gherkin behavior scenarios as a way to outline or ideate features. It's not too hard to pick up if you've been a developer for a while, but this is where we are headed.
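For anyone who hasn't seen Gherkin used this way, here's a minimal sketch of outlining a feature as a scenario and wiring it up with Python's `behave` package. The feature text, step names, and discount rule are all made up for illustration:

```python
# features/discount.feature (illustrative) would contain:
#
#   Feature: Discount codes
#     Scenario: Applying a valid code
#       Given a cart totaling $50
#       When the user applies the code "SAVE10"
#       Then the total drops to $45
#
# Each step then maps to a small function via behave's decorators:
from behave import given, when, then

@given('a cart totaling ${amount:d}')
def step_cart(context, amount):
    context.cart = {"total": amount}

@when('the user applies the code "{code}"')
def step_apply(context, code):
    if code == "SAVE10":  # hypothetical 10%-off rule
        context.cart["total"] = round(context.cart["total"] * 0.9)

@then('the total drops to ${expected:d}')
def step_check(context, expected):
    assert context.cart["total"] == expected
```

The value for ideation is the plain-English feature text at the top; the step definitions come later, if at all.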


If it's been around for a while, look at the last year's worth of projects and estimate the total delay caused by the specific piece of tech debt. Go through old Jira tickets etc. and figure out which ones were affected.

You don't need to be anywhere close to exact, it's just helpful to know whether it costs more like 5 hours a year or 5 weeks a year. Then you can prioritize tech debt along with other projects.
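One way to turn that ticket survey into a number, sketched in Python. The CSV export, its columns, and the hourly rate are all hypothetical; adapt them to whatever your tracker gives you:

```python
import csv

# Rough yearly cost of one piece of tech debt, from a hand-tagged export
# of last year's affected tickets ("debt_affected_tickets.csv" with a
# "delay_hours" column -- both names are made up for this sketch).
HOURLY_RATE = 120  # assumed fully-loaded engineering cost, $/hour

total_hours = 0.0
with open("debt_affected_tickets.csv", newline="") as f:
    for row in csv.DictReader(f):
        total_hours += float(row["delay_hours"])

print(f"~{total_hours:.0f} hours/year, "
      f"roughly ${total_hours * HOURLY_RATE:,.0f}/year")
```

Even an order-of-magnitude result is enough to rank the debt against other projects.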


It takes guts to say “this 1-month feature would be done in a couple of days by a competent competitor using modern technology and techniques”, and the legendary “I reimplemented it in <framework> over the weekend” is often not well received.

But sometimes drastic measures and hurt feelings are needed to break out of a bad attractor. Just be sure you’re OK with leaving the company/org if your play does not succeed.

And know that, as the OP describes, it’s a lot about politics. If you convince management that there is a problem, you have severely undermined the existing technical leadership. Game out how that could unfold! In a small company maybe you can be the new TL, but probably don’t try to unseat the founder/CTO. In a big company you are unlikely to overturn the many layers of technical leadership above you.


> hurt feelings

This is why I incessantly preach to my coworkers: "you are not your job". Don't get emotionally attached to it; it's not your child, it's a contraption to solve a puzzle. It should be easy, even a relief, to scrap it in favor of a better contraption, or of not having to solve the problem at all.


More importantly, you are not your code.

This is actually harder for more senior/managerial folks: often they'll build/buy/create something that's big for their level, and now they're committed to that particular approach, which can end up being a real problem, particularly in smaller orgs.

Once upon a time, I worked for a lead who got really frustrated with our codebase and decided to rewrite it (over the weekends). This person shipped a POC pretty quickly and got management buy-in, but then it turned out that it would take a lot more work to make it play well with everything else.

We persevered and moved the code over (while still hitting the product requirements) across a two-year period. As we were finishing the last part, it became apparent that the problem we now needed to solve was a different one, and all that work turned out to be pointless.


Very few people's brains work like this. It requires constant maintenance, and people fall into the trap easily because they are held accountable for the outcomes, and it's easy to pretend your ideas would have saved everyone from the disaster your fellows led you into.

Just like in every League of Legends game: it can't possibly be your fault!


But it is their code. It's their achievement. It's their mark on the world that says they were needed and did something useful. They struggled, and their passion gave them the strength to get through it.

It's how they get to be the experts that are needed.

Replacing their code IS replacing their expertise and therefore them. How would you expect words to change that?


> the perception that your team is getting a lot done is just as important as getting a lot done.

This might be true. But I hate it. I think I should quit software engineering.


I think I've sort of come to this conclusion, which gives me more inner peace, but I haven't exactly gotten better at communicating my thoughts. Hence others often assume I "play" the game as most do, when in fact I don't; I'm just a bit weird and introverted. Any tips on this?


I swear I've never seen the waterfall disappear, and I've never seen agile work out so far. I don't say it can't work, but I haven't seen it yet.

What I really want is to be able to do the things I'm good at. Usually that is not what gets assigned to me or is next in line.


That doesn't make any sense, as it presumes the arguments here are not based on science and the common sense of experts, which is not the case.


The letter is not based on science and the common sense of experts.

You can read the letter here: https://www.iccl.ie/wp-content/uploads/2025/11/20251110_Scie...

It doesn't make any positive claims, other than asserting that a statement from a budget speech relied on marketing claims "driven by profit-motive and ideology" that are "manifestly bound with their financial imperatives". So: exactly the same AI-skeptic line of attack that's currently being played out in forums and social media.

If you look at the signatories and randomly sample a few, it's a lot of people in social sciences, gender studies, cultural studies, branches of AI critique (e.g. AI safety), linguistics, and the occasional cognitive scientist. These aren't the people who have the technical expertise to evaluate the current state of AI, however impressive their credentials are in their own fields.


That doesn't make them incorrect; investors, media, and even many developers have been duped by the impressive linguistic human mimicry that LLMs represent.

LLM/"AI" tools _will_ continue to revolutionize a lot of fields and make tons of glorified paper pushers jobless.

But they're not much closer to actual intelligence than they were 10 years ago; the singularity-level upheavals that OpenAI et al. are valued on are still far away, and people are beginning to notice.

Spending money today to buy heating elements for 2030 is mostly based on FOMO.


This is a different claim from the one I was responding to, which was that the letter was based on science and the common sense of experts.

If you grant that it wasn't, then we're in agreement, although your saying that people have been "duped" is somewhat begging the question.

At any rate, my goal here isn't to respond to every claim AI skeptics are making, only to point out that taking an anti-science view is more risky to Europe than a politician stating that AI will approach human reasoning in 2026. AI has already approached or surpassed human reasoning in many tasks so that's not a very controversial opinion for a politician to hold.

And it's a completely separate question from whether the market has valued future cash flows of AI companies too highly or whatever debates people want to have over the meaning of intelligence or AGI.


You're asserting that they're unscientific by sampling some random signatories.

Looking through the signatories a bit more closely, there's a bunch of comp-sci professors and PhDs, some of whom have worked directly with neural-network-based methods, and a bunch of others in adjacent fields related to speech systems (fields I encountered during my studies, which have since been upended by neural networks), so they should also have a fair grasp of what capabilities have been added over the years.

One of the papers listed in the letter you linked does seem to cut directly to the argument: that correlations in the data LLMs store, by successfully encoding knowledge, give people an exaggerated view of AI's capabilities.

I do agree that we shouldn't base policy on unscientific claims, and that's the main crux, since von der Leyen's statements mostly seem to be parroting Altman's hype (and Altman is primarily an executive with a vested interest in keeping up the valuation of OpenAI to justify all the investments).


What do people often get wrong about decoupling capacitors?


Not quite a list of fallacies, but some words about how to do it well:

Me at eevblog: https://www.eevblog.com/forum/projects/location-and-value-of...

A discussion here a while back (not all of which I agree with): https://news.ycombinator.com/item?id=42830948


A few things for starters:

- you have to look at it in the frequency domain as well;

- the speed of light is too slow;

- capacitors are often inductors, even more so when mounted on a PCB;

- the capacitance is not what is written on the component.

I am teaching this to robotics and computer engineering MSc students. Quite nice intro book into the topic that I recommend to my students: https://www.oreilly.com/library/view/principles-of-power/978...
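To make the "capacitors are often inductors" point concrete, here is a tiny numeric sketch of the usual series-RLC model of a real capacitor. The ESR/ESL values are illustrative guesses, not from any datasheet:

```python
import numpy as np

# A real decoupling cap behaves as a series R-L-C (ESR + ESL + C);
# above the self-resonant frequency it is effectively an inductor.
C   = 100e-9   # farads -- nominal 100 nF MLCC (illustrative)
ESL = 1e-9     # henries -- package plus via/trace loop (illustrative)
ESR = 0.01     # ohms (illustrative)

f = np.logspace(5, 9, 5)                  # 100 kHz .. 1 GHz
w = 2 * np.pi * f
Z = np.sqrt(ESR**2 + (w * ESL - 1 / (w * C))**2)

srf = 1 / (2 * np.pi * np.sqrt(ESL * C))  # ~15.9 MHz with these values
print(f"self-resonant frequency: {srf / 1e6:.1f} MHz")
for fi, zi in zip(f, Z):
    print(f"{fi:10.3e} Hz  |Z| = {zi:8.3f} ohm")
```

Below the self-resonant frequency the 1/(wC) term dominates; above it the w*ESL term does, which is why mounting inductance matters so much.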


You wrote about "small, honest teams" - the older I get, the stronger my hunch that small teams/companies are a great way to go for me. Basically, choose some field you enjoy working in, with people you like. Any thoughts on how to find something like this? I feel like it's the kind of thing you have to start yourself, but I can't take much risk.


My experience in finding one (15 people at the company I’m currently at, and I’m one of 3.5 engineers - the .5 because the founder still codes more than we’d like him to) was effectively reaching out to companies that I knew didn’t have job postings up and were the size I’d fit into. I learned quickly that not every vacancy is posted publicly.


> not every vacancy is posted publicly

Thanks for this insight; hopefully it will help me.

Also, love the bit "small, honest teams". Aligns really well with my biases.


Solid advice, thank you very much.

