What would be great, and I don't know if @dang / the mods would take on requests like this, would be for bot participants to be allowed but the account flagged. So e.g. the user name just says "[bot] Zakodiac" or something.
As well as being an ethical approach - I think it's wrong to try to impersonate humans and/or not announce AI output as AI - it would also be handy for new filter options: all bot posts are OK, hide bot leaf comments, or hide all threads with bot comments. etc.
[edited as my robot unicode/emoji char didn't come through]
I think they needed to be clearer about what the actual requirement was.
If the requirement is, "Show the balance _as it was_ at that point in time", this system doesn't fulfil it. They even say so in the article: if something is wrong, throw away the state and re-run the events. That's necessarily different behaviour. To meet this requirement, you actually have to audit every enquiry and record what you reported the result as, including the various errors/miscalculations.
If the requirement is, "Show the balance as it should have been at that point in time", then it's fine.
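To make the distinction concrete, here's a toy event-sourced ledger sketch (all names hypothetical): replaying the log reconstructs what the balance *should* have been at any point, but says nothing about what any past enquiry actually returned.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Posted:
    amount: int  # signed: deposits positive, withdrawals negative

def balance_after(events, n):
    """Replay the first n events to rebuild the balance at that point."""
    return sum(e.amount for e in events[:n])

log = [Posted(100), Posted(-30), Posted(50)]
balance_after(log, 2)  # 70: the balance as it should have been after two events
```

If a bug once made an enquiry report 75 instead of 70, replay will never reproduce that 75; only an audit log of actual responses could.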
I suspect there is a bit of knee-jerk because so often this pattern is misapplied. I actually quite like the example in the article although I'm basically allergic to CQRS in general.
I think your point about write-ahead logging etc is a good one. If you need a decent transactional system, you're probably using a system with some kind of WAL. If you're event sourcing and putting events into something which already implements a WAL, you need to give your head a wobble - why is the same thing being implemented twice? There can be great reasons, but I've seen (a few times) people using a perfectly fine transactional DB of some kind to implement an event store, effectively throwing away all the guarantees of the system underneath.
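As a concrete illustration of the "same thing implemented twice" point, here's a minimal sketch using SQLite (schema invented for illustration): the moment you store events in a table like this, the database's own write-ahead log is already providing the durability and atomicity your hand-rolled event log is trying to provide.

```python
import os
import sqlite3
import tempfile

# A file-backed DB so WAL mode actually applies (it's unavailable for :memory:).
path = os.path.join(tempfile.mkdtemp(), "store.db")
conn = sqlite3.connect(path)

# SQLite keeps its own write-ahead log for crash safety...
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]  # 'wal'

# ...and yet the events table is itself a log: the same idea, twice over.
conn.execute("CREATE TABLE events (seq INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO events (payload) VALUES ('deposit:100')")
conn.commit()
```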
I don't want to sound too dismissive, but all these arguments have been brought up time and again. The move from assembler to high level languages. The introduction of OOP. Component architecture / COM / CORBA / etc. The development of the web browser. The introduction of Java.
2018 isn't "the start of the decline", it's just another data point on a line that leads from, y'know, Elite 8-bit on a single tape in a few Kb through to MS Flight Simulator 2020 on a suite of several DVDs. If you plot the line it's probably still curving up and I'm not clear at which point (if ever) it would start bending the other way.
That would be the case under market conditions where buyers are making rational decisions with perfect knowledge based on all available choices. Does that sound like the system we have? To me, reality seems more like a small set of oligopolies or effective monopolies, byzantine ownership structures and a pursuit of short term profits pushing future costs elsewhere as externalities.
I didn't say we get the quality of software people would rationally pay for in a rational system, if the right people were paying for it. I said we get the quality of software that people pay for.
To me, in markets where the customer actually gets to choose what to buy or play, the weaker options have much less success. Gaming is one example. There are still sales, but they are a lot lower than expected, even for big players, if the products don't look good.
This. There are plenty of people trying to keep using Windows 10, and Microsoft is trying to force them to use Windows 11, which they do not want. The same goes for Mac OS 26. "Choice" doesn't matter.
One interesting thing that most non-systems programmers don’t know is that memory and CPU performance have improved at completely different rates. That’s a large part of why we have CPUs that are many times faster, but software is still slow.
The systems people worry more about memory usage for this reason, and prefer manual memory management.
> ... memory and cpu performance have improved at completely different rates.
This is overly simplified. To a first approximation, bandwidth has kept pace with CPU performance, while main memory latency is basically unchanged. My 1985 Amiga had 125ns main-memory latency, though the processor itself saw 250ns latency - current main memory latencies are in the 50-100ns range. Caches are what 'fix' this discrepancy.
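A back-of-envelope calculation shows why caches have to do the fixing (clock speeds and latencies illustrative, per the figures above):

```python
def stall_cycles(clock_ghz, latency_ns):
    """CPU cycles spent waiting on a single main-memory access."""
    return clock_ghz * latency_ns

stall_cycles(0.00709, 250)  # ~1.8 cycles on a ~7 MHz Amiga-era 68000
stall_cycles(3.0, 100)      # ~300 cycles on a modern 3 GHz core
```

Roughly constant latency in nanoseconds turns into hundreds of lost cycles as clocks rise, which is the gap the cache hierarchy exists to hide.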
You would need to clarify how manual memory management relates to this... (cache placement/control? copying GCs causing caching issues? something else?)
Moore's Law has been dead for a long time. The doubling rate of transistors is now drastically below Moore's prediction.
We're adding transistors at ~18%/year. That's waaaaay below the ~41% needed to sustain Moore's law.
Even the "soft" version of Moore's law (a description of silicon performance vs. literally counting transistors) hasn't held up. We are absolutely not doubling performance every 24 months at this point.
Moore's law has kind of ended already though, and maybe has done for a few years, and even if you can make a chip which is faster there's a basic thermodynamics problem running it at full tilt for any meaningful period of time. I would have expected that to have impacted software development, and I don't think it particularly has, and there's also no obvious gain in e.g. compilers or other optimization which would have countered the effect.
Probably architecture changes (x86 carries a lot of historical baggage that complicates newer designs) and also more specialized hardware in the CPU. This might also be one of the reasons Apple went this way with its M-series silicon.
But the machines aren't really "faster" in clock speed— for a long time now the gains have been in better and more local caching + parallelism at both the core and instruction level.
I think another part of this is that tech is perhaps the only industry that hasn't quite gotten over itself yet.
Writing code is artistic the same way plumbing is artistic.
Writing code is artistic the same way home wiring is artistic.
Writing code is artistic the same way HVAC is artistic.
Which is to say, yes, there is satisfaction to be had, but companies don't care as long as it gets the job done without too many long-term problems, and never will care beyond that. What we call tech debt, an electrician calls aluminum wiring. What we call tech debt, a plumber calls lead solder joints. And I strongly suspect that one day, when the dust settles on how to do things correctly (just like it did for electricity, plumbing, flying, haircutting, and every other trade eventually), we will become a licensed field. Every industry has had that wild experimentation phase in the beginning, and has had that phase end.
Writing code is artistic the same way writing text is.
Whether that is a function call, an ad, a screen script, a newspaper article, or a chapter in a paperback the writer has to know what one wants to communicate, who the audience/users will be, the flow of the text, and how understandable it will be.
Most professionally engaged writers get paid for their output, but many more simply write because they want to, and it gives them pleasure. While I'm sure the jobs can be both monetarily and intellectually rewarding, I have yet to see people who do plumbing or electrical work for fun?
> Writing code is artistic the same way home wiring is artistic.
Instead of home wiring, consider network wiring. We've all seen the examples of datacenter network wiring, with 'the good' being neat, labeled and easy to work with and 'the bad' being total chaos of wires, tangled, no labels, impossible to work with.
I.e. the people using the datacenter don't care as long as the packets flow. But the others working on the network cabling care about it A LOT. The artistry of it is for the other engineers, only indirectly for the customers.
> companies don't care as long as it gets the job done without too many long-term problems
Companies don't care as long as it gets the job done without too many VERY SHORT TERM problems. Long term problems are for next quarter, no reason to worry about them.
And they somewhat have a point. What's the point of code quality, if it delays your startup 6 months, and the startup goes under? What's the point of code quality, if it will be replaced with the newest design or architecture change in 6 months? What's the point of planning for 5 years if a pandemic or supply chain shock could muck it up? What's the point of enforcing beautiful jQuery code... in 2012?
The problem isn't that companies make these tradeoffs. It's that we pretend we're not in the same boat as every other trade that deals with 'good enough' solutions under real-world constraints. We're not artists, we're tradesmen in 1920 arguing about the best home wiring practices. Imagine what it would be like if they were getting artistic about their beautiful tube-and-knob installations and the best way to color-code a fusebox; that's us.
What in the bad rhetoric is this? The trades did and still do have standards.
Hell there was a whole TikTok cycle where people learned there is a right and wrong way to lay tile/grout. One way looks fine until it breaks, the other lasts lifetimes.
It’s the exact same trend as in software: big, shoddy home builders hire crap tradespeople to build cheap slop houses for suckers, houses that require extensive ongoing maintenance. Meanwhile there are good builders and contractors that build durable quality for discerning customers.
The problem is exploitation of information asymmetries in the buyer market.
Yes, they do; after regulation, and after the experimentation phase was forcibly ended. You can identify 'right and wrong' tile work, precisely because those standards were codified. This only reinforces my point: we're pre-standardization, they're post-standardization, and most pre-standardization ideas never work out anyway.
For a startup good quality code will never make a difference if everything else is wrong, i.e. product market fit etc. But conversely poor quality code can destroy a startup (the product cannot pivot fast enough, feature development grinds to a halt, developers leave, customers are unsatisfied etc.) even if everything else is right.
I don't see working for most of my employers as "artistic."
I do see it as more of a craft than a typical trade. There are just too many ways to do things to compare it to e.g. an electrician. Our industry does not have (for better or for worse) a "code" like the building trades or even any mandated way to do things, and any attempts to impose (cough cough Ada, etc.) that have been met with outright defiance and contempt in fact.
When I'm working on my own projects -- it's a mix of both. It's a more creative endeavour.
> I do see it as more of a craft than a typical trade. There are just too many ways to do things to compare it to e.g. an electrician.
There are sooo many ways to get electricity from one point to another. The reason that a lot of those options are no longer used is not because they don't exist but because they were legislated out. For example, if you want to run wild just run a single "hot" wire to all your outlets and connect each outlet's neutral to the nearest copper plumbing. Totally esoteric, but it would deliver electricity to appliances just fine. Safety is another matter.
- Electricians in the 1920s? Infinite ways to do things. DC vs AC wars. Knob-and-tube vs conduit vs armored cable. Every electrician had their own "creative" approach to grounding. Regional variations, personal styles, competing philosophies. Almost all of those other ways are gone now. Early attempts to impose codes on electricians and electrical devices were disasters.
- Plumbers in the 1920s? Lead vs iron vs clay pipes. Every plumber had their own joint compound recipe. Creative interpretations of venting. Artistic trap designs. Now? Why does even installing a basic pipe require a license? We found out after enough cholera outbreaks, methane explosions, and backed-up city sewer systems.
- Doctors in the 1920s? Bloodletting, mercury treatments, lobotomies, and their own "creative" surgical techniques. They violently resisted the American Medical Association, licensing requirements, and standardized practices. The guy who suggested handwashing was literally driven insane by his colleagues.
We're early, not special. And just like society eventually had enough of amateur electricians, plumbers, and doctors in the 1920s, they'll have enough of us too. Give it 40 years, and they'll look at our data breaches and system designs the same way we look at exposed electrical wiring, obviously insane no matter the amount of warnings.
While I agree with the general point of treating coding as any other craft or trade skill, I disagree that in 40 years non-technical people will be able to evaluate system design or data breaches. Programming is too arcane and esoteric for non-technical people. It all happens too far behind the scenes for people to connect the dots.
I always say that code quality should be a requirement like any other. Many businesses are fine with rough edges and cut corners if it means things are sort of working today rather than being perfect tomorrow. Other businesses have a lower tolerance for failure and risk.
If you haven't noticed a dramatic decline in average software quality, you're not paying attention or willfully ignoring it. The article is right.
This is partly related to the explosion of new developers entering the industry, coupled with the classic "move fast and break things" mentality, and further exacerbated by the current "AI" wave. Junior developers don't have a clear path at becoming senior developers anymore. Most of them will overly rely on "AI" tools due to market pressure to deliver, stunting their growth. They will never learn how to troubleshoot, fix, and avoid introducing issues in the first place. They will never gain insight, instincts, understanding, and experience, beyond what is acquired by running "AI" tools in a loop. Of course, some will use these tools for actually learning and becoming better developers, but I reckon that most won't.
So the downward trend in quality will only continue, until the public is so dissatisfied with the state of the industry that it causes another crash similar to the one in 1983. This might happen at the same time as the "AI" bubble pop, or they might be separate events.
I suppose it could be quantified by the amount of financial damage to businesses. We can start with high-profile incidents like the CrowdStrike one that we actually know about.
But I'm merely speaking as a user. Bugs are a daily occurrence in operating systems, games, web sites, and, increasingly, "smart" appliances. This is also more noticeable since software is everywhere these days compared to a decade or two ago, but based on averages alone, there's far more buggy software out there than robust and stable software.
Maybe. Personally I've observed an increase of major system and security failures in the past 5 years, especially failures that impact very large tech companies. You could measure these public failures and see if frequency or impact has increased.
You make a strong point, but now we also have smartphones, IoT devices and cloud networks EVERYWHERE, and there is tons of shared open source code (supply chain attacks), and there are tons of open-source attacker tools, vuln databases and exploits (see nuclei on GitHub).
Yes, many/most systems now offer some form of authentication, and many offer MFA, but look at the recent Redis vulns -- yet there are thousands of Redis instances vulnerable to RCE just sitting on the public internet right now.
Eh, after 20 years in the industry, I think that the overall quality of software is roughly the same. As a matter of fact, my first job had by far the worst codebase I ever worked in. A masterclass in bad practices.
I blame software updates. That's when software went from generally working on release to not at all.
Agile management methods set up a non-existent release method called "waterfall" as a straw man, where software isn't released until it works, practically eliminating technical debt. I'm hoping someone fleshes it out into a real management method. I'm not convinced this wasn't the plan in the first place, considering that the author of Cunningham's Law ("The best way to get the right answer on the Internet is not to ask a question; it's to post the wrong answer.") was a co-signer of the Agile Manifesto.
It'll take a lot of work at first, especially considering how industry-wide the technical debt is (see also: https://xkcd.com/2030/), but once done, having release-it-and-forget-it quality software would be a game changer.
> a non-existent release method called "waterfall" as a straw man
The person that invented the name never saw it, but waterfall development is extremely common and the dominant way large companies outsource software development even today.
The only thing that changed now is that now those companies track the implementation of the waterfall requirements in scrum ceremonies. And yes, a few more places actually adopted agile.
From a PM point of view waterfall and agile can both have clear end dates. The main difference is in how they approach scope and quality to meet that deadline.
Every time someone says this I ask them “what is your solution for maintainable software architecture?” And they say “what is software architecture? I just write code”
I’ll bite: use objects sparingly, and mainly to namespace functions that operate on data. Use inheritance even more sparingly, because it’s a nightmare to work with a poorly conceived inheritance hierarchy, and they’re so easy to get wrong. Pure interfaces are an exception, in languages where you need them. Write mostly functions that transform data. Push IO to the edge where it’s easy to swap out.
Most importantly, never ever abstract over I/O. Those are the abstractions that leak out and cause havoc.
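A minimal sketch of that shape (names invented for illustration): pure functions transforming plain data, with IO passed in at the edge rather than abstracted away inside.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    item: str
    qty: int
    unit_price: float

# Pure transformations over data: trivially testable, no hidden IO.
def order_total(order):
    return order.qty * order.unit_price

def apply_discount(amount, rate):
    return amount * (1 - rate)

# IO stays at the edge; swap `write` out for a fake in tests.
def report(order, write=print):
    write(f"total: {apply_discount(order_total(order), 0.10):.2f}")
```

Because the IO boundary is an explicit parameter rather than an abstraction layer, there is nothing to leak: tests inject a collector, production passes `print` (or a logger).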
One salient difference is that typically abstraction layers trade performance (usually less than polemicists like the article author think) for improvements in developer efficiency, safety, generality, and iteration speed.
Current tools seem to get us worse results on bug counts, safety, and by some measures even developer efficiency.
Maybe we'll end up incorporating these tools the same way we did during previous cycles of tool adoption, but it's a difference worth noting.
The trustee's reports on FTX's internal processes were damning. Even if they had held their Anthropic stake on the way up, who's to say their internal FTT ledger and the black holes in the Alameda books would not have eclipsed that?
The issue wasn't that crypto markets in general were down at that point; the issue was they were doing frauds.
I think the implication is that fraudsters rarely get busted when they are making everyone money, only when things are looking bad. Eventually it catches up with them, though.
Sure, but would we really want to tell liquidators to manage assets for best eventual return rather than just convert everything to cash? In this instance, in hindsight, sure - you'd want the other thing, you want the bitcoin not the cash. But this feels like the exception that proves the rule.
The reason is simple: it allows you to do the install using "sync" in all cases, whether the lockfile exists or not.
Where the lockfile doesn't exist, it creates it from whatever current is, and the lockfile then gets thrown away later. So it's equivalent to what you're saying, it just avoids having two completely separate install paths. I think it's the correct approach.
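That single install path can be sketched as follows (hypothetical helper, not any particular package manager's API):

```python
import json
import os

def sync_install(lock_path, current_deps):
    """One code path: materialize a lockfile if absent, then install from it."""
    if not os.path.exists(lock_path):
        # No lockfile yet: freeze whatever "current" resolves to right now.
        with open(lock_path, "w") as f:
            json.dump(current_deps, f)
    with open(lock_path) as f:
        locked = json.load(f)
    # A real tool would now install exactly these pinned versions.
    return locked
```

Whether the lockfile existed beforehand or not, the install itself always reads from it, so there is only one path to maintain and test.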
It's the name at the top. This particular author has been active with LLM posts at least since the popularity explosion of ChatGPT and all of their posts on that topic seem to be well-informed (and they are otherwise community-famous for co-authoring Django). To your point, the content is only as special as the author's reputation makes it, which will be different from reader to reader.
His posts on AI are often very insightful and, unusually for someone so involved in AI, he's not connected to any of the big AI companies. Therefore he is impartial.
Agreed. "Dessert" vs "desert" - mistaking these two is often not a grammatical error (they're both nouns), but is a spelling error (they have quite different meanings, and the person who wrote the word simply spelled it wrongly).
I agree, but this is definitely the kind of spelling error (along with complementary/complimentary, discrete/discreet, etc.) that we normally don't expect our spellcheckers to catch.
The Judges impose the punishment set out in the law; they don't make this stuff up.
The alternative is Judges letting people off just because they're politicians. That seems like an extremely poor precedent to set; those in political life should be held to higher standards.
I didn't say letting someone off; a crime has been committed and the person should be punished. But forbidding someone from running for office? I don't think that should be within the power of the judiciary. That should be the power and responsibility of the electorate (to not vote for them).
You do understand that this is explicitly mandated by the law and only in special cases can this be lifted (and here the judge mentioned the lack of remorse or admission from the defendant was a deciding factor for this)?
Here is a reference for that: https://www.vie-publique.fr/questions-reponses/297965-inelig...
>> Trump should be allowed to run for a 3rd term right?
From the 22nd Amendment:
"No person shall be elected to the office of the President more than twice, and no person who has held the office of President, or acted as President, for more than two years of a term to which some other person was elected President shall be elected to the office of the President more than once."
Trump might not be able to "be elected to the office of the President" again, but he could run as a temporary Vice President and then the President could resign, allowing Trump to serve another term, for example.
Of course the 12th Amendment says, "no person constitutionally ineligible to the office of President shall be eligible to that of Vice-President of the United States", but the 22nd Amendment doesn't say a two-term President is ineligible to the office of President, it says he can't "be elected to the office of President".
The Supreme Court recently decided that a law prohibiting false statements did not prohibit misleading statements. If the legislators had wanted to prohibit misleading statements, they would have prohibited false and misleading statements, not just false ones. Words matter to them.
And there are many other possibilities for creative types.
As far as the French eligibility rules go, would you be comfortable with a system where anyone who Trump's DOJ can get a conviction on is ineligible to run for office, with no right of appeal on that holding? That would be a really terrible incentive.
Not only Trump. Without the rules, Musk or Putin could run as well, the latter even work-from-home style. Also, if justice being blind is so bad before an election, why not after? Figuring out who won shouldn't involve any courts either. The public will just need to figure out who really won for themselves!