I think the reason there's so much technical debt is largely because the amount it would cost to actually build quality software... is too high. We could not afford it. Like, as a society. Our society is built on crappy software.
I think it's just a utopian fantasy to think that if only the right HyperCard-like tool could be created, then the cost of building quality software would go down.
Or at any rate: let's agree that the web is built on an enormous stack of kludges upon kludges. (These kludges are both in the code frameworks people use to build things on the web and in the fundamental protocols of the web itself.) The reason it is this way is, again, cost: by the time the scale of the problem is recognized, it is simply too expensive to rebuild the web from scratch. We can't afford it.
To build this utopian HyperCard-like stack, one that would let just about anyone build web things, that would be so high-quality it just worked without users having to understand the layers it abstracts over, and that would stay maintained as web technology and desires continue to evolve, etc., would be such an expensive undertaking, with such a high risk of failure, that it has no way to succeed in that fantasy of making the web cheaper all around.
We see posts like this come up here from time to time, written by non-programmers who have some kind of belief that programmers _like_ complexity, that programmers are _opposed_ to making things easy and simple. I totally don't see "modern programmer culture fetish[izing] complexity" -- rather, on HN, I think it's pretty clear that modern programmer culture fetishizes simplicity. It's just that simplicity is _hard_. (And people chasing simplicity often end up over-abstracting, and just winding up with an even worse form of complexity). Successful software that is powerful and reliable and simple takes skill and it takes time. And skill and time cost money.
We've built an economy and a society that is entirely based on crappy software, because the economy could not bear the cost of as much quality software as we have crappy software, and the crappy software provides short-term efficiencies that make people money. (And I'm not talking about programmers; I'm talking about the 'domain' businesses which could not afford to run without software 'automation' anymore, even though it's all crappy.)
(1) You actually do meet a developer from time to time who fetishizes complexity. More frequently, you'll find developers and managers who'll fight any attempt to reduce surplus complexity.
(2) I don't think the root cause of crappy software is the cost of quality. Quoth Phil Crosby: quality is free; it's the screw-ups that are expensive.
Nobody has suggested that the federal and state Obamacare sites failed because too little was spent on them. The state-by-state way it was done turned the rollout into a laboratory of software development.
It was certainly possible to make an Obamacare site that works. New York had a rough first week, but at the beginning of Week 2 I had no trouble signing my mother-in-law up. Some states never processed a single application online.
The trouble wasn't that "quality is expensive" but rather incompetence in management, procurement, etc.
I think it's important to distinguish up-front cost versus long-term cost. In more than just this discussion.
For example: plenty of people in the Bay Area would, long-term, find it cost-advantageous to own rather than rent -- if they could get together a 20% down payment. But they can't. So it's kind of irrelevant whether they'd save money long term.
The same principle can apply to software. Sure, you'd save money long-term if you adhered to extremely high quality standards. But you wouldn't release this month -- and you need to release this month for your company to stay afloat.
Figuring out when the short term cost is worth the long term savings is a great deal of the art of software product strategy. And I don't think we should just categorically sweep all such decisions -- even all such wrong decisions -- into the catch-all of "incompetence."
Technical debt strangles many products before they even get to market.
The status quo of software development is that management won't face the facts of what software will cost, so they chronically underestimate what an efficient software development effort would cost by a factor of two or three.
Instead of laying out a realistic plan that will succeed, they embark on a hopeful plan that will certainly NOT succeed, and you end up with a 2/3 chance of failure and, if there is success, it costs a lot more than efficient development.
Screwing around leads to going in circles, not to delivering a product in the next month. If software managers focused on compressing the standard deviation of the schedule, they'd come very close to least-cost development, because screwing up is incredibly expensive.
When we want to stigmatize people who plan too heavily for the future, we call it "overengineering." When we want to stigmatize people who plan not enough for the future, we do whatever you're doing above. The line between those two failure modes is relatively narrow and not at all obvious. There aren't simple heuristics that will infallibly put us onto the line, and acting like this is all black and white doesn't help anyone.
Even "line" is probably an oversimplification. Some projects probably have a large region, some a narrow line, and some might non-obviously have no such path.
>>We see posts like this come up here from time to time, written by non-programmers who have some kind of belief that programmers _like_ complexity, that programmers are _opposed_ to making things easy and simple. I totally don't see "modern programmer culture fetish[izing] complexity" -- rather, on HN, I think it's pretty clear that modern programmer culture fetishizes simplicity.
Programmers are people. And like most people, they are resistant to any change that will devalue their well-paying jobs and endanger their relatively luxurious lifestyle.
Let's say that you're a developer who makes pretty decent money writing CRUD applications. A new tool comes out that automates the process and it becomes very popular. What will be your first reaction? Are you going to say, "wow, this is such a cool thing, I'm going to tell all my friends and clients about it and even start contributing to it on GitHub"? Or will you have a knee-jerk reaction, based on fear, and criticize the hell out of it?
The software itself is usually no more complicated than it needs to be; the issue is that the things we want to do with the software are themselves very complicated. If you tried to do anything remotely complicated in HyperCard, you pretty quickly ended up with something approaching the complexity of a modern application.
There is also the idea of "default" versus "custom" and how the definition of the two can change over time as expectations of the level of complexity built into the default change. Where AJAX form autocomplete was once a nifty "custom" feature, it has effectively evolved to become the default way to capture input. But not everywhere.
Things are complicated, and the best way to do things changes all the time. And not just from a technology standpoint. So we build flexible solutions that can be extended and evolved over time to adapt to those changes, which really just adds complexity in the end. But the complexity is worth it, because nothing is ever really "done".
> The software itself is usually no more complicated than it needs to be; the issue is that the things we want to do with the software are themselves very complicated.
Ok, take these requirements: I want a web app that counts the number of times users click a button. Users should be able to see the number of times they clicked and I should be able to see a top 10 of the highest click counts.
To do this I must know HTML, some general purpose server language, how to configure a web server (be it directly or through a hosting account or some cloud thing), how to package/deploy/whatever to said server. I must have some database to store the clicks and use SQL or JSON or some specific API. Interacting with the database from the general-purpose language is going to require a library. I might have to download it and put it in the correct place or use a package manager. If I want the interface to update immediately (like an old-fashioned app would) I also have to use JavaScript. If I want to control the position of things on the screen, fonts, colours, whatever I will also need CSS.
I understand how we arrived at this state of affairs, but claiming that it couldn't be simpler is just Stockholm syndrome.
Actually, from your basic set of requirements, you just feature-creeped your design to death.
Let me take a shot at it:
Learn enough HTML to make a GET request. Know enough PHP to receive the GET request and then update a text file of entries on disk. Use a second file to store the top ten clicks. Return the second text file.
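For what it's worth, here is a sketch of that approach as runnable code. I've used Python's standard library instead of the PHP the comment describes, so the example is self-contained; the file name, port, and query parameter are illustrative assumptions, not anything prescribed above:

    import json
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs, urlparse

    COUNTS_FILE = "clicks.json"  # the "text file of entries on disk"

    class ClickHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            url = urlparse(self.path)
            if url.path != "/click":           # ignore favicon requests etc.
                self.send_response(404)
                self.end_headers()
                return
            user = parse_qs(url.query).get("user", ["anonymous"])[0]
            counts = {}
            if os.path.exists(COUNTS_FILE):
                with open(COUNTS_FILE) as f:
                    counts = json.load(f)
            counts[user] = counts.get(user, 0) + 1
            with open(COUNTS_FILE, "w") as f:  # persist every click
                json.dump(counts, f)
            top10 = sorted(counts.items(), key=lambda kv: -kv[1])[:10]
            body = "<p>You have clicked %d times.</p><ol>%s</ol>" % (
                counts[user],
                "".join("<li>%s: %d</li>" % kv for kv in top10))
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body.encode())

    if __name__ == "__main__":
        HTTPServer(("", 8000), ClickHandler).serve_forever()

Run it and hit http://localhost:8000/click?user=alice in a browser: no database, no JavaScript, no CSS.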
That's it. In your example, you did what is generally expected of today's "web trends": you take a super simple use case and demand it be highly scalable for millions of users with instant and immediate feedback. And why are we using CSS at all? It's a button and some text; no styling is needed. And why are we using a database? Do you expect millions of concurrent users? Hundreds? Your requirements didn't say that. What do you mean package/deploy/whatever to the server? Sure, there are some basic routing needs and maybe Apache, but those take minutes or less to set up. Also, right in the middle of your solution, you changed the requirements ("If I want the interface to update immediately..."); right there, you are adding complexity.
While I understand what you are trying to say, I have to point out that you, not the technologies involved, are the primary cause of the increase in complexity. I actually think deploying a simple counter website like this is easy. But as soon as you want immediate feedback? Alright, more complexity. Millions of stored records? Alright, maybe some large memory cache, like Memcache (or a large array). Persistent records? Alright, fine, get a DB. Millions of concurrent users? Alright, we are going to need some more complexity to handle throttling. Thousands of requests per second? Even more complexity; maybe we need a distributed system.
In the end, you took a simple problem and turned it into an awfully complex one. Yes, designing an application for that kind of load is complex, because it is actually a complex task. Doing all the things we want to do today is hard because there isn't some turnkey solution, not because we are working with tools that are too complex.
As an unfair little poke at your solution: there are in fact turnkey solutions, like Yahoo webhosting, where you just design the really high-level basics and it does the rest.
You're not properly identifying your requirements then. If we were to break down your "requirement" into user stories, I count the following user stories:
1. As a user, I want to access this application through my web browser.
2. As a user, I want to know how many times I have clicked the button.
3. As a user, I want to know how many times the top 10 users have clicked the button.
4. As a user, I want the interface to update immediately when I click the button.
5. As a designer, I want the ability to easily change fonts, colors and layouts.
6. As a product owner, I want the ability to push updates to my users automatically.
Your technical requirements all roll up to these user stories. If you wanted to do this as an iOS app, it would be pretty trivial: you could almost build the whole thing in InterfaceBuilder. But the web browser is an abstraction layer we've built because it carries with it certain architectural advantages.
The web browser makes simple requirements much more difficult, I will grant you that. But it makes other requirements much simpler: rather than having to provide a mechanism by which to upgrade users' compiled applications when I want to add a red button and a blue button, I just push the changes out to the server and every user sees both the red and blue buttons. I also no longer have to write network code to connect to a server: my web browser does that. When is the last time anyone wrote a network stack for an application? Everything is REST services and JSON now.
Yes, writing web applications is very complex. But that complexity allows us to do things that were very, very difficult only a decade ago. The cost of being able to do hard things easily is that trivial things are somewhat less trivial to do than they would be in other environments.
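To make the "no network code" point concrete: the client half of a REST-and-JSON exchange is now a few lines in any mainstream language. A Python sketch (the endpoint URL here is made up):

    import json
    from urllib.request import urlopen

    # urlopen performs the whole network exchange (sockets, HTTP, headers)
    # that application authors once wrote by hand; json.load parses the body.
    with urlopen("https://api.example.com/top10") as resp:
        for entry in json.load(resp):
            print(entry)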
This is almost exactly Meteor's "Leaderboard" example [https://www.meteor.com/examples/leaderboard]. Not much code goes into that, and I think it's pretty approachable for a non-programmer.
This is an interesting characterization of Jonathan Edwards... did you not do research on the author before writing this, or are you really claiming he's a "non-programmer"?
First page of Google for Jonathan Edwards produces an 18th-century philosopher and a singer. Adding "Jonathan Edwards programming" produces Subtext, which has a UI straight out of 1994 and no releases. So... he's a not-terribly-well-known academic yawning about how programming needs to be more academic?
I'm not claiming he's a Super Well Known Guy, just that it's trivially easy to figure out that the non-programmer ad hom is inaccurate.
> First page of google for Jonathan Edwards produces a 17th century philosopher and a singer.
Because all real programmers are on the first page of Google when you search for their name.
It's also worth noting the historical Jonathan Edwards is a pretty important figure in American history. The First Great Awakening set the tone for American religion; it's standard material in any high school history class. And if there's one person you teach about from that period, it's Edwards. In fact, I would be somewhat surprised if most Americans didn't recognize the name. So being out-ranked by him isn't exactly unexpected.
Right. So even if you haven't heard the name before, some very simple google searching turns up the fact that he isn't a non-programmer.
And even without that, you could flip through his prior blog posts and figure out that non-programmer isn't an accurate description.
> yawning about how programming needs to be more academic?
I mean, the article says basically the exact opposite of this?
> Honestly, I have no idea who he is.
Yeah, I don't know who most of the world's programmers are. So they must not be real programmers (well, unless googling their name turns up their github account? But self-hosted projects don't count!).
But in 10 seconds of Google you figured out that non-programmer probably isn't a great description. And in a few more you might've figured out he's a fellow at MIT's CSAIL, which isn't particularly well-known for hiring programming-illiterate people.
My point was that it's usually a good idea to actually research the author of a piece before firing off the ad homs.
Subtext looks interesting. Is it being developed in the open at all? Versions for download? The page looks like one of those shop windows covered in white putty to stop you looking in.
> some kind of belief that programmers _like_ complexity, that programmers are _opposed_ to making things easy and simple
Anecdotally, by far the worst spaghetti code I've ever seen was written by big-minded CS types shoehorning algos and metaprogramming quite unnecessarily. The newbie spaghetti I've seen has been magnitudes easier to refactor.
Pretty negative. Not everything that is intractable is 'crappy'. Sometimes it just hasn't anticipated how we're going to want to change it, or was built to order and not for expansion. Like a building or a road that you no longer want to use - nothing wrong with it, just no longer useful.
> I think the reason there's so much technical debt is largely because the amount it would cost to actually build quality software... is too high. We could not afford it. Like, as a society. Our society is built on crappy software.
I'm not sure that I agree. If by crappy you mean "not formally proven", then sure. Or if you consider floating point crappy, then we disagree on terms.
I think our industry is in a state where 98% of the code produced is just junk: unmaintainable, barely working, no future, career-killing garbage just waiting to fail at the worst time. This is tolerated because software victories are worth (or, at least, valued at) gigantic sums of money: billions of dollars in some cases.
I'm not sure how well we can "afford" it. Do we want to go through another 2000-2003? How much use is it to have massive numbers of people writing low-quality code, not because they're incapable but because they're managed specifically to produce shit code quickly in order to meet capriciously changing and often nonsensical "requirements" at high speed? I think it's great for building scam businesses that demo well and then fail horribly when code-quality issues finally become macroscopic business problems and eventually lead to investors losing faith. (Oh, and those failures are all going to happen around the same time.) I'm not sure that it's good for society to produce code this way. So much of the code out there is "totaled": it would cost more to fix or maintain it than to rewrite it from scratch. You can't (or shouldn't) build anything on that.
Floating point, as IEEE standard? Beautiful. Elegant. One of my favorite technical standards. Other than the +0/-0 thing, it's perfect.
Floating point, as implemented? Ugh. You've got processors which implement some subset of x87, MMX, SSE, SSE2, SSE4, and AVX, all of which handle floating point slightly differently. Different rounding modes, different precisions, different integer conversions. Calling conventions differ between x32 and x64. Using compiler flags alone on Linux x64, you can make 'printf("%g", 1.2);' print 0. Figuring out the intermediate precision of your computations takes a page-sized flowchart: http://randomascii.files.wordpress.com/2012/03/image6.png
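To make the +0/-0 gripe above concrete: the two zeros are observable from any language that exposes IEEE 754 doubles. A quick Python illustration:

    import math

    # IEEE 754 signed zeros compare equal...
    print(0.0 == -0.0)               # True
    # ...yet carry distinct signs, observable via copysign...
    print(math.copysign(1.0, 0.0))   # 1.0
    print(math.copysign(1.0, -0.0))  # -1.0
    # ...and some functions distinguish them.
    print(math.atan2(0.0, 0.0))      # 0.0
    print(math.atan2(0.0, -0.0))     # 3.141592653589793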
The "mess" reflects the fact that choices exist, that is, it is the result of the different goals of the producers of compilers or the processors, not of the mentioned standards. What's not standardized can vary.
Compared to the pre-IEEE754 state, the standard was a real success.
Re the article behind the picture you link (0): still, unless you're building games, and as long as you're compiling with VC, your results haven't changed in more than a decade and a half. New versions of the compilers took care to preserve the results. And even VC 6, released in 1998, luckily selected the intermediate-calculation settings that were most reasonable and matched the SSE2 hardware Intel introduced in 2001.
You say "So much of the code out there is "totaled": it would cost more to fix or maintain it than to rewrite it from scratch."
If that's the case, why does such code still exist? If it's still running, then in some sense someone is "maintaining" it, at least to the extent of keeping the server it resides in powered on. In other words, someone obviously finds it cheaper to keep such code running as-is than to rewrite it (or to do more ambitious maintenance on it).
Even crappy horrible buggy code can be useful (in a business sense, or a "makes its users happier than if it didn't exist" sense), as hard as it is for us as developers to admit it.
One example: I used to work for a company offering a security-related product with crippling, fundamental security problems. The flaws covered everything from improper use of cryptography to failure to validate external input, lack of proper authorization handling, and even "features" fundamentally at odds with any widely expected definition of security.
This company continues to survive, and has several large clients. But the liabilities of the current code base are massive. Worse, the clients aren't aware of the deep technical problems, nor is there any easy way for them to be. In a very real sense, this company is making some money in the short term (I don't believe they are profitable yet) by risking their clients' valuable data.
In general, the concern by the grandparent is that there are projects out there that are producing some revenue, but are essentially zombies. Every incremental feature adds more and more cost, but there's no cost-effective way to remove sprawling complexity. The project will die, taking along with it significant investor money.
Okay, you and I agree that most of the code produced is junk (not everyone in this thread does, I think!).
I agree that the junky code is going to bite us eventually.
But what do you think it would take to change things so most of the code produced is not junk? Would it take more programmer hours? More highly skilled programmers? Whatever it would take... would it cost more? A lot more? A lot lot more? I think it would. And I think if this is so, it's got to be taken account in talking about why most code produced is crap.
I do not think it's because most programmers just aren't trying hard enough, or don't know that it's junk. I think it's because most places paying programmers do not give them enough time to produce quality (both in terms of time spent coding and time spent developing their skills). And if, say, 98% of code produced is junk, and it's junk because not enough programmer time was spent on it... that's a lot of extra programmer time needed, which is a lot of expense.
The utopian theory of the OP is that with the right tooling, it would not take any more time, or would even take less time, to develop quality software. I think it's a pipe dream.
>>I do not think it's because most programmers just aren't trying hard enough, or don't know that it's junk.
Actually, that's exactly the reason.
Back in 2003 I was a sophomore in college and I took an intro-level CS class. It was taught in Java. Back then we didn't have sites like Stack Overflow, so if you ran into issues during projects you had to find someone who could tell you what you were doing wrong. Often this person was the TA or the instructor, who had limited availability in the form of office hours. So it was super easy to get demotivated and give up -- which is indeed what made a lot of wannabe programmers (including me) switch majors.
Fast-forward ten years. We now have a plethora of resources you can use to teach yourself "programming." While this is good in the sense that more people are trying to enter the profession, it's not so good because when you teach yourself something complex like programming, it is often difficult to know whether you are learning the correct habits and skills. I've been learning Rails for the past five months and I spend a lot of time obsessing about whether the code I write is high quality, but that's only because I've been an engineer for six years and I'm well-aware of the risks of building something overly complex and unmaintainable. In contrast, most people build something, get it to work, and then call it a day. They don't go the extra distance and learn best practices. As a result, the code they produce is junk.
As long as the job of a programmer is to be a business subordinate, it will not change and we'll see crappy code forever.
Mainstream business culture conceives of management as a greater-than relationship. You're a lesser being than your boss, who's a lesser being than his boss, and so on... It is also inhospitable to the sorts of people who are best at technology itself. Finally, and relatedly, it conceives of "working for" someone not as (a) working toward that person's benefit, as in a true profession, but as (b) being on-call to be micromanaged. The result is that most programmers end up overmanaged, pigeonholed, disempowered, and disengaged. Shitty code results.
If you want to fix code, you have to fix the work environment for programmers. Open allocation is a big step in the right direction, and technical decisions should be made by technical people. Ultimately, we have to stop thinking of "working for" someone as subordination and, instead, as working toward that person's benefit. Otherwise, of course we're going to get shitty code as people desperately scramble (a) up the ladder, or (b) into a comfortable hiding place.
"As long as the job of a programmer is to be a business subordinate, it will not change and we'll see crappy code forever."
Well of course that's the job of the programmer. The programmer is supposed to build something that does something useful. Most of the time, the primary value of the code isn't that it's GOOD, it's that it DOES THE THING. Oh, sure, at the level of (say) the Linux kernel you can almost think of it as code for the sake of code, but you walk back up the chain and you'll find a lot of people contributing indirectly because they want to do THINGS and they find that they need a kernel for those things.
But most programmers aren't at that far of a remove from doing things, they work directly for a company engaged in doing something other than selling code. Management at that company wants things done. They insist upon this at a very high level of abstraction, that of "telling you to do the thing for them." You are a leaky abstraction.
There are programmers who, without direct day-to-day management, produce code that is valuable to the business, and programmers who receive comprehensive managerial attention and produce code that costs the business.