Why bother with optimisation these days? Who cares about making something fast when you can make it pretty? So what if everyone uses double precision floating point numbers? "Use assembly? But that's too hard." "So what if my CPU has an instruction that will let me multiply 32 integers at once? I want to use floats." "I want to do it in a browser."
At least from the business side of things, code efficiency matters only when it makes a difference in the bottom line. If you can get the solution cranked out in half (or less) the time by being careless with memory, algorithms, and selection of third-party functions, the result is essentially double (or more) the return on investment. In many cases with routine processing of business data, a programmer would have to try to make a program run noticeably slowly. Put another way, paying a coder is often more expensive than buying and running a faster CPU.
PS: I have written some of my industries' most CPU-efficient code on the market, but only because efficiency matters in my industries. On the other hand, I have been in situations where efficiency is pretty insignificant, such as when programming an installation procedure that the user runs only when installing or removing the software.
Yes, it's bad algorithms/bloat/poor code efficiency that are the immediate cause, but as Socrates pointed out re the Presocratic philosophers, describing only immediate causes can yield quite shallow explanations. The real question is why the bloat and awful code and algorithms? I'll add to my previous reply to that question, with an idea that occurred to me last night: one answer to prisoner's dilemmas/tragedies of the commons is to appoint a supervisor whose word is law, say when programs are running...
And what do you know, we already have one - it's called an OS. Someday, I expect OSes will be smart enough to examine the executables they have been given (note that this can be done in parallel) and reward tight code with more cycles while punishing sloppy code with lower priority (unless the user says otherwise, which she may). Our A.I. might well be good enough now to do at least crude reverse engineering or pattern recognition rapidly (it doesn't have to be real time, of course). This solves the prisoner's dilemma (which is also a freeloading problem) by punishing the freeloaders and rewarding the cooperators: good code is no longer slowed down as if it too were bad code, and most companies are forced to start tightening things up and to actually compete with each other to produce better code than the other outfits. (For webpages, either the browser or the OS could do the policing after examining the code and load of each page.)
I do believe that's a patentable idea. Or two. Pity I'm working on another patentable idea I like still more - but if someone wants to front me some real money, please do drop a comment here.
Some editing by reply follows, since the editor just ate a lot of work (perhaps my edit straddled the time limit after posting during which edits are allowed):
The identification of programs as well or badly coded can take place outside of the OS, and at a very different time and place, just as antivirus programs work. A bloat-identifying program can do the analysis, judge the programs, and provide identifying bit sequences or hashes (if necessary to prevent evasion) to the OS, which can then use that information to police who gets what priority.
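A minimal sketch of what that policing step could look like, assuming a hypothetical, offline-produced score file (efficiency_scores.json), a made-up score scale, and Linux-style niceness; none of this is an existing tool, just an illustration of hashes-in, priorities-out:

```python
# Illustrative only: a user-space "bloat cop" that reads efficiency scores
# produced by a separate (hypothetical) analysis program and renices running
# processes accordingly. File format, score scale, and thresholds are made up.
import hashlib
import json
import os

def sha256_of(path: str) -> str:
    """Hash the executable so its score survives renaming or moving."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def nice_for(score: float) -> int:
    """Map an efficiency score in [0, 1] to a Unix niceness value:
    tight code (score near 1) keeps priority, bloat gets deprioritised."""
    return 0 if score >= 0.8 else 10 if score >= 0.5 else 19

def police(pid: int, exe_path: str, scores: dict) -> None:
    score = scores.get(sha256_of(exe_path), 0.5)  # unknown code: neutral
    os.setpriority(os.PRIO_PROCESS, pid, nice_for(score))  # Unix only

if __name__ == "__main__":
    # Scores would come from the bloat-identifying analyser described above.
    with open("efficiency_scores.json") as f:
        scores = json.load(f)
    # Demo on this very process; /proc/self/exe is Linux-specific.
    police(os.getpid(), "/proc/self/exe", scores)
```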
Other efficiency evaluation, including of script behavior, could be done either in nearly real time, or over days or months, by the OS or another program feeding data to it.
Some of this analysis would be straightforward, looking for known bad patterns; other analyses might require AI, neural nets, or the sort of combination (AI flags, humans confirm/disconfirm) that makes Palantir hum.
Note that this doesn't just produce the immediate effect of shielding good programs from the sluggishness of bad ones; the competition it sets up becomes something of a marketplace for efficiency that keeps getting more demanding; a competition in which efficiency once again makes you look very good to your customers, as used to be the case when users could only run one program at a time. Back then, we knew who was feeding us garbage code and who wasn't. This invention lets us get back to something like that standard while keeping our shiny new hardware and multitasking, too.
Who knows, one edit at a time, I may have put a unicorn patent into the comments section of Hacker News as the idea occurred to me. Sliding down a slippery slope to publication.
The way I see it, it's essentially because of frameworks and engines. For example in games, I've seen simple isometric games with very little graphical intensity built with huge game engines like Unity or Unreal. 20 years ago, that same game would have been built almost directly in machine code and been just as beautiful (adjusting for screen resolution and quality from back then). Well, that's my theory, based largely on assumptions.
Software and the Web are probably the same story - I remember building very small websites from scratch, painstakingly writing each line of HTML and CSS myself to be as optimized as possible. Today, I'd probably have to start with a framework like Foundation or Bootstrap. Or maybe Semantic-UI, which would require Node.js to be installed, and gulp, and whatever else. That stack can help with fast development, but it's probably not as optimized as doing everything in Notepad++, right?
The company I work for just rebuilt our whole flagship product from scratch. It was previously in Delphi 6, and now it's in Java. Notwithstanding the prejudice against Java where speed is concerned, the reality is that the software now requires like 16MB of RAM and still takes more than a minute to load. And then it's slow as hell, both to use and to actually produce its output (it's print management and VDP).
Note that clock speed hasn't been zooming ahead recently, compared to the astonishing past, of which you can see a lovely illustration at:
http://www.dailytech.com/A+Supercomputer+on+Your+Wrist+Infog...
Instead we mostly get more cores, but there's no general solution to the Von Neumann bottleneck, so that's of limited help to the majority of software. A.I. is benefiting, though. Building a safe, self-driving car using one core would be rather difficult. So some software IS vastly improved.
And then there's Parkinson's Law: "work expands to fill the time allotted to it". Just substitute "sloppiness" for "work" and "bandwidth" or "CPU cycles" for "time". Programmers haven't become cheaper or less in demand, so "sloppy" (resource-heavy) code is way cheaper than tight-and-hard-to-debug-and-maintain code. If you must hire more programmer hours to analyze bottlenecks or use a lower-level language, you do. But only if you must. So homeostasis sets in (Parkinson's Law is all about effort-related homeostasis). "The hardware giveth and the software taketh away" because that's cheapest for competitive software businesses.
Part of that situation, too, is that the more the software/hardware is capable of, the more kinds of tasks get done with it; but that also means way more demand for programmers, so now you really can't waste their time "optimizing" everything. And as a result we have word processors that are actually slower at times now, than back when the monitor screens were green dots on black backgrounds. So that's homeostasis at work, too.
And then there's multitasking. There are dozens of pages and programs open right now on my desktop, so a tragedy of the commons is taking place within the humming innards of my PC; a prisoner's dilemma in which each program (and the firm that made it) can hope that the others will only sip resources, leaving it free to steal more cycles, download fatter ads in a browser tab, and use sloppier code. They can be wasteful because the user won't usually know which code is being piggy, and because they know the others will be piggy if they aren't. On the other hand, if they optimize like hell the customer probably won't notice (given the other piggy programs the customer is running in the background), so the software company has just thrown that money (a lot of money) away. More homeostasis.
You're right, there are complexities, and parallelism within each core, too - but again, some instructions will benefit from this a lot more than others (Von Neumann bottleneck, again).
"I can already tell you what's going to happen to all those extra cycles that faster hardware is going to give us in the next hundred years. They're nearly all going to be wasted."
When your algorithm is O(N^2) or worse, like O(N^3), the computer can only handle a small amount of data, and faster hardware will only allow your N to grow a little (N being the amount of data your algorithm is dealing with).
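A rough back-of-the-envelope illustration of that point (the speed-up factors below are toy numbers): if the hardware gets k times faster and the cost grows like N^e, then N can only grow by a factor of k^(1/e) before the run takes as long as before.

```python
# Toy illustration: how much N can grow when hardware gets `speedup` times
# faster, for algorithms of different complexity. Numbers are made up.
def max_growth(speedup: float, exponent: float) -> float:
    """If cost ~ N**exponent, a `speedup`-fold faster machine lets N grow by
    speedup ** (1 / exponent) at constant wall-clock time."""
    return speedup ** (1.0 / exponent)

for speedup in (10, 100, 1000):
    for exponent, name in ((1, "O(N)"), (2, "O(N^2)"), (3, "O(N^3)")):
        print(f"{speedup:>5}x faster hardware, {name:7}: "
              f"N can grow ~{max_growth(speedup, exponent):.1f}x")
```

So a thousandfold speed-up buys a thousandfold bigger N for a linear algorithm, but only about 10x for the O(N^3) one.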
Because it isn't optimised as much, mainly because hardware is so much faster and has greater capacity.
Back in the 90s you would still see big chunks of OSs and processor-intensive applications written in assembly. But as processors got more efficient and storage and RAM stopped being the limit, programs could achieve the same results in higher-level languages without much effort to optimise, so developers stopped putting as much effort into it.
Software runs slow because nobody cares any more.