Good article. Let me try to generalize his main point a little further:
"Premature" does not mean "early in the project" -- it means "before you have hard data to justify the optimization". The reason this guy's optimizations aren't premature is that he's implementing a game engine, a well-established class of program with well-known pitfalls. His hints are based on hard data -- data from earlier game engines, which he and his veteran colleagues have managed to acquire and digest.
In general, it's hard to use performance data from one software system to design optimizations for another -- the two projects must have similar platforms, design constraints, and architecture. But the programs this guy is talking about are all in the same family -- they're built in C++, constrained to run in real time and monitor interactions between hundreds of in-game objects, and descended from a long line of ancestors whose architecture has evolved step-by-step since the earliest days of game programming.
I've been thinking about how good observations can degenerate into anti-patterns in the folk wisdom of programmers, and this article hits one of them square on the head.
Pattern: "Adding more programmers to a late project makes it later."
Anti-Pattern: "Don't bother hiring more developers."
Pattern: "There is no silver bullet"
Anti-Pattern: "A more productive programming language won't solve any of your problems."
To this, I'll now add...
Pattern: "Premature optimization is the root of all evil"
Anti-Pattern: "Throw away your algorithms book."
That last one misrepresents the complaint about premature optimization. It isn't the algorithm choice that is premature, but, say, the choice between std::string and char*.
That difference doesn't really matter unless it sits in a bottleneck of the finished system. But the big-O of that bottleneck is determined by the algorithm choice, and it isn't premature to plan that before you start coding.
I peeked into the ANSI Common Lisp chapter on optimization, which I think starts out by saying "optimization starts on the algorithmic level". I couldn't agree more.
You can safely ignore this one if you are working on server software. Instead you really should focus entirely on functionality and ignore all considerations of performance until you actually have a problem. Be a dogmatic anti-premature-optimizer! You can always throw some hardware at the problem later.
If you are building something that will run on machines you don't control... well, then your problem is a bit different. If you are doing that and it's something like a game, which must "feel right", well... maybe you should make some decisions up front trading off productivity for runtime performance. But even in those extreme cases, you are probably better off fixing speed problems when they come up, not before.