The bigger problem is a failing of the ecosystem at large: there's no agency with real teeth policing the vendors, and no gold-seal certification that matters, leaving vendors free to do whatever gets the card out the door ahead of the competition. OpenGL's breadth as an API definitely doesn't help ("now that you've implemented direct-buffer rendering, let's go implement all that glVertex crap that lets you do the exact same thing, only slower! Your driver coders have infinite time, right?"). But I doubt it's the root cause of the frustration; the "hit the benchmarks and beat ATI out the door this Christmas" ecosystem is my biggest gripe.
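To make the glVertex complaint concrete, here's a minimal sketch of the redundancy (mine, not from the original post): the same triangle drawn through the legacy immediate-mode entry points and through a vertex array. Both assume an already-created GL context, and a conformant driver has to support both paths even though they do the same job.

```cpp
#include <GL/gl.h>

// One triangle's worth of 2D positions, shared by both paths.
static const GLfloat tri[] = { 0.0f, 1.0f,  -1.0f, -1.0f,  1.0f, -1.0f };

void draw_immediate()   // glVertex path: one driver call per vertex
{
    glBegin(GL_TRIANGLES);
    glVertex2f(tri[0], tri[1]);
    glVertex2f(tri[2], tri[3]);
    glVertex2f(tri[4], tri[5]);
    glEnd();
}

void draw_buffered()    // array path: one call submits the whole batch
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, tri);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```

Same pixels on screen either way; the immediate-mode version just burns a function call per vertex and is one more path the driver team has to implement, test, and keep fast.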
I've had to deal with cards that explicitly lie to the software about their capabilities, reporting support for a shader feature that's actually implemented in software, without acceleration (!!!). There's no way to tell from software that the driver is emulating the feature, short of enabling it and noticing your engine now runs in the seconds-per-frame range. So we blacklist the card from that feature set and move on, because that's what you do when you're a game engine developer.
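A minimal sketch of that probe-and-blacklist dance, assuming a current GL context; `draw_test_frame`, the 100 ms/frame threshold, and keying the blacklist on the GL_RENDERER string are all illustrative choices, not anything from a real engine:

```cpp
#include <GL/gl.h>
#include <chrono>
#include <cstring>
#include <set>
#include <string>

static std::set<std::string> g_blacklist;  // renderers barred from the feature

// Returns true only if the driver both claims the extension and can
// actually render with it at hardware speeds.
bool feature_usable(const char* extension, void (*draw_test_frame)())
{
    const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
    const char* exts     = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));

    // Already caught lying once? Don't ask again.
    if (renderer && g_blacklist.count(renderer))
        return false;

    // Step 1: the driver's own claim. A missing extension is an honest "no".
    if (!exts || !std::strstr(exts, extension))
        return false;

    // Step 2: trust, but verify. Time a few frames that exercise the
    // feature; glFinish() forces the driver to actually finish the work.
    const int kFrames = 5;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kFrames; ++i) {
        draw_test_frame();
        glFinish();
    }
    double ms = std::chrono::duration<double, std::milli>(
        std::chrono::steady_clock::now() - start).count();

    // Anything slower than ~100 ms/frame on a trivial scene means the
    // "accelerated" path is really a software fallback.
    if (ms / kFrames > 100.0) {
        g_blacklist.insert(renderer ? renderer : "unknown");
        return false;
    }
    return true;
}
```

The glFinish() matters: without it the driver can just queue the work and the timing tells you nothing. (The strstr() extension check is itself a simplification; careful code matches whole space-delimited tokens so "GL_foo" doesn't false-positive on "GL_foo_bar".)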