For those who didn't use Nvidia on Linux in the old times:
The driver was a proprietary binary. Since a kernel module requires interfacing with the kernel API, it could be considered a derivative work and a breach of the GPL license. So, Nvidia provided a small open source shim which interfaced between the kernel and the proprietary module.
You had to compile that shim yourself with the right arcane command-line incantations, and if you did anything wrong, missed a package, or had an incompatible userspace (libs, compiler), you could end up without X11 and no easy way to run a browser and google the problem. And you had to do it EVERY FUCKING TIME YOU UPDATED THE KERNEL!
It was still possible to edit xorg.conf (or, if you're older, XF86Config) by hand to fall back to the VESA driver, but it was very inconvenient. It became more reliable over time and eventually fully automated with DKMS, but I hated them for it.
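For the curious, DKMS is what finally killed the manual rebuild: you describe the module once and DKMS rebuilds it on every kernel install. A hypothetical dkms.conf (module name, version, and paths are illustrative, not taken from any real driver package):

```shell
# Hypothetical dkms.conf, e.g. in /usr/src/nvidia-390.157/
# With AUTOINSTALL set, DKMS rebuilds the module automatically whenever a
# new kernel is installed, instead of you re-running the installer by hand.
PACKAGE_NAME="nvidia"
PACKAGE_VERSION="390.157"
MAKE[0]="make -C kernel KERNEL_UNAME=${kernelver}"
BUILT_MODULE_NAME[0]="nvidia"
DEST_MODULE_LOCATION[0]="/kernel/drivers/video"
AUTOINSTALL="yes"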
I used and recommended ATI and Intel to most people I could for a long time because of this.
I'm from a time when it was possible to get 3D acceleration on Linux with 3dfx using fully open source drivers (I think), giving you a cheap UNIX-like graphical workstation with OpenGL support. When Nvidia bought 3dfx and simply killed their drivers, my hate became especially strong.
EDIT: added the reminder that you had to recompile the shim at every kernel update, and replaced "module" with "driver".
> I used and recommended ATI and INTEL for most of the people I could for a long time because of this.
Same here, but recently I somehow got a 3700X and there's no integrated GPU, so I had to look for a GPU. I like my PC not just quiet but nearly silent, so a GPU with a fan was a big no-no. I couldn't find a single GPU able to drive 3840x1600 without a fan... except for an NVidia one. Of course the proprietary Linux drivers are somewhat buggy: sleep doesn't work correctly, failing to restore the correct video mode on wake. It's always, always, always the same with NVidia GPUs on Linux. Thankfully I can switch to tty2, then back to graphical mode, but I hate the inconvenience.
I'm thinking about selling my 3700X and getting a 12th gen Intel with an integrated GPU (I don't game and really couldn't care less about fast GPUs).
The suffix letters on AMD's Ryzen desktop parts:

X - Higher-clocked desktop processor (what you got)
G - Has integrated AMD Radeon Vega Graphics (what you probably want)
GE - Has integrated AMD Radeon Vega Graphics but lower TDP (what you might want in niche use cases)
For example, some of my homelab servers use the older AMD Athlon 200GE as their CPU, because it has on-board graphics and a 35W TDP, which both saves electricity and lets me use passive cooling (just a large heatsink: https://www.arctic.de/en/Alpine-AM4-Passive/ACALP00022A).
> For example, some of my homelab servers use the older AMD Athlon 200GE as their CPU, because it has on-board graphics and a 35W TDP, which both saves electricity and lets me use passive cooling (just a large heatsink: https://www.arctic.de/en/Alpine-AM4-Passive/ACALP00022A).
I have a server with a 3700x. I originally had a cheapo discrete AMD graphics card in it, but I ended up just yanking that and running the machine without any graphics card. Saves power.
Side note: not that you said it yourself, but since you brought up the "E" SKUs, and it often comes up:
In almost all situations, the "lower tier" CPUs can be replicated simply by taking the higher-tier CPU and setting a lower power limit. An 1800X with a 65W power limit is the same thing as a 1700, a 3400G with a 45W (?) power limit is the same thing as a 3400GE, etc. Since the "GE" chips and other niche SKUs (there is a 3700 non-X, iirc) are often OEM-only, and thus have very limited availability, they often command higher prices on ebay/etc than the "real" chip. In that case there is no reason to seek out the "E" chip, unless it gets you the "Pro" feature set and you happen to need one of its features (particularly for APUs, since ECC is disabled on non-Pro APUs). So if you see a 3400G for $200 and a 3400GE for $250 (made-up numbers), don't buy the GE; buy the G and set the power limit yourself.
Lower-end chips do not have better binning - actually the opposite, higher-end chips have better binning and will run at lower voltages for a given frequency than the lower-end chips will. The "higher leakage clocks better" thing is not really a factor that matters on ambient cooling, that is for XOC doing LN2 or LHe runs, but it has entered the public consciousness that "low-TDP chips are binned for efficiency". Not in the consumer market they're not - the exceptions being things like Fury Nano that explicitly are binned better, and for which you pay a premium price for that efficiency. But most low-end consumer processors are just... low-end. They're priced according to performance, not binning.
You can see this in SiliconLottery's historical binning statistics: a 1800X will categorically clock higher at any voltage than a 1700, a 3800X is categorically better at any voltage than a 3700X, etc - and that also means they will run a lower voltage at any target frequency. AMD bins straight down in terms of chip quality: Epyc and TR get the best, then the high-end enthusiast chips, then the value enthusiast chips, then the efficiency parts at the bottom of the bins.
The "E" parts are efficient because they have a low power limit set - not because they're binned better. Lots of chips can run fine at lower voltages, they just don't do the peak frequencies as well as the binned chips. A 1800X will still get you a bit lower voltage - but since the voltage/power curve is quadratic, the difference in power is compressed at lower frequencies/voltages. Also, at a low-enough frequency you will bump into the minimum voltage required, so that tends to compress things as well. So at 3 GHz, the impact on binning between a 1800X and a 1700 would be a lot smaller than, say, at 4 GHz. The 1800X can usually do that fairly easily in a later sample, but 4 GHz is pretty much always pushing the limits of safe voltage for a 1700, for example, so the 1700 gets crappier silicon because it clocks lower.
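To make the "compressed at lower frequencies" point concrete, here's a toy calculation. Dynamic power scales roughly as f * V^2, so a fixed binning advantage of ~50 mV is worth fewer watts at 3 GHz than at 4 GHz. The voltage/frequency points below are made up for illustration, not measured:

```shell
# Toy model: dynamic power ~ C * f * V^2. C=10 is an arbitrary scale factor;
# the V/f points are hypothetical, chosen only to show the shape of the curve.
power() { awk -v f="$1" -v v="$2" 'BEGIN { printf "%.1f\n", 10 * f * v * v }'; }

power 4.0 1.40   # a "1700" pushed to 4 GHz
power 4.0 1.35   # a better-binned "1800X" at 4 GHz: ~5.5 units less
power 3.0 1.05   # the same two chips down at 3 GHz...
power 3.0 1.00   # ...the same 50 mV advantage is now only ~3.1 units
```

Same voltage delta, but the power gap shrinks as frequency and voltage come down - which is why binning matters much less for the low-clocked "E" parts.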
(the "one weird exception" is lower core-count chips. If you think of each core as a dice roll, this means that an 8-core chip has to roll perfectly 8 times, where a 6-core chip only has to roll perfectly 6 times. Since all-core OC is limited by the performance of the worst core, this means that for equal yields, there may be more lower-core-count chips with high all-core OCs. Many midrange parts do not actually have defective cores, they're locked out for market segmentation to avoid undercutting margins on the higher-end parts (which is why Phenoms and 7950s used to be unlockable, etc) and - while I'm not sure AMD has explicitly ever said it - it would also be sensible that when they are disabling cores on an 8-core to turn it into a 6-core they pick the best 6 cores, which would push silicon quality upwards too. A 1600X actually has silicon quality comparable to a 1800X according to SiliconLottery, for example, despite being a much higher-volume part. This "weird exception" also gets complicated with Zen2/Zen3 because AMD deliberately uses a low-quality die for the second CCD (3900X/3950X) since it will mostly be used under those lower-clocking all-core loads, and the impact of binning is compressed in those lower-clocking situations...)
I heard the rumor that, starting with Zen4, AMD will include an embedded GPU in each and every CPU using the new socket.
This will end this situation, and give a nice bump to AMD's market share in e.g. Valve's steam client statistics.
These days, the performance of the embedded GPUs is already pretty usable, even allowing for running heavy videogames on lowish settings. They've been increasing the performance considerably (30-60%) on each generation for several generations.
I was about to post this. The fans on my AMD RX 580 never turn on unless I'm gaming or doing some crazy WebGL stuff. I can only imagine that newer ones are even more efficient in this regard.
AIO performance on CPUs is largely limited by thermal transfer through the coldplate/IHS (Integrated Heat Spreader), not by radiator size. Basically the radiator is keeping the fluid very cool already, but heat can't move through the IHS quickly enough. So it takes a large improvement in fluid temperature to make a small improvement in die temperature - you are "pushing on a string" as the expression goes.
Almost no AIOs have the fluid-temperature sensors that would allow you to measure this directly, so everyone uses the die sensors. Which, since they're behind the IHS, will read much higher than the fluid itself. The die temperature is a measurement of interest too - I'm merely explaining why the number you're seeing on the die sensor isn't really the big picture of how good a job the radiator is doing at cooling. The die is hot, but the fluid is cool.
The AMD R9 295X2 is an extremely good example of this observation - this card had a single 120mm radiator, with one fan, and it could dissipate >500W of heat at ~60C die temperature. (not sure if this source says this directly but the non-OC power was ~430W average during gaming, and ~250W is not unreasonable for each 290X chip - actually they could go to 300W or higher if you really poured it on, but they also did generally show some significant power scaling with temperatures, so ~250W per chip/500W total is a reasonable estimate imo).
You might say - but that's a dual-GPU card, with bare dies. And yes, that's my point, when the coldplate/IHS is no longer a bottleneck moving heat into the loop, a 120mm radiator is comfortably capable of dissipating 500W of power back out of the loop at extremely reasonable operating temperatures (60C die temperature). 60C is actually barely breaking a sweat, you could probably do 1000W through that 120mm if you didn't mind a die temperature in the 80-90C range. In CPU overclocking - your power limits/temps are almost entirely limited by how fast you can get that heat through the IHS. Reducing fluid temps (by increasing radiator size) is pushing on a string, it takes big gains in fluid temp to produce a small improvement in die temp.
Incidentally, direct-die cooling is the last untapped frontier of gains for ambient (non-chilled) overclocking. Der8auer and IceManCooler.com both make "support brackets" that replace part of the ILM (integrated loading mechanism - the socket and its tensioning mechanism and attachment to the motherboard) that holds the processor. This is necessary since the IHS is part of how the ILM retains the chip - the ILM presses down on the sides of the IHS, so removing it would change the pressure, and the ILM needs to keep a specific level of pressure on the chip to make good contact with the pins but without damaging anything. But you can delid the processor (there are services that do this for soldered chips, I don't recommend doing it at home) and use one of those brackets with a "normal" waterblock/AIO (or even air cooler), since the bracket holds the chip in the pin-bed at the proper tension.
Thermal density is going nowhere but up, Dennard scaling is over, so that is the only way to really improve thermals on <= 7nm-class nodes. Even AMD runs hot - they routinely run in the mid-80C range nowadays, even though they don't pull a lot of power - because of that thermal density, and every time they shrink it's going to get worse. The gains may be more worth it on Intel though - they show better scaling from power/voltage, TSMC nodes seem to pretty much top out at about 4 GHz and past there it gets exponentially worse for very little actual performance gain. 4.3, 4.4, sure, but they don't seem to do 5-5.3 GHz like Intel can on their Intel 7 given good temps and enough voltage.
But yes, to go back to your original point, I really like my 3090 Kingpin as well. It runs extremely cool, I can keep the die at literally 30C with the fans cranked all the way up, and it'll keep the VRAM at under 70C (!). And since it is a 2-slot card it doesn't turn into a compatibility mess with motherboard PCIe slots getting blocked and needing airspace/etc. I am 100% behind AIOs on the larger GPUs that we are seeing lately; this is a better solution than triple-slot or 3.5-slot coolers, which are (imo) completely ridiculous.
Then there's no need to pay the Kingpin markup if you're not planning to XOC it. I did that on a Titan X (Pascal) which was still using a blower design and it worked fantastically.
This is true, but: you have to watch compatibility (note that there are no 3000 series chips on that list - because NVIDIA changed their hole placement again, and AMD has a couple different sizes for their different chips, 6500XT/6400 is definitely smaller for example), and also it doesn't do as good a job cooling VRAM. You can put add-on heatsinks on the VRAM chips, but they can fall off and short something. And adding them on the back can run into compatibility problems with bumping into the CPU heatsink.
And VRAM temperatures are a big problem on the Ampere cards - I don't think running >100C all the time is really gonna be great for them long-term. Even gaming (vs mining) it's not abnormal to see VRAM over 100C (especially on the 3090, with the chips on the back, but also on the other GDDR6X cards; GDDR6X just runs extraordinarily hot). I know what NVIDIA and Micron say, I'm not sure I believe it. Above-100C is really really dubious imo.
For the 3090, with the VRAM on the back, I think it makes sense to go with a factory-configured AIO. Other cards, and especially GDDR6 cards, sure, it does work and it does help. Don't go too nuts tightening the AIO down though (ask me why! >.<)
Gelid used to make nice little cooling shields for the VRM and memory modules. I'm disappointed they stopped, although I'm sure it was a tiny market. For single-sided cards that is a much much nicer solution than stick-on heatsinks imo.
When I ran it the vram temps were actually fine (although not the scorching GDDR6X variety of course). Since it's not sharing thermal mass with the GPU die, the direct airflow on its own was plenty.
You can actually see this with the 3090. Simply pointing a fan at the back of the card does wonders and easily keeps them in spec without a heatsink at all, although the backplate is acting as a bit of a spreader. Which makes sense, since each memory chip is only like 2-3W. You don't need a heatsink for that, just a little bit of airflow.
Thanks for the detailed write up, I learnt something new today.
I sometimes feel I should stop wasting so much time on the internet, but then I realize that would mean missing out on comments like these that distill a lot of important information.
> I'm thinking about selling my 3700X and getting a 12th gen Intel with an integrated GPU (I don't game and really couldn't care less about fast GPUs).
Don't suppose you're in Ireland/UK/EU? Looking at building a low/mid-range gaming rig for a sibling, and a 3700X would fit fine if you're looking to sell.
> The module was a proprietary binary. Since a kernel module requires interfacing with the kernel API, it could be considered a derivative work and a breach of the GPL license.
I never quite understood this logic: the same (?) binary blob is used for the FreeBSD and Solaris drivers.
Because to make a driver work with Linux you have to add Linux-specific code that typically uses Linux's source code, and that combination of the driver and Linux-specific code could be considered a "derivative".
Note that the word "derivative" is used here as defined by the license, not in its plain English meaning.
> But one gray area in particular is something like a driver that was originally written for another operating system (ie clearly not a derived work of Linux in origin). At exactly what point does it become a derived work of the kernel (and thus fall under the GPL)?
> THAT is a gray area, and _that_ is the area where I personally believe that some modules may be considered to not be derived works simply because they weren't designed for Linux and don't depend on any special Linux behaviour.
> Basically:
> - anything that was written with Linux in mind (whether it then _also_ works on other operating systems or not) is clearly partially a derived work.
> - anything that has knowledge of and plays with fundamental internal Linux behaviour is clearly a derived work. If you need to muck around with core code, you're derived, no question about it.
By that reasoning every program written to run on MS-DOS was a "derived work" of MS-DOS. Such programs were written with MS-DOS in mind, and often mucked around with fundamental internal MS-DOS behavior.
Same with the pre-OS X Mac. Most Mac programs were written with the Mac in mind, and it was not uncommon for programs to muck around with OS code and data.
What matters is how copyright law defines derivative work, and in the US that has nothing to do with whether or not your work was written with another work in mind or plays with fundamental internal behavior of another work. What matters is whether or not it incorporates copyrighted elements of another work.
This is the key point, and it is nearly impossible for firmware blobs to depend on any OS behavior, let alone "special Linux behavior". The only way they could do so is via the open-source part, so it should be easy enough to check that.
You quickly get into something similar to the Ship of Theseus (or Trigger's broom) argument. You write some code that must link to a GPL library to function; that code is now GPL because it's a derivative work of the library.
You rewrite that library under an MIT license, so now your code can link to that and run. Is your original code still a derivative work?
Your original code is never a derivative work. You retain copyright to the code you wrote yourself, even if it's combined with GPL later. GPL even contains this interesting clause:
> You are not required to accept this License, since you have not signed it.
So to answer your question: no, unless you've copied bits of GPL library into your code (or similar that would be judged as a copyright violation).
There's also a crappy situation of Oracle vs Google that made APIs copyrightable, so now it's not entirely clear if your code + your rewrite of library is still yours if it uses an API of the GPL library.
> Your original code is never a derivative work[...]
> > You are not required to accept this License, since you have not signed it.
> So to answer your question: no, unless you've copied bits of GPL library into your code (or similar that would be judged as a copyright violation).
Actually that clears a lot up for me, and I'd have considered myself reasonably knowledgeable when it comes to copyright in general; I think I had a few conflicting ideas about what it means to be an original work. Thank you.
> The license is abundantly clear about this and answers all your questions.
The GPL is emphatically not clear about anything. It's a legal minefield precisely because no one has any idea what it means, and everyone has their own interpretation.
> It matters who is doing the rewriting and how they got the code in the first place.
I think one of the matters that confused me about it was the CLISP question. IIRC, CLISP linked to readline, but was released under a non-GPL license.
RMS contacted them, and asked them to relicense. They suggested either reimplementing a stub readline-library, or rewriting their line editing code against another lib instead. RMS insisted that they would still be a GPL-derivative, resulting in the current license situation.
I may be misremembering the retelling of this, as it was way before my time.
RMS may have believed or wanted that, but it is my understanding (IANAL!) that the case law has been settled differently. If you are found in violation of the GPL due to a dependency you weren't aware was released under the GPL, you can fix that violation by rewriting your application to avoid the GPL dependency.
CLISP et al cannot be forced to distribute their code under the GPL. It's their code and their choice; contract law cannot compel someone who has never entered into the contract to do something against their will -- CLISP didn't knowingly distribute GPL code, so that distribution doesn't trigger acceptance of the GPL terms. They just have to make the situation right once they're made aware of the violation.
It's more nuanced. If you've included GPL code and modified or redistributed it, then you either have to comply with the GPL to have permission for that use, or you've potentially committed a copyright violation.
To comply with the GPL you only need to publish the one specific snapshot of source code you've combined and redistributed with GPL code, but you don't need to permanently relicense your project if it doesn't contain any code you don't have rights to use. The "tainted" version will be granted as GPLed forever, but other earlier or later versions that don't use any GPL code don't have to.
Or you can go the copyright way, and claim it wasn't a copyright violation (because it was a fair use, or non-copyrightable code) or settle the matter in whatever way the law lets you get away with.
That isn’t any different from what I said though? If someone points out you violated their copyright then you need to (1) fix the issue, and (2) pay appropriate damages. But for open source software the damages are nil, so fixing the violation is the only thing you need to do. And you can do that by either releasing the code as GPL, or removing the dependency. Either would be an acceptable remedy, in the eyes of the law.
Apparently the situation was that CLISP was distributed as `lisp.a` and `libreadline.a` (with source for Readline included) and the end-user linked them together. Haible offered to write a `libnoreadline.a` library, exporting Readline's function but not providing their functionality, but RMS insisted that the result would still be a derived work of Readline.
I’m not a lawyer, but my understanding is that the GPL requires you to make source available upon request to users of your software.
So I guess if someone asks you for the source code, you can require them to prove that they actually have a copy of the GPL version, and not the new one.
That's an interesting cognitive dissonance that I've always been fascinated by. I've heard people criticize developers who release proprietary drivers for the linux kernel, but never those who release something dual licensed as GPL2/MIT, or those who distribute a dual licensed GPL2/MIT module as if it were solely under the MIT license; surely that would violate the Linux kernel's GPL (in being a derivative work) as much a proprietary module would?
MIT is Ok because MIT is compatible with GPL. GPL has language saying you can't add restrictions, but the combination GPL+MIT is essentially GPL so it's ok.
Dual GPL/MIT essentially means that you as a user can choose whether to use the code as GPL or as MIT, but if you contribute to the code you must provide the full GPL+MIT rights.
As to why release a driver as GPL/MIT instead of just the GPL, I think the idea is that the BSD's (or other OS'es) can take the code and use it under the terms of the MIT license and port it to their kernels. IIRC there are many drivers in Linux that are dual licensed in this way for that reason.
I'm not sure about the details, but to write a Linux kernel module you must include some .h files that are under the GPL. You really do have to include them, because the Linux in-kernel ABI is intentionally not stable, exactly to prevent proprietary abuse. So, AFAIK, every functional Linux kernel driver must be released under the GPL.
The point is: if you want to include a GPL licensed header, your code must comply with its license. Your opinion on the GPL is an entirely different matter.
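The kernel even enforces a version of this distinction mechanically, not just on paper: every module declares a license string, and symbols the kernel exports via EXPORT_SYMBOL_GPL() will not resolve for modules whose declared license isn't GPL-compatible. On a machine with the drivers installed you can inspect what each one declares (module names here assume both drivers are present on your system):

```shell
# Query the license string each module declares with modinfo.
# Output varies by driver version; the proprietary module declares a
# non-GPL string, the in-tree driver a GPL-compatible one.
modinfo -F license nvidia
modinfo -F license nouveau
```

Loading a module with a non-GPL license string also taints the kernel, which is why dmesg complains after the proprietary driver loads.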
"Derivative", I think, is a red herring here. The biggest problem is that the combination of kernel + nvidia modules could never be redistributed, which means that technically no distro should have been able to ship these drivers by default.
If I remember correctly, the open source ATI drivers were always a bit buggy and it wasn't that easy getting them installed either. The tradeoff was always Nvidia: proprietary but works well, ATI: open but buggy.
As far as I'm aware, since AMD took over, they've been fairly stable (although occasionally omitting support for the latest features until the next kernel release)
As a Navi 10 (5700 XT) owner, those problems still exist. It used to be that at least once a week while gaming the driver would crash with some undecipherable error message in dmesg, and because the card had the reset bug the only recourse was to reboot the machine entirely. 4 years later the only thing that's changed is that the crash shows up less frequently (I'd say once every 3 months).
> Have you ruled out power supply issues and are you running at stock clocks (for CPU and RAM as well)?
Yes for both. No overclocking whatsoever.
> Anyway, going from "at least once a week" to "once every 3 months" means that 90% of your crashes have been fixed.
I don't think I'm supposed to be ok with a device I paid premium money for crashing once every three months with no explanation from the manufacturer. They could've fixed 99% for all I care, it's still absurd that it's even an issue in the first place.
> What kind of message would you expect that would be more decipherable.
One that would lead me to an actual solution or at least an explanation, not just year old threads of people reporting this exact issue with replies saying it was fixed in kernel version X, where X is different for each thread.
The part about them being buggy is definitely true.
Up until somewhere around 2016-2017 the ATI/AMD drivers were really bad.
I had an "HD 7850" GPU on Linux around that time and it was barely usable. The performance was less than half of what you got on Windows, and the drivers would crash very often, sometimes several times a day if I was trying to play games like Team Fortress 2.
It was so bad that I decided to replace the HD 7850 with a new GTX 970 and not buy any more AMD GPUs for the indefinite future. The GTX 970 was stable and performed very well with the closed source drivers, and other than them being closed source I never had an issue with them. I always installed the closed drivers through the system package manager, which handled all of the tricky stuff for me (Arch Linux maintains the nvidia driver as a system package and makes sure it runs on the current kernel before releasing it).
In modern times the situation has flipped though. I still haven't bought an AMD GPU since then but I am pretty sure my next one will be.
I agree; 2016-17 was about the turning point. I bought a Fury X around then, and it was flawless back then. In contrast, my old nvidia cards had become unusable.
On the AMD, FreeSync and HDMI audio didn't work at first. (For any card; the driver documentation said those features were a work in progress.)
Anyway, I unplugged it for a year or so, and recently plugged it back in. One apt get upgrade later FreeSync and HDMI audio just work.
It's gotten to the point where I'd opt for an ARM laptop over one without AMD or Intel graphics. From what I can tell suspend/resume doesn't work on Intel CPUs (on Windows or Linux), so it's basically an AMD GPU or no x86 at all, from a compatibility perspective. (Did AMD also eliminate S3 suspend, and not replace it with a working alternative?)
I also had a HD 7850, and though I had pushed it less than you I never noticed any huge issues.
It was in a uniquely terrible position, being one of the last cards released that was supported by radeon when all the development had moved to amdgpu, which it supposedly could run if you jumped through the right hoops. I remember the xorg feature table having several things working for older and newer models but not the 7850.
Still, my experience with it led to another AMD card that I've also been quite happy with.
I believe this is talking about radeonhd/radeon/ati circa 2015 or earlier.
Around then, you still had to install the corresponding X11 portion of the drivers, though the nvidia equivalent had the same limitation.
radeon/radeonhd, or fglrx (which was the proprietary AMD driver), absolutely worked worse than nouveau or the proprietary nvidia drivers at that time. It was only a couple of years into amdgpu that the tables turned.
At this point it would be nice if they'd backport their Linux drivers to Windows, as I'm now on my third AMD GPU in 12-13 years (HD 5770, r9 290x, 6900XT) to have issues where the driver will randomly crash when playing hardware accelerated video on one monitor while playing a directx game on another monitor under Windows.
I'm pretty sure I needed to mess with xorg.conf and other settings to get things like screen resolution and Compiz working correctly. I don't know what part of the stack was responsible for those issues, but I thought it was related to the graphics driver.
I could be misremembering though, this was 15+ years ago now.
That's basically still what happens. Fedora automates this nicely with akmods, which automatically rebuilds these source only modules and installs them when you update the kernel. Has been working smoothly for a while, but it is fundamentally the same thing still.
I remember using that for a Geforce2 MX and the installer.
People have no idea what FREEDOM you get when you aren't bound to crappy licenses like Nvidia's CUDA terms for serious computing, where you can be limited per core.
On Debian I do: module-assistant build-install nvidia
And it works every time, but you do need to run it every time. There is a way to automate it on new kernel installs.
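The automation hangs off a Debian kernel hook: scripts in /etc/kernel/postinst.d/ run after every kernel image is installed, with the new kernel version as the first argument. Something like the sketch below should work; the script name is made up, and the exact module-assistant flags may differ on your release:

```shell
#!/bin/sh
# Hypothetical /etc/kernel/postinst.d/zz-nvidia-module hook (mark it executable).
# Debian's kernel packages invoke every script in this directory after
# installing a kernel image; $1 is the version of the kernel just installed.
set -e
exec module-assistant --text-mode --non-inter -l "$1" build-install nvidia
```

This is essentially what DKMS and akmods do for you in a generic way, so on a modern system you'd normally reach for those instead.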
> missed the right packages or had an incompatible user space, including libs and compiler, you could end up without X11 and no way to easily run a browser or google about the problem you had
I always kept the previous version of the kernel and module in case of this.
I've been recompiling my nvidia module each kernel release for over a decade and I've had no problems, you install the kernel, you install the nvidia module, and you reboot.