Hacker News

Yes, X11 is backwards compatible and that's been really great for the Linux desktop. But the article bashes Windows for rapidly introducing new APIs even though the old ones remain compatible; I'm just pointing out that Linux does that too.

As for the audio situation, the problem is sharing between multiple apps. ALSA sucks at it so "sound servers" were invented but they caused more problems than they solved. Now it seems like every distro has a different sound solution, and meanwhile OSS was never removed from the kernel so you can't even rely on ALSA always being there.

The filesystem problem can't be blamed wholly on apps or kernel devs; IMHO the problem really lies in POSIX which doesn't specify a way to achieve what app developers need in a way that can also be easily implemented in the kernel with good performance. I would argue that the behaviors app developers were relying on became a de facto part of POSIX, and the way kernel devs broke them was irresponsible even if it didn't break the letter of the standard. Contrast with Microsoft, which goes far, far out of its way to avoid breaking apps even when they flagrantly violate good practices (there are some great examples on The Old New Thing: http://blogs.msdn.com/oldnewthing/ ).
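The de facto behavior in question was the write-then-rename save idiom, which ext3's data=ordered mode happened to make safe even without an fsync(). A minimal sketch of that idiom (filenames and the helper name are invented for illustration):

```c
/* The write-then-rename idiom many apps relied on: write the new
 * contents to a temp file, then rename() it over the original.
 * rename() is atomic on POSIX filesystems, so readers see either the
 * old file or the new one -- but note there is no fsync(): the idiom
 * silently depended on the filesystem flushing the data before the
 * rename reached disk.  Filenames here are illustrative. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int replace_file(const char *path, const char *tmp, const char *data)
{
    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    size_t len = strlen(data);
    if (write(fd, data, len) != (ssize_t)len) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);
    return rename(tmp, path);  /* atomic, but durability is not guaranteed */
}
```

When a kernel change (e.g. delayed allocation) reorders the data write after the rename, a crash can leave a zero-length file in place of the old contents, which is exactly what bit app developers.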



> Yes, X11 is backwards compatible and that's been really great for the Linux desktop. But the article bashes Windows for rapidly introducing new APIs even though the old ones remain compatible; I'm just pointing out that Linux does that too.

Linux's graphics APIs have remained largely static: they are GTK+ and Qt. Nobody writes their own implementations of X11.

Your audio paragraph is completely divorced from reality. Sound servers were originally created to implement software mixing, which is not supported by OSS. When ALSA was imported into the mainline kernel, software mixing became possible without servers and they largely died out.
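For reference, the software mixing in question is alsa-lib's dmix plugin. A minimal ~/.asoundrc sketch (the card/device numbers and ipc_key are assumptions to adjust for your hardware; modern distros usually enable dmix by default, so this is rarely needed by hand):

```
# Route the default PCM through dmix so multiple apps can play at once
pcm.!default {
    type plug
    slave.pcm "dmixer"
}
pcm.dmixer {
    type dmix
    ipc_key 1024        # any unique key; shared by all mixing clients
    slave {
        pcm "hw:0,0"    # the real sound card (assumption: card 0, device 0)
        rate 48000
    }
}
```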

Now, ALSA is the dominant standard for Linux audio. Recently, the sound server PulseAudio was created, but apps use it through the standard ALSA API. To my knowledge, there is no distribution using a sound server other than Pulse.

You can't rely on sound being enabled, true. But every mainline distribution ships with sound enabled, and that means they support ALSA. If a user disables their sound subsystem and then complains they can no longer hear music, that's their problem.

> The filesystem problem can't be blamed wholly on apps or kernel devs; IMHO the problem really lies in POSIX which doesn't specify a way to achieve what app developers need in a way that can also be easily implemented in the kernel with good performance.

Sure it does -- fsync(). To my knowledge, the only filesystem which has poor performance when using fsync() is ext3 in data=ordered mode (which is not the default).
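A sketch of the fsync()-based pattern being advocated here: write to a temp file, fsync it, then rename over the target. Filenames and the helper name are invented for illustration:

```c
/* Durable atomic replace: write the new contents to a temp file,
 * fsync() it so the data reaches disk, then rename() it over the
 * target.  The fsync() is what guarantees the rename never "wins"
 * before the data it points at is durable. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int save_file(const char *path, const char *tmp, const char *data)
{
    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    size_t len = strlen(data);
    if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    if (close(fd) != 0)
        return -1;
    return rename(tmp, path);  /* atomically replaces the old file */
}
```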


> When ALSA was imported into the mainline kernel, software mixing became possible without servers and they largely died out.

My experience was completely different. A few years ago, ALSA software mixing was not enabled by default and didn't work well when enabled. When not enabled, apps couldn't share the audio device at all. KDE created aRts to allow sharing at the app level, but aRts sucked, plus it hogged the audio device so non-aRts apps wouldn't work at all. Later aRts added a timeout after which it would release the device but this obviously wasn't a good solution. Gnome had ESD which I didn't use but it conflicted with aRts. JACK came along but was only ever used by high-end audio programs.

ALSA finally did get decent software mixing support, but now people are used to running sound servers. PulseAudio is the newest thing but last I heard a lot of people are still unsatisfied with it (e.g. http://jeffreystedfast.blogspot.com/2008/06/pulseaudio-solut... ). Furthermore, people who know what they're talking about are recommending a move away from PulseAudio and ALSA and back toward OSS! Personally, I think the case is quite convincing: http://insanecoding.blogspot.com/2009/06/state-of-sound-in-l...

> To my knowledge, the only filesystem which has poor performance when using fsync() is ext3 in data=ordered mode (which is not the default).

Not according to http://lwn.net/Articles/351422/


> A few years ago, ALSA software mixing was not enabled by default and didn't work well when enabled. When not enabled, apps couldn't share the audio device at all.

Yes, disabling sound mixing will prevent multiple applications from using the sound card at once, in much the same way as disabling graphics drivers will prevent X11 from working.

> KDE created aRts to allow sharing at the app level, but aRts sucked, plus it hogged the audio device so non-aRts apps wouldn't work at all.

aRts is for pre-ALSA (i.e. OSS) applications. It doesn't belong on an ALSA-based system and will obviously not get along well with a modern stack. I'm not denying that there are lots of distributions which are configured poorly, but any decent distribution, such as Red Hat or Debian, worked well.

I disagree that glitch-free playback and per-application volumes are "solutions in search of problems", but that's personal taste. If somebody wants to run without PulseAudio, they can. Reading the blog post, it seems he was surprised when upgrading to a bleeding-edge development version caused problems.

The last link is written by an OSSv4 developer. OSSv4 is unlikely to ever gain mainstream acceptance because it contains insanity such as performing floating-point math in the kernel. Aside from people literally hired by the company developing OSSv4, I have heard no good news about it, and there does not appear to be any movement back to an OSS-based stack.


> Yes, disabling sound mixing will prevent multiple applications from using the sound card at once, in much the same way as disabling graphics drivers will prevent X11 from working.

X11 never came with graphics drivers disabled by default.

> aRts is for pre-ALSA (i.e. OSS) applications.

aRts isn't "for" ALSA or OSS applications; in fact it doesn't play nice with either. aRts is for aRts applications. aRts has backends for both OSS and ALSA.

There's no reason glitch-free playback and per-application volume control can't be done in the kernel. The only feature that makes sense to do in user space is network transparency, which is of limited utility.


There's nothing wrong with floating-point math in the kernel (unless your CPU has no FPU!)

Arguably the best-performing audio driver API of the moment is CoreAudio (on Mac OS X) - and that uses floating point code in the kernel... ;-)
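For context, the usual objection to kernel floating point is cost rather than correctness: using the FPU in the kernel forces extra FPU state saving/restoring, so Linux drivers conventionally use fixed-point arithmetic instead. A hedged sketch of that alternative (the Q16.16 format and function name are my choices for illustration, not taken from any real driver):

```c
#include <stdint.h>

#define Q16_ONE (1 << 16)  /* 1.0 in Q16.16 fixed point */

/* Scale a 16-bit sample by a Q16.16 gain using only integer math --
 * the style kernel audio code uses to avoid touching the FPU. */
static inline int16_t apply_gain_q16(int16_t sample, int32_t gain_q16)
{
    int64_t scaled = ((int64_t)sample * gain_q16) >> 16;
    if (scaled > INT16_MAX) return INT16_MAX;  /* saturate, don't wrap */
    if (scaled < INT16_MIN) return INT16_MIN;
    return (int16_t)scaled;
}
```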


That first link you provided about people being 'unsatisfied with PulseAudio' is over a year old. Around a year ago, most major distros jumped on PulseAudio a) before it was ready and b) using messed-up configurations, which didn't help anything.

I don't know much about how OSSv4 stacks up, but I like the ability to see the audio streams from various programs and to tweak a specific program's volume from PulseAudio (sometimes programs don't provide volume level control). I'm assuming here that everything talking about 'audio mixing' just means taking multiple software audio outputs and blending them together to create the output to the hardware. Things like per-application/per-process volume control are an advanced feature that I've seen provided on OS X and, I believe, on Windows (through 3rd-party software). I would like to see functionality like this on Linux as well.
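Right, that is essentially what mixing means here: sum the streams' samples with a per-stream gain and clamp the result. A minimal sketch (the function name, mono 16-bit format, and 0-256 volume scale are all invented for illustration; real mixers like dmix or PulseAudio also resample and remap channels):

```c
#include <stdint.h>
#include <stddef.h>

/* Mix nstreams mono 16-bit streams into one output buffer, applying a
 * per-stream volume where 256 means full volume.  Accumulate in 32 bits
 * and clip so loud overlapping streams saturate instead of wrapping. */
void mix_streams(const int16_t **streams, const int *vol_256,
                 size_t nstreams, int16_t *out, size_t nframes)
{
    for (size_t i = 0; i < nframes; i++) {
        int32_t acc = 0;
        for (size_t s = 0; s < nstreams; s++)
            acc += ((int32_t)streams[s][i] * vol_256[s]) / 256;
        if (acc > INT16_MAX) acc = INT16_MAX;  /* clip, don't wrap */
        if (acc < INT16_MIN) acc = INT16_MIN;
        out[i] = (int16_t)acc;
    }
}
```

Per-application volume then falls out for free: each stream just gets its own entry in the gain table.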

And to be fair, that blog post about going 'back to OSS' claims that OSSv3 -> OSSv4 was a major overhaul that adds things like mixing support. When you say 'back to OSS', most people are going to read that as 'back to OSSv3', not 'ditch ALSA for the revamped OSSv4.'



