
Unfortunately, the author and the paper he links apply the alpha premultiply to the gamma-compressed image. To be correct, this should be done in a linear colorspace. His solution will give halos on some color edge combinations.

Basically, alpha in all the formats I've seen is stored linearly, but the colors are gamma-compressed (sRGB, HDR transfer functions, etc.). If you apply the alpha premultiply and then linearize, you've misapplied the alpha. If you skip linearizing entirely (as even this author's examples show), you get immediate black halos, since your blend is effectively multiplying colors rather than adding them.
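To make that concrete, here's a minimal sketch of the red channel at a 50/50 red/green edge, using the standard sRGB transfer functions (plain C, illustrative names, not any particular library's API):

    #include <math.h>
    #include <stdio.h>

    /* Standard sRGB decode/encode (IEC 61966-2-1), channel in [0, 1]. */
    static float srgb_to_linear(float c)
    {
        return (c <= 0.04045f) ? c / 12.92f
                               : powf((c + 0.055f) / 1.055f, 2.4f);
    }

    static float linear_to_srgb(float c)
    {
        return (c <= 0.0031308f) ? c * 12.92f
                                 : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
    }

    int main(void)
    {
        /* Red channel across a 50/50 edge between a red pixel (R = 1)
           and a green pixel (R = 0); the green channel is symmetric. */
        float a = 0.5f;

        /* Correct: decode to linear light, mix, encode back to sRGB. */
        float lin = linear_to_srgb(srgb_to_linear(1.0f) * a +
                                   srgb_to_linear(0.0f) * (1.0f - a));

        /* Incorrect: mix the gamma-encoded values directly. */
        float gam = 1.0f * a + 0.0f * (1.0f - a);

        printf("linear-space blend: %.3f  gamma-space blend: %.3f\n", lin, gam);
        /* ~0.735 vs 0.500: the gamma-space result is visibly darker,
           which is the dark fringe along such edges. */
        return 0;
    }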



This is something I'd love to get right. Pixman does appear to support sRGB input, in the form of the PIXMAN_a8r8g8b8_sRGB format, which might work well enough for the premultiply step. It's the unpremultiply that I'm struggling to wrap my head around - I'm guessing I'd need Pixman to output 16-bit channels in the destination, otherwise I wouldn't be able to convert back to sRGB? That's kind of a massive pain though: I'd have to allocate a whole other temporary buffer that's double the size, for something imperceptible enough that I never noticed it with my test images or in my playthrough. So I'm unsure what the cheapest way to do it would be. This is all well outside my area of expertise, which is primarily hacking and reverse engineering, but I'm always open to learning.
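For what it's worth, here's a toy sketch of why the unpremultiply step wants more precision than 8 bits: once a channel has been scaled by a small alpha and rounded to 8 bits, dividing the alpha back out can only recover a handful of distinct levels (plain C, illustrative, not Pixman's API):

    #include <stdio.h>
    #include <stdint.h>

    /* Scale a channel by alpha and store it in 8 bits (rounding /255). */
    static uint8_t premultiply_u8(uint8_t channel, uint8_t alpha)
    {
        return (uint8_t)((channel * alpha + 127) / 255);
    }

    /* Divide the alpha back out, again in 8 bits. */
    static uint8_t unpremultiply_u8(uint8_t premul, uint8_t alpha)
    {
        return alpha ? (uint8_t)((premul * 255 + alpha / 2) / alpha) : 0;
    }

    int main(void)
    {
        uint8_t alpha = 8; /* a nearly transparent pixel */

        /* With alpha = 8 the premultiplied channel can only hold 9 distinct
           values (0..8), so most of the original 256 levels are gone by the
           time the alpha is divided back out. */
        for (int c = 0; c <= 250; c += 50) {
            int back = unpremultiply_u8(premultiply_u8((uint8_t)c, alpha), alpha);
            printf("in %3d -> out %3d\n", c, back);
        }
        return 0;
    }

A float or 16-bit intermediate buffer avoids that quantization, which is what the 16-bit destination question above is really about.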

I tried my hardest to create something that was as "technically correct" as I could approximate given my lack of graphics experience and the performance constraints I was under, but I kind of knew it was likely I could mess up some small detail. Maybe since it's open source someone will eventually come along to correct it? One can dream :P


Right, looking at it again, I think I get it now. You'd need to go from the 8-bit sRGB source, to the premultiplied image as floating point (Pixman can't output 16-bit channels, it seems), then to the resized image as floating point, then unpremultiply that, and then back to 8-bit sRGB. It makes sense in my head, I just don't know if it's really worth the tradeoff; it's a lot of extra steps... I don't even know that the original resize algorithm would have done any of this, given its age, and my goal is to replicate that. But maybe I'll test it and see how it goes eventually.
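Something like this per channel of each pixel, I think (a sketch with float intermediates; the resize in the middle is elided and the names are illustrative, not Pixman's or mango's):

    #include <math.h>
    #include <stdint.h>

    static float srgb_to_linear(float c)
    {
        return (c <= 0.04045f) ? c / 12.92f
                               : powf((c + 0.055f) / 1.055f, 2.4f);
    }

    static float linear_to_srgb(float c)
    {
        return (c <= 0.0031308f) ? c * 12.92f
                                 : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
    }

    /* One channel of one pixel through the whole pipeline.  In the real
       code the step in the middle is the resize over the premultiplied
       float buffer; here it is just an identity step. */
    uint8_t roundtrip_channel(uint8_t srgb8, uint8_t alpha8)
    {
        float alpha = alpha8 / 255.0f;

        /* 1. 8-bit sRGB -> linear float, premultiplied by alpha. */
        float premul = srgb_to_linear(srgb8 / 255.0f) * alpha;

        /* 2. Resize the premultiplied linear buffer (elided). */

        /* 3. Unpremultiply, guarding against alpha == 0. */
        float linear = (alpha > 0.0f) ? premul / alpha : 0.0f;

        /* 4. Linear float -> 8-bit sRGB. */
        float srgb = linear_to_srgb(linear);
        return (uint8_t)(fminf(fmaxf(srgb, 0.0f), 1.0f) * 255.0f + 0.5f);
    }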


Followup: I've now implemented this, and I determined it doesn't take enough longer to have a noticeable performance impact. It so happens mango has an sRGB-to-linear function that is much faster than using Pixman to do the conversion, so that's what I used. I kept it 32-bit all the way through, which will introduce some colour banding, but it's not really noticeable with the photorealistic images being resized. So I expect this will be ready for whenever I release my next version.
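For reference, the usual way to make the 8-bit sRGB-to-linear direction cheap is a 256-entry lookup table; I don't know if that's how mango implements it, but it's the common pattern:

    #include <math.h>
    #include <stdint.h>

    /* 256-entry decode table: every 8-bit sRGB value maps to exactly one
       linear float, so the transfer function only has to be evaluated 256
       times up front instead of once per pixel. */
    static float srgb_decode_lut[256];

    static void init_srgb_decode_lut(void)
    {
        for (int i = 0; i < 256; i++) {
            float c = i / 255.0f;
            srgb_decode_lut[i] = (c <= 0.04045f)
                ? c / 12.92f
                : powf((c + 0.055f) / 1.055f, 2.4f);
        }
    }

    static inline float srgb_to_linear_u8(uint8_t v)
    {
        return srgb_decode_lut[v];
    }

The reverse direction (linear back to 8-bit sRGB) can't be indexed by a table that small directly, so that's where the remaining per-pixel cost lives: typically either a larger table over quantized linear values or the pow itself.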



