That video is excellent, thanks for sharing. BTW it does back up the point @subb was making, that the experience of color is a perceptual thing; “light isn’t what makes something a color. As we’ve seen, colors are ultimately a psychological phenomenon.” Which is true.
FWIW I suspect the issue in this thread is that color models and color spaces are not necessarily modeling perception. The word color is overloaded and has multiple meanings. Just because color experience is perception, that doesn't mean "color" always refers to perception, nor that phrases like "spectral color" or "color model" refer to perceived experience; often they don't.
A color model is any numeric representation that captures the information needed to recreate a color, and it can be a physical or spectral color model, a stimulus model (cone response), or a perception model. Being able to recreate a color does not imply that the information is perceptual. Spectral “color” measurements are just pure physics, and spectral color models are just modeling pure physics.
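To make the distinction concrete, here's a minimal sketch of that pipeline: a spectral measurement is just a list of (wavelength, power) samples, pure physics, and it only becomes a stimulus model once you fold it through observer sensitivity curves. The cone sensitivities below are hypothetical Gaussian bumps placed near the real L/M/S peak wavelengths, not the actual CIE or Stockman–Sharpe data.

```python
import numpy as np

# Sample wavelengths at 10 nm steps across the visible range.
wavelengths = np.arange(400, 701, 10, dtype=float)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# HYPOTHETICAL cone sensitivities: Gaussian bumps near the real L/M/S
# peaks (~564, ~534, ~420 nm). Illustrative only, not real observer data.
L = gaussian(wavelengths, 564, 45)
M = gaussian(wavelengths, 534, 40)
S = gaussian(wavelengths, 420, 25)

# A flat spectral power distribution: pure physics, no observer involved.
spd = np.ones_like(wavelengths)

# Cone response = integrate the SPD against each sensitivity curve
# (rectangle rule, 10 nm step). This is a stimulus model, not perception.
lms = np.array([spd @ c for c in (L, M, S)]) * 10.0
print(lms)
```

Everything up to the last two lines is physics; the observer only enters when you project the spectrum onto sensitivity curves, and nothing about adaptation or surround has entered at all.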
By and large, the color matching experiments that led to our CIE standards measured average cone response for an average observer, and were never intended or designed to capture effects like adaptation and surround. This is why many of the 3D color spaces that trace their lineage to those experiments, especially the "perceptual" ones, are primarily modeling cone response rather than perception. CIE color spaces do involve some very averaged-out perception of color, in a static, unchanging, well-adapted, no-surround kind of way, which is, for example, why the "red" color matching function goes negative. [1]
There are people doing stuff like adaptation and spatial tone mapping in video games and research, and they’re using more tools than just 3D color spaces for that. That’s the kind of discussion I was hoping @subb would get into, i.e., what specific cases require going beyond the CIE models.
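One of the simplest tools in that "beyond a single 3D space" toolbox is a chromatic adaptation transform. Here's a sketch of a von Kries-style adaptation using the published Bradford matrix: it rescales cone-like responses to account for the observer being adapted to a different white point, something a static XYZ encoding alone doesn't express.

```python
import numpy as np

# Bradford cone-response matrix (standard published values).
M_bradford = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

# White points in XYZ (Y normalized to 1): D65 (source) and D50 (target).
white_d65 = np.array([0.95047, 1.0, 1.08883])
white_d50 = np.array([0.96422, 1.0, 0.82521])

def adapt_d65_to_d50(xyz):
    """Von Kries-style adaptation of an XYZ color from D65 to D50."""
    lms_src = M_bradford @ white_d65
    lms_dst = M_bradford @ white_d50
    scale = np.diag(lms_dst / lms_src)              # per-channel cone gains
    cat = np.linalg.inv(M_bradford) @ scale @ M_bradford
    return cat @ xyz

# By construction, the D65 white maps onto the D50 white.
print(adapt_d65_to_d50(white_d65))
```

This is still a 3x3 matrix, i.e. the crudest possible adaptation model; spatially varying adaptation and surround effects need more machinery than any fixed linear transform.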
[1] https://yuhaozhu.com/blog/cmf.html