
Just a shower thought, but could you combine this technique with something similar to precomputed radiance transfer [1]?

You'd have to take multiple pictures of the scene, then move the light source, take another set of pictures, and so on. Then, in a similar sense to irradiance volumes [1], instead of encoding fixed Gaussian parameters, encode something that lets you reconstruct the Gaussian parameters from the position of the primary light source (rough sketch below). I know estimating light position and such from images has been studied for a long time in the image-based BRDF extraction literature [2].
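
Very roughly, per Gaussian it could look something like this (a minimal sketch, assuming a first-order spherical-harmonic basis over the light direction; the class and function names are made up for illustration, not from any existing splatting codebase):

    import numpy as np

    def sh_basis_l1(d):
        """First-order (4-term) real SH basis at unit direction d."""
        x, y, z = d
        return np.array([
            0.282095,        # l=0
            0.488603 * y,    # l=1, m=-1
            0.488603 * z,    # l=1, m=0
            0.488603 * x,    # l=1, m=+1
        ])

    class RelightableGaussian:
        def __init__(self, mean, cov, opacity, sh_coeffs):
            self.mean = mean        # (3,) center
            self.cov = cov          # (3, 3) covariance
            self.opacity = opacity  # scalar alpha
            # sh_coeffs: (4, 3) -- one RGB triple per SH term, fitted
            # from the photo sets taken under known light positions.
            self.sh_coeffs = sh_coeffs

        def color_for_light(self, light_pos):
            """Reconstruct this Gaussian's RGB for a light at light_pos."""
            d = light_pos - self.mean
            d = d / np.linalg.norm(d)
            basis = sh_basis_l1(d)  # (4,)
            return np.clip(basis @ self.sh_coeffs, 0.0, None)  # (3,) RGB

Fitting sh_coeffs would then be a least-squares problem over the captured light positions, one set of observations per photo pass.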

Of course it'll require a lot more images and compute, but that's the nature of the dynamic beast.

Again, I haven't really thought this through and it's not my field, though I was into physically-based rendering a decade ago. It just seems like something that would be solved by natural progression in the not-too-distant future.

[1]: https://chrisoat.com/papers/Oat_GDC2005_IrradianceVolumesFor...

[2]: https://graphics.stanford.edu/~lensch/Papers/TOG03BRDF.pdf (random example)


