
This is very cool. Interestingly, the minified version was generated by Google's Closure Compiler. The author has the full source, with excellent comments, on his site.

link: http://www.gabrielgambetta.com/tiny_raytracer.html



from his site:

> Their combination of a simple algorithm and stunning results are hard to beat.

That's exactly what fascinates me about raytracing. It's pretty straightforward — you literally simulate light bouncing around — yet given enough resources the results can be surprisingly good-looking.


For interest, the method of literally simulating light bouncing around seems to be called photon tracing (http://en.wikipedia.org/wiki/Photon_tracing). It creates realistic renderings of just about anything, if you can afford the time to render it. Rendering times are prohibitively high, though, because this method simulates lots of photons that never hit the sensor.

Ray tracing, although very powerful, has some limitations because it goes the other way: it casts rays from the sensor into the scene.
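That sensor-first loop is easy to sketch. Here's a minimal toy version (the scene — one hard-coded sphere — the pinhole camera, and all names are invented for illustration, not taken from the linked raytracer):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance t to the nearest intersection along a normalized ray, or None."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # a == 1 because the direction is normalized
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width, height):
    """Cast one ray per pixel from a pinhole camera at the origin, looking +z."""
    center, radius = (0.0, 0.0, 3.0), 1.0   # the whole 'scene': one sphere
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            vx = (x + 0.5) / width - 0.5     # map pixel to viewport plane at z = 1
            vy = 0.5 - (y + 0.5) / height
            n = math.sqrt(vx * vx + vy * vy + 1.0)
            d = (vx / n, vy / n, 1.0 / n)
            row.append(1 if ray_sphere_hit((0.0, 0.0, 0.0), d, center, radius) else 0)
        image.append(row)
    return image
```

Every ray either hits the sphere or escapes to the background; a real raytracer would recurse on bounces and shade, but the camera-outward direction of the loop is the point here.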


This is where path tracing steps in. Bi-directional path tracing combines both photon tracing and ray tracing. http://en.wikipedia.org/wiki/Path_tracing


could you elaborate on where the symmetry breaks down?

From your comment and another one here, it seems that some of the photons that don't hit the sensor still affect the image (either that, or for some photons you can't get back to the source starting at the sensor). In other words, if a light source emits a trillion photons everywhere in the room, and your sensor (the idealized camera) catches ten million photons, then it's not strictly equivalent to trace the ten million paths back to the light source and ignore the ones that didn't make it to the idealized camera.

If it were equivalent, you wouldn't need "photon tracing and ray tracing" or bidirectionality.

But I'm struggling to see where the symmetry breaks down, since it seems that tracing a path back from the sensor along the same angles should produce the same result as tracing it forward from the light source — at least for the photons that directly or indirectly (after some number of bounces, or after passing through materials) make it to the idealized camera.

But the ones that don't make it to the idealized camera - how can they still affect the frame? Or (equivalently), when is it the case that you can trace it forward to the camera but not backward from the camera?

Why do you need bidirectionality? Where does the symmetry break down?


When you do photon tracing, you begin with beams of known intensity, wavelength, and direction, because you know the light source. Everything that hits the sensor combines to give the image.

When you do ray tracing, you don't know up front whether 10 million photons would have been captured; you only know how many rays you're going to send, based on the resolution of the image. This also means there are issues with, say, light coming through narrow apertures, because you limit yourself to a finite number of rays.

If you use photon tracing and a light source sends 10 million photons of which 70,000 are absorbed by the sensor, that's 70,000 data points defining the image, which might be plotted on a 200x200 canvas. If you use ray tracing with a 200x200 canvas you get the same image size but only 40,000 samples. Even if you expand the canvas, there may be photons that can reach the camera but whose paths the ray tracer can't find.

For instance, say some photons pass through a fine aperture and hit the camera between pixel x-coordinates 10.0003 and 10.0005.

This will illuminate the camera in a photon-tracing simulation. The ray tracer will not capture those photons unless its resolution were 10,000 times higher: it traces back from x = 10, then 11, and can't try every intermediate coordinate. The photon tracer accounts for those cases naturally.
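The sampling argument above can be made concrete with a toy sketch (everything here is invented purely to illustrate the asymmetry, assuming a 1D row of pixels):

```python
def backward_tracer_sees(feature_lo, feature_hi, width):
    """A backward (camera-first) tracer sends one ray per pixel coordinate;
    it 'sees' the feature only if a sampled x lands inside [lo, hi]."""
    return any(feature_lo <= x <= feature_hi for x in range(width))

def forward_tracer_sees(photon_x, width):
    """A forward (light-first) tracer bins whatever photon arrives into the
    pixel containing it; no photon that reaches the sensor is lost."""
    return 0 <= int(photon_x) < width
```

With the numbers from the comment, `backward_tracer_sees(10.0003, 10.0005, 200)` is False — the samples at x = 10, 11, ... all miss the 0.0002-pixel-wide feature — while `forward_tracer_sees(10.0004, 200)` is True, because the arriving photon simply lands in pixel 10.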


A simple case: the sun illuminating a large white room through a small window. The light creeps all over the room, not just the directly illuminated spot, which is all you'd get from a simple camera-trace.

The issue at hand is: we don't experience it this way, but light behaves like a system in thermodynamic equilibrium. You don't see the transients, because the equilibrium takes nanoseconds to converge, but when you turn on a light your whole room exchanges photons until it settles at a rate where every surface receives as much as it irradiates, plus what's lost as heat, and the total heat equals the energy coming in through the light sources. Finding those values means solving the so-called light transport equation — and that clearly requires probing geometric information from all over the scene.

In the sunlit room case, all that light is (a) heating the room and (b) going back outside. Now let me build a physical model from your intuition: this equilibrium depends on the absorption/reflection rate of each surface. If you have an EE background, the room is essentially a resonator: the walls reflect some light and absorb some, until the room is absorbing as much as comes in. The less absorbing the walls, the higher the Q factor of the resonator: light bounces many times and builds intensity before it gets absorbed. This is why dark-walled rooms are so dramatically, well, darker than white rooms. And the Q factor is not bounded: if your walls were perfect mirrors, the intensity would build up toward +inf over time.
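The resonator intuition reduces to a geometric series: if each bounce returns a fraction r of the light (the wall reflectance), the equilibrium light level is 1 + r + r² + ... = 1/(1 - r) times the incoming flux, which diverges as r → 1 (perfect mirrors). A quick sketch, with all numbers made up:

```python
def steady_state_gain(reflectance, bounces=10_000):
    """Sum 1 + r + r^2 + ... over `bounces` terms: the factor by which the
    room's equilibrium light level exceeds the incoming flux."""
    total, term = 0.0, 1.0
    for _ in range(bounces):
        total += term
        term *= reflectance
    return total
```

A dark room (r = 0.2) gains only 1.25x over the incoming light; a white room (r = 0.9) gains 10x; as r approaches 1 the sum grows without bound — the unbounded Q factor above.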


With bi-directional path tracing it works something like this:

  * trace a photon from the light source with x bounces
  * trace a ray from the camera with x bounces
Now the end points of those traces are not connected, but you can calculate the probability that they can 'see' each other, and use that for the light transport from the light source (photon) to the camera (pixel).

Edit: maybe this image will make it clear: http://lebedev.as/web_images/historyGlobal/15.png

The 'deterministic step' connects the radiance from the light to the pixel.
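A toy version of that connection step (everything here is invented for illustration, with a 2D random walk standing in for real scattering and a caller-supplied visibility test standing in for the shadow ray):

```python
import random

def trace_subpath(start, bounces, rng):
    """Random walk standing in for a photon subpath (from the light) or a
    ray subpath (from the camera) scattering through the scene."""
    path = [start]
    x, y = start
    for _ in range(bounces):
        x += rng.uniform(-1.0, 1.0)
        y += rng.uniform(-1.0, 1.0)
        path.append((x, y))
    return path

def connect(light_end, eye_end, blocked):
    """The deterministic step: join the two subpath endpoints with a shadow
    ray, contributing light only if nothing occludes the segment."""
    if blocked(light_end, eye_end):
        return 0.0
    dist2 = (light_end[0] - eye_end[0]) ** 2 + (light_end[1] - eye_end[1]) ** 2
    return 1.0 / (1.0 + dist2)  # illustrative inverse-square-style falloff
```

A real bidirectional path tracer would weight this contribution by the BRDFs and sampling probabilities at both endpoints; the sketch only shows the shape of the algorithm: two independent subpaths plus one deterministic connection.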


Ray tracing is conceptually the same as following, in reverse, the path an idealised 'photon' takes -- that photon being the classical 'ray of light' idea.

Photon tracing is quite different: instead of starting from the pixel, you start from the light source and accumulate results on the sensor.

As such, I consider the original comment not only accurate but insightful (if ambiguous).


A nice and thorough book about raytracing-like techniques (including photon tracing), and their capabilities is "Advanced Global Illumination" (http://sites.edm.uhasselt.be/agibook). It is very interesting.


Very nice indeed! I especially like that the source is so well annotated, which makes me wonder: why bother minifying it at all instead of simply using the unminified source on JSFiddle?


From the jsfiddle source:

  // Non-minified source in 35 lines for HN, because it's the latest fad :)
https://www.hnsearch.com/search#request/submissions&q=%2235+...


It was 30 lines, but artistic license accepted I guess.

https://www.hnsearch.com/search#request/submissions&q=%2230+...


The author writes in the link:

> Note that the goal here was to make the source code as small as possible, not clarity; so even the original code before minification is a horrible mess. This doesn’t do justice to the elegance and simplicity of proper raytracer code; I’m writing a book to right this wrong.


The live demo on his site works on the ipad. Amazing.


It's just done with a 2D canvas element, which has been supported in most browsers for a very long time.


Why?


Sigh. Kids these days...

Raytracing is not exactly a lightweight calculation. My first raytracer was TurboSilver 3D on the Amiga, in 1990 (actually one of the first commercial raytracers ever produced). "Photorealistic" images at 7.5 MHz. For an image like this, you'd set up the scene, hit the render button, and grab a quick lunch. When you came back, the scene would be about 2/3rds rendered, and you'd watch it for a while, thrilled by every new pixel that pushed itself onto the screen. Then you'd go get coffee and hope the scene was done when you got back.

Now, the same scene (say, 320x200px) renders in an eyeblink on my phone, driven by a high-level universal scripting language I can tweak at will. This is beyond amazing. It's fucking transcendent.

(Oy vey, I feel old. Where'd I put my dentures? BTW: get off my lawn, etc.)


"Old people are the greatest. They're full of knowledge and wisdom." Is it ok to quote Spongebob here? In this case, I say yes.



