I don't know what the right word for this is (ironic? serendipitous?), but my astronomy friend was doing this same thing just this past week with Blender:
"Using blender, I've taken my first steps in creating a black hole. Since blender with its ray tracing renderer doesn't allow light to bend and curve like it would through spacetime deformation, I had to incorporate refraction and a lot of lens to emulate the behavior.
The rays from the renderer that I'm using only know how to move in straight lines, however it can handle refraction very well. So long story short, the curvature of spacetime is replaced with a bunch of refracting elements that essentially fill the volume of a sphere. The crucial part is the index of refraction changes, inversely so, within the sphere. Result is a close enough approximation of the black hole."
Results looked similar to some of those in the article.
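For anyone who wants to play with the idea outside Blender, here's a minimal Python sketch of the same gradient-index trick. The 1/r index profile and all the constants are my guesses, not whatever my friend dialed in; it just integrates the eikonal ray equation d/ds(n dr/ds) = ∇n, which is what a continuous stack of refracting shells approximates:

```python
import numpy as np

def index(r, n_inf=1.0, k=1.5, r_min=0.3):
    # Hypothetical profile: the index rises inversely with radius
    # (clamped near the center so it doesn't blow up).
    return n_inf + k / max(r, r_min)

def grad_index(pos, eps=1e-5):
    # Numerical gradient of the index field n(|pos|).
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        g[i] = (index(np.linalg.norm(pos + d)) -
                index(np.linalg.norm(pos - d))) / (2 * eps)
    return g

def trace(pos, direction, ds=0.01, steps=20000, r_sphere=5.0):
    """First-order integration of d/ds (n * dr/ds) = grad(n)."""
    d = direction / np.linalg.norm(direction)
    for _ in range(steps):
        r = np.linalg.norm(pos)
        if r > r_sphere and np.dot(pos, d) > 0:
            break  # the ray has left the sphere, heading outward
        t = index(r) * d + ds * grad_index(pos)  # update n * dr/ds
        d = t / np.linalg.norm(t)                # re-normalize the direction
        pos = pos + ds * d
    return pos, d

# A ray aimed to pass beside the center gets pulled around it:
p, d = trace(np.array([-5.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0]))
print(p, d)
```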
If you were able to physically manufacture an object whose index of refraction varies inversely with radius like this, would you observe light bend as it does around a black hole? Or do impurities in the substance / the difficulty of manufacturing with different IORs make this impossible in practice?
Our atmosphere is such an object. Air density increases closer to the ground, and so does the index of refraction, allowing us to see light that should have missed us!
If I understood the article correctly, the author is doing ray marching, which also "stops the ray" at given distances and then reorients it. Very similar to your technique, although here the length of each ray leg is calculated differently. Something like the sketch below.
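For concreteness, here's my own toy version in Python, in units where the Schwarzschild radius is 1. The bending term comes from the photon Binet equation d²u/dφ² + u = (3/2)u², a common approximation in hobby tracers; it may not be the exact update the article uses, and the fixed step size is deliberately naive:

```python
import numpy as np

def march_ray(pos, vel, dt=0.01, max_steps=20000, r_escape=60.0):
    """Toy ray marcher: advance the ray a short straight leg, then
    re-orient it with the photon-bending acceleration
    a = -(3/2) * h^2 * r / |r|^5, where h = |r x v| is conserved."""
    h2 = np.dot(np.cross(pos, vel), np.cross(pos, vel))
    for _ in range(max_steps):
        r = np.linalg.norm(pos)
        if r < 1.0:
            return "captured", pos   # fell through the horizon
        if r > r_escape:
            return "escaped", vel    # sample the background sky along vel
        accel = -1.5 * h2 * pos / r**5
        vel = vel + dt * accel       # re-orient the ray
        pos = pos + dt * vel         # march the next leg
    return "lost", pos

# Example: a grazing ray that gets strongly deflected.
status, out = march_ray(np.array([-20.0, 2.0, 0.0]),
                        np.array([1.0, 0.0, 0.0]))
print(status, out)
```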
This is extremely cool; it's the best explanation I've come across yet of the Interstellar visualisation.
And yet frustrating - I have dangling questions. Why does the ISCO have a fuzzy inner edge? What is the fuzzy semicircle of light that appears below the accretion disk, outside the photon sphere, but inside the ISCO, in only the lower half of the image?
I get less than half a second of motion from these images, and I can't make them restart. It would be super cool to get video that runs continuously, or at least loops.
It's possible to see the accretion disk behind the black hole from both sides: above the arch is the top side, below it is the underside. That's the outermost band.
It's also possible to see the top and underside of the front part of the accretion disk a second time, in the light that circles the black hole once.
I think the fuzzy part that joins the photon sphere is the light from the top of the front of the accretion disk, after going once around the black hole. When you move off-axis (upwards), the lower band becomes fuzzier/wider and the other side blends into the photon sphere.
Basically "visualizing black holes with relativistic ray tracing" is my hobby now. I can't think of a better use of compute, honestly. The way it was explained to me: we can simulate a single black hole, all the photons together by gravity alone, on a Lambda workstation. To simulate two black holes in orbit, requires Berkeley Lab's Perlmutter, the biggest supercomputer on the planet. By the time we get to three black holes in orbit, it's so chaotic it doesn't matter if are afforded all the energy in the universe, it would not be enough to calculate predicatble orbits. Cosmological simulations are the benchmark. And the outputs are the building blocks of life itself: dark matter & energy ;)
Which is still incredibly impressive, of course, but I wouldn't say biggest; maybe 'one of the biggest' is fair. Even that's constrained by 'known': some private companies, especially in fossil fuels and tech, have larger systems, either dedicated or allocatable.
You can also do black hole simulations (orbit and merger) on far smaller systems. I worked with a physicist who ran these numerical relativity sims on smaller machines; you simply lose fidelity (it also depends on how you treat the singularity). Our visualizations looked like cartoons compared to these beautiful renderings, though.
For those who are interested in more detail on how this is done in real research, I recently wrote an introductory article on the topic that should be accessible to anyone with a scientific background:
TLDR: When you go beyond simple disk and/or black-hole models, things become extremely complicated. Real black holes have complex and turbulent accretion disks that cannot be studied analytically and require large simulations. Similarly for binary black holes. When you put the two things together, ray tracing and radiation transport become much more involved.
EDIT:
The article is unfortunately behind a paywall (with no benefit to the authors), as publishing with open access would have required a steep fee. Please, do not pay to read! If you have no way to access this document for free, we will be happy to share the PDF.
Awesome. Also, see this great YouTube video of what you would see if you fell into a black hole. It also has a very good explanation of what you are seeing: https://www.youtube.com/watch?v=4rTv9wvvat8
I wonder about the "order of operations" when calculating the Doppler shift, though. We're seeing the back of the accretion disk over the top of the event horizon due to the light bending, yet there is significant Doppler shifting in the color of the accretion disk in that region. Aren't we essentially looking straight down onto the back of the disk, orthogonal to its plane of rotation? And if so, shouldn't we see less Doppler shift there?
My intuition may very well be wrong, but I do like thinking about how light behaves around these things.
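For what it's worth, the shift is set by the angle between the gas velocity and the photon's direction at the emission point, not at the camera: the lensed photons from the far side leave the disk at a shallow angle, partly along the flow, and only bend over the hole later, so a line-of-sight component survives. And even a perfectly transverse photon is redshifted by the 1/γ transverse Doppler effect. A quick special-relativistic check in Python (gravitational redshift deliberately ignored):

```python
import numpy as np

def doppler_factor(beta_vec, photon_dir):
    """Relativistic Doppler factor g = nu_obs / nu_emit for an emitter
    moving with velocity beta_vec (in units of c), for a photon leaving
    the emitter along photon_dir *at the emission point*."""
    n = photon_dir / np.linalg.norm(photon_dir)
    beta = np.linalg.norm(beta_vec)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - np.dot(beta_vec, n)))

# Disk material orbiting at 0.5c, photon initially leaving "upward"
# (perpendicular to the orbital velocity): still redshifted by 1/gamma,
# the transverse Doppler effect, even with no line-of-sight motion.
print(doppler_factor(np.array([0.5, 0.0, 0.0]),
                     np.array([0.0, 0.0, 1.0])))  # ~0.866
```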
Since gravity bends light, I always thought it would be pretty neat if somehow no one had noticed until recently that this operation is completely reversible (i.e. that bending light creates gravity), and within a few years we suddenly had superluminal travel supported only by the technology of optics.
Mirrors and lenses, too! Using a dormant black hole as a telescope could be a really interesting way of getting a good look at some really distant objects. Getting into position to use it as such would be challenging, to put it mildly.
"Using blender, I've taken my first steps in creating a black hole. Since blender with its ray tracing renderer doesn't allow light to bend and curve like it would through spacetime deformation, I had to incorporate refraction and a lot of lens to emulate the behavior.
The rays from the renderer that I'm using only know how to move in straight lines, however it can handle refraction very well. So long story short, the curvature of spacetime is replaced with a bunch of refracting elements that essentially fill the volume of a sphere. The crucial part is the index of refraction changes, inversely so, within the sphere. Result is a close enough approximation of the black hole."
Results looked similar to some of those in the article.