I am not an astrophysicist, or a photographer for that matter... But wouldn't it be like an overexposed photo -- all red-and-white, because it will catch so much light from the distant galaxies and such?
To an extent, yes, which is why deep-field images tend to be aimed at "empty" sections of sky.
There are a few factors involved AFAIU:
- You don't want to be shooting through the Milky Way's own primary mass as nearby dust and gas will obstruct more distant objects.
- "Nearby" objects --- stars within the Milky Way, reasonably nearby galaxies --- might also tend to blow out the image. Though for the most part these end up being point sources. It's artefacts such as spikes which give the most obstruction.
In the case of the JWST, the fact that it's looking into the infrared means that it can see objects which are literally invisible to Hubble, no matter how long the exposure.
The question of why space is black (or alternatively: why it's not uniformly bright) is known as Olbers' paradox or "the dark-sky paradox", and dates back to the time of Kepler. Effectively: the universe has a finite age, so there is not an infinite number of stars (or other light sources) whose light has had time to reach us.
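To spell out the classic argument (a sketch assuming a static, eternal universe with uniform star density n and luminosity L per star; the symbols are mine): every spherical shell around us contributes the same flux, so infinitely many shells would add up to an arbitrarily bright sky.

```latex
\[
  dF \;=\; \underbrace{n \, 4\pi r^{2} \, dr}_{\text{stars in the shell}}
  \times \underbrace{\frac{L}{4\pi r^{2}}}_{\text{flux per star}}
  \;=\; n L \, dr,
  \qquad
  F \;=\; \int_{0}^{\infty} n L \, dr \;\to\; \infty
\]
```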
There is a uniform illumination of the Universe that can be detected, as microwave radiation, known as the cosmic microwave background radiation. That lies well outside JWST's sensor range (0.6–28.3 μm), however, with a peak wavelength of about 1 mm.
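Quick sanity check on that peak, using Wien's displacement law and the standard CMB temperature of about 2.7 K:

```latex
\[
  \lambda_{\mathrm{peak}} \;=\; \frac{b}{T}
  \;\approx\; \frac{2.898\ \mathrm{mm\,K}}{2.725\ \mathrm{K}}
  \;\approx\; 1.06\ \mathrm{mm}
  \;\gg\; 28.3\ \mu\mathrm{m}
\]
```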
Regarding Olbers' paradox, would an infinite number of stars really imply the sky would be uniformly bright? Why couldn't (say) the following alternative explanations work?
(a) The universe is infinite, but has been (and will always be) stretching fast enough that light from galaxies beyond a certain distance can never reach us.
(b) The universe is infinite and not even stretching, but there is enough (dark?) matter in it to eventually block any ray of light coming from infinitely far away.
I think (a) could work, but for (b): if matter were absorbing light for an infinite span of time, it would eventually heat up and glow itself. And "dark" matter does not directly interact with electromagnetism, a.k.a. light, so it wouldn't block it in the first place.
When people talk about how much time it takes for an astronomical image to be captured, they mean the total time involved over many, many, many individual shots. All of those images are then "stacked" with fancy algorithms to generate the final image.
This is how these images of very dim, distant galaxies are created without foreground stars blowing out the whole image.
This image-stacking technology has crept into smartphone cameras in the last handful of years, most prominently as "night mode".
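For anyone curious what "stacking" means mechanically, here's a minimal sketch in Python (toy numbers only, not the real Hubble/JWST pipeline, which also does alignment, calibration, drizzling, etc.): a per-pixel median over many registered exposures keeps the steady signal and throws away one-frame events such as cosmic-ray hits.

```python
import numpy as np

def stack_frames(frames):
    """Combine many aligned exposures of the same field.

    frames: iterable of 2-D arrays (same shape), already registered.
    Returns the per-pixel median, which keeps the constant sky signal
    and rejects transient outliers (cosmic rays, satellite trails).
    """
    cube = np.stack(list(frames), axis=0)   # shape: (n_frames, H, W)
    return np.median(cube, axis=0)

# Toy usage: a faint constant source plus per-frame noise and one random
# "cosmic ray" spike per frame; the median recovers the source cleanly.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[32, 32] = 0.5                              # faint point source
frames = []
for _ in range(100):
    f = truth + rng.normal(0, 1.0, truth.shape)  # read/photon noise
    y, x = rng.integers(0, 64, 2)
    f[y, x] += 50.0                              # cosmic-ray hit
    frames.append(f)

stacked = stack_frames(frames)
print(stacked[32, 32], stacked.std())   # the source now stands out of the noise
```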
From researching a couple of earlier comments here: a chief reason the Hubble Deep Field was composed of multiple images was to eliminate individual cosmic ray events which otherwise fog / degrade such images.
If you're interested in distant, constant objects, then nearby or transient signals can be safely ignored and removed.
Yes, one has to be careful about guessing what they probably did, because the processing was actually pretty complex, and astrophysics image processing is in general quite advanced. Saturation of the digital numbers coming from the detector is one consideration, but only one among many -- there is a lot going on.
The Wikipedia article has a lot of details, including masking the CRs (cosmic rays), removing some scattered light from Earth, the use of multiple color bands, and super-resolution using slightly different pointing from frame to frame. All this processing is done at the single-image level, and motivates dividing up the exposure time into chunks.
The ~340 exposures were taken over about 10 days and spread over 4 color bands. The typical integration time for one exposure appears to have been about 30 minutes.
There is also a tradeoff where every read from a CCD introduces a fixed amount of noise (called read noise), so there is a cost to making extremely short exposures. As a rule of thumb, most individual exposures on a large telescope are ~20 minutes or so for an image where the plan is to stack many exposures. But sometimes different fields have unique constraints, and obviously JWST is a different beast from ground-based telescopes.
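A back-of-the-envelope sketch of that tradeoff (the rates and read noise below are made-up illustrative numbers, not real instrument values): for a fixed total integration time, every extra read adds its own dose of read noise, so very short sub-exposures start to hurt the SNR on a faint source.

```python
import numpy as np

# Illustrative numbers only:
source_rate = 0.05      # e-/s from the faint target in the aperture
sky_rate = 0.2          # e-/s of background in the same aperture
read_noise = 5.0        # e- RMS added by each detector read
total_time = 10 * 3600  # 10 hours of total integration, in seconds

for n_exposures in (1, 10, 30, 120, 1200):
    signal = source_rate * total_time
    # Poisson noise from source + sky, plus read noise once per read:
    noise = np.sqrt(source_rate * total_time + sky_rate * total_time
                    + n_exposures * read_noise**2)
    print(f"{n_exposures:5d} exposures: SNR = {signal / noise:.1f}")
```

With these made-up numbers, splitting the 10 hours into 30 sub-exposures (20 minutes each) costs almost nothing in SNR, while 1200 very short reads roughly halves it, which lines up with the ~20-minute rule of thumb.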
Yes and no. It's not a Kodak moment. The "camera" there basically squints hard and blinks often, and then fancy algorithms reconstruct a full image based on that.
The basic rule (in photography) is that you can counterbalance exposure time with the opening of the lens; the less light gets in, the longer it takes to get a well-exposed picture (basically).
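In symbols, that's the standard reciprocity relation (t = shutter time, N = f-number; nothing specific to telescopes here):

```latex
\[
  H \;\propto\; \frac{t}{N^{2}}
  \qquad\Rightarrow\qquad
  N \to \sqrt{2}\,N \ \text{(one stop less light)}
  \;\Rightarrow\; t \to 2t \ \text{for the same exposure } H
\]
```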
But since it's all digital, you can also counterbalance higher and lower exposure zones to grab the details that would have been over- or under-exposed (HDR)... unsure of how these basic photography rules would work out in this case... could anyone elaborate?
Dabbler in photography, general familiarity with astronomy:
The goals are to maximise light capture (the objects being imaged are dim and distant), whilst minimising any degradation from other factors. JWST doesn't have to deal with skyglow, daylight, or satellite interference. It may still be seeing other solar system bodies (depending on where it's aiming), but mostly it would be subject to cosmic-ray interference, probably impacting the light sensor itself.
Since those are essentially instantaneous and randomly distributed in time, they can be removed by "stacking" images and filtering out transient events (taking an average or median brightness AFAIU).
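One common recipe for that filtering is sigma clipping; a rough sketch (the threshold and array shapes are illustrative, not any particular pipeline): per pixel, discard samples that sit far from the median across frames, then average what's left.

```python
import numpy as np

def sigma_clipped_stack(frames, k=3.0):
    """Average aligned frames per pixel, ignoring transient outliers.

    frames: array of shape (n_frames, H, W), already registered.
    k: clip threshold in robust standard deviations.
    """
    cube = np.asarray(frames, dtype=float)
    med = np.median(cube, axis=0)
    # Robust spread estimate from the median absolute deviation:
    mad = np.median(np.abs(cube - med), axis=0)
    sigma = 1.4826 * mad + 1e-12
    outlier = np.abs(cube - med) > k * sigma        # True = transient hit
    clipped = np.ma.masked_array(cube, mask=outlier)
    return clipped.mean(axis=0).filled(fill_value=np.nan)
```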
I'm not sure to what extent HDR is used in astronomical imaging.
There is a lot of post-processing and palette selection to apply colours to what are just intensity maps at a given frequency.
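To make the palette step concrete, here's a toy false-colour sketch (the channel assignments and the asinh stretch are my own illustrative choices, not the processing used for any released image): single-filter intensity maps get assigned to RGB channels, typically with longer wavelengths mapped to redder channels.

```python
import numpy as np

def false_colour(bands):
    """Combine single-filter intensity maps into one RGB image.

    bands: list of three 2-D arrays, ordered long -> short wavelength,
    assigned to the R, G, B channels respectively. The colours are a
    presentation choice, not what the eye would see.
    """
    channels = []
    for band in bands:
        b = np.arcsinh(band - band.min())        # compress bright cores
        channels.append(b / (b.max() + 1e-12))   # normalise to [0, 1]
    return np.dstack(channels)                   # shape: (H, W, 3)
```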
No, that would be a single exposure (a.k.a. a single integration). There are so many reasons not to do that over many days. You take multiple exposures, stack them, and mask bright things near saturation during coaddition.
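A minimal sketch of that masking-plus-coaddition step (the saturation level and margin are placeholder numbers, not real detector values): flag pixels near full well in each exposure and exclude them from that frame's contribution to the coadd.

```python
import numpy as np

def coadd_with_saturation_mask(exposures, saturation=65000, margin=0.95):
    """Coadd aligned exposures, ignoring pixels near saturation.

    exposures: array of shape (n_frames, H, W) in detector counts.
    Pixels above margin * saturation in a given frame are masked out of
    that frame instead of polluting the average.
    """
    cube = np.asarray(exposures, dtype=float)
    near_saturated = cube >= margin * saturation
    coadd = np.ma.masked_array(cube, mask=near_saturated).mean(axis=0)
    return coadd.filled(fill_value=np.nan)  # NaN where every frame clipped
```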