It turns out that yes, better forecasting is a large part of what motivated the launch of this instrument.
High-spectral-resolution IR spectra from GEO allow estimation of vertically resolved temperature and water vapor over large spatial areas at high temporal cadence; those profiles are then assimilated into numerical weather prediction (NWP) models, improving both forecasts and nowcasts.
These "spectra-to-get-temperature-and-water" measurements were pioneered by other instruments in LEO (e.g., NASA's AIRS, https://airs.jpl.nasa.gov/mission/overview/), but LEO does not provide enough coverage to help forecasts.
To understand the benefits of GEO IR spectra, we run OSSEs (Observing System Simulation Experiments) to quantify how much improvement you actually get. You take a "Nature Run", simulate observations from it (for both existing and proposed instruments), assimilate them, and see whether the forecast improves. (Since the Nature Run, which you made, provides ground truth, you can judge whether there really was an improvement.)
Thankfully, many people have already done this. See: https://www.ssec.wisc.edu/geo-ir-sounder/osse/
In particular, looking at the figure there from Li et al., compare panels:
* (d) -- (Nature Run) minus (forecast from existing data) ("CNTRL")
* (e) -- (Nature Run) minus (forecast from existing data + GEO IR)
both of which show differences between the Nature Run (NR) and the forecast.
The RMSE improvement (for a CONUS storm case) is given as 0.55 K (existing observations) versus 0.43 K (with GEO IR). So that's a 0.12 K improvement, or about 0.22 °F, since a temperature difference converts with the 9/5 factor alone. Also, and probably more interestingly, the spatial pattern of the error changes.
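For the bookkeeping, here's a minimal sketch in Python. The random fields are fabricated stand-ins for real model output (I reuse the 0.55 K and 0.43 K error levels from the paper as toy noise), and real OSSEs of course run full data-assimilation systems; none of this reflects the actual Li et al. setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated 2-D temperature fields (K) on a model grid. In a real OSSE,
# these come from the Nature Run and from forecasts initialized with and
# without the simulated GEO IR observations.
nature_run      = 288.0 + rng.normal(0.0, 2.0, size=(50, 50))
forecast_cntrl  = nature_run + rng.normal(0.0, 0.55, size=(50, 50))  # existing obs only
forecast_geo_ir = nature_run + rng.normal(0.0, 0.43, size=(50, 50))  # existing + GEO IR

def rmse(forecast, truth):
    """Root-mean-square error of a forecast field against the Nature Run."""
    return np.sqrt(np.mean((forecast - truth) ** 2))

err_cntrl = rmse(forecast_cntrl, nature_run)
err_geoir = rmse(forecast_geo_ir, nature_run)
delta_k = err_cntrl - err_geoir

# A temperature *difference* converts to Fahrenheit with the 9/5 scale
# factor only; the +32 offset applies to absolute temperatures, not deltas.
delta_f = delta_k * 9.0 / 5.0

print(f"RMSE: {err_cntrl:.2f} K (control) vs {err_geoir:.2f} K (with GEO IR)")
print(f"improvement: {delta_k:.2f} K = {delta_f:.2f} F degrees")
```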
There are a lot of OSSEs reported on that page for these sounders. NASA is also conducting OSSE studies for a more ambitious multi-spacecraft observing system (https://science.nasa.gov/earth-science/decadal-surveys/decad...).
Studies like these are one of the main ways we decide how to build the next instruments -- what provides the most benefit relative to cost, which system parameters are worth pushing to improve, and which are already good enough.
They also note that the hyperspectral IR measurement itself is new -- 1700 IR channels from a telescope in GEO seems new to me, but I'm not sure what already exists in this space.
They say they hope to retrieve trace gases at that global scale (seemingly at a 30-minute cadence), which I think would be new. They also seem to say that this spectral resolution would let them retrieve temperature and humidity as a function of height -- not just surface temperature and column-integrated water vapor ("humidity").
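For intuition on why thousands of channels buy you vertical resolution: to first order, each channel's radiance is a weighted average of the temperature profile, with a weighting function peaking at a channel-dependent altitude, so enough channels let you invert for the whole profile. Below is a deliberately toy, linearized sketch in Python -- the Gaussian weighting functions, noise level, prior, and ridge regularization are all invented for illustration; real sounder retrievals use radiative-transfer forward models and optimal-estimation inversions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_levels, n_channels = 40, 1700
z = np.linspace(0.0, 20.0, n_levels)  # height grid (km)

# Toy weighting functions: each channel senses a Gaussian-weighted slice
# of the atmosphere, centered at a channel-dependent altitude.
peaks = rng.uniform(0.0, 20.0, size=n_channels)
K = np.exp(-0.5 * ((z[None, :] - peaks[:, None]) / 2.0) ** 2)
K /= K.sum(axis=1, keepdims=True)  # each row averages the profile

x_true = 290.0 - 6.5 * z                                # lapse-rate profile (K)
y = K @ x_true + rng.normal(0.0, 0.2, size=n_channels)  # noisy "radiances"

# Ridge-regularized inversion around a crude first guess, standing in for
# the optimal-estimation retrievals real systems use. Plain least squares
# would amplify noise badly, since the inversion is ill-posed.
x_a = np.full(n_levels, 260.0)   # prior / first-guess profile (K)
lam = 1.0                        # regularization strength (invented)
A = K.T @ K + lam * np.eye(n_levels)
x_hat = x_a + np.linalg.solve(A, K.T @ (y - K @ x_a))

print("mean |retrieval error| (K):", np.abs(x_hat - x_true).mean())
```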
Aha, here's a nice link (https://www.ssec.wisc.edu/geo-ir-sounder/) on exactly this question, pointing out the NASA IR sounders that have existed for many years (e.g., AIRS). Those instruments do get vertically resolved atmospheric information, but from LEO rather than GEO, so their coverage is different -- which makes them less useful for NWP.
Yeah, I realized the parallel while I was writing my comment! I guess what I'm thinking is that a much better experience is available, and there is no in-principle reason why Overleaf and Prism have to be so much worse, especially in the age of vibe-coding. Prism feels like the result of two days of Claude Code, when they should have invested at least five.
I was an area chair on the NeurIPS program committee in 1997. I just looked, and it seems that we had 1280 submissions. At that time, we were ultimately capped by the size of the book MIT Press was willing to put out -- 150 eight-page articles. Back in 1997, we were all pretty sure we were on to something big.
I'm sure people made mistakes on their bibliographies at that time as well!
And did we all really dig up and read Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller (1953)?
I cited Watson and Crick '53 in my PhD thesis and I did go dig it up and read it.
I had to go to the basement of the library, use some sort of weird rotating knob to move a heavy stack of journals over, find the large bound volume of that year's journals, and navigate to the paper. When I got to the page, it had been cut out by a previous reader and replaced with a photocopied version.
(I also invested a HUGE amount of my time in the bibliography of every paper I've written as first author, curating a database and writing scripts to format it for the various journal styles. This involved multiple independent checks from several sources, repeated several times.)
Totally! If you haven't burrowed in the stacks as a grad student, you missed out.
The real challenges there aren't the "biggies" above, though; they're the papers in obscure journals you have to get copies of through inter-library agreements. My PhD was in applied probability, and I was always happy if there were enough equations that I could parse out the nearby French- or Russian-language explanation.
> And did we all really dig up and read Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller (1953)?
If you didn't, you are lying. Full stop.
If you cite something, yes, I expect that you, at least, went back and read the original citation.
The whole damn point of a citation is to provide a link for the reader. If you didn't find it worth the minimal amount of time to go read, then why would your reader? And why did you inflict it on them?
I meant this more as a rueful acknowledgment of an academic truism -- not all citations are read by those citing. But I have touched a nerve, so let me explain at least part of the nuance I see here.
In mathematics/applied math, consider papers cited as establishing a certain result when that was not quite what was shown -- or when there is, in effect, no earthly way to verify that it was.
Or even: the community agrees it was shown there, but perhaps has lost intimate contact with the details — I’m thinking about things like Laplace’s CLT (published in French), or the original form of the Glivenko-Cantelli theorem (published in Italian). These citations happen a lot, and we should not pretend otherwise.
Here’s the example that crystallized that for me. “VC dimension” is a much-cited combinatorial concept/lemma. It’s typical for a very hard paper of Saharon Shelah (https://projecteuclid.org/journalArticle/Download?urlId=pjm%...) to be cited, along with an easier paper of Norbert Sauer. There are currently 800 citations of Shelah’s paper.
I read a monograph by noted mathematician David Pollard covering this work. Pollard, no stranger to doing the hard work, wrote (probably in an endnote) that Shelah’s paper was often cited, but he could not verify that it established the result at all. I was charmed by the candor.
This was the first acknowledgement I had seen that something was fishy with all those citations.
By this time, I had probably seen Shelah’s paper cited 50 times. Let’s just say that there is no way all 50 of those citing authors (now grown to 800) were working their way through a dense paper on transfinite cardinals to verify this had anything to do with VC dimension.
Of course, people were wanting to give credit. So their intentions were perhaps generous. But in no meaningful sense had they “read” this paper.
So I guess the short answer to your question is: citations serve more uses than telling readers to literally read the cited work, and by extension should not always be taken to mean that the cited work was indeed read.
My son went to LA-area and LAUSD schools, and the echo of that same commitment from those years in California was still faintly detectable in the 2010s, highly attenuated by Prop 13, as you mention.
The southern end of the Central Valley (the San Joaquin region; the whole Central Valley is outlined in red) is particularly hard-hit by groundwater depletion. Some of that storage does not come back, because the ground compacts once the groundwater is withdrawn.
Thanks for contributing these insights. Having worked with hydrologists for 15 years or so, I can say: water is complicated, and people who claim there are simple solutions generally do not know the domain.
A moment's reflection should make this clear. It's such a fundamental resource, touching everything we do. We just tend to take it for granted.
Yeah, and with California's typical topography (relatively young mountains), there's a lot of sediment at the ready that can fill dams and render them worse than useless -- i.e., they cost money, lose capacity fast, and alter rivers and the coast.
> Almost immediately after construction, the dam began silting up. The dam traps about 30% of the total sediment in the Ventura River system, depriving ocean beaches of replenishing sediment. Initially, engineers had estimated it would take 39 years for the reservoir to fill with silt, but within a few years it was clear that the siltation rate was much faster than anticipated.
There are similar sites all over the state. If you happen to live in the LA area, the Devil's Gate Dam above Pasadena is another such site (though originally built for flood control, not storage).