When you're dealing with large populations (here, the study includes 230,065 students, a very large number), even small shifts due to some treatment can be significant. It is very hard to design top-down policy interventions that shift the mean of a population in meaningful ways: if this treatment effect (banning phones) is real, 1.1 points represents a very big policy win that can easily be applied elsewhere. The devil is in the details, however: they exclude some recent data because of the pandemic, but baseline off of 2022-2023, which was still in the throes of the pandemic. The data they show looks to have around a 0.5-1 sigma variation in percentile from 2022-2024, so the shift from the baseline of around 1 to 4 definitely looks significant, but it will be interesting to see if it sticks over time.
There is a way to model this type of situation for watertight dielectrics with interface tracking: you assign each material a priority value, and a transition between materials occurs at a boundary only if the material being entered has a higher priority than your current material. Yining Karl Li has a great article about it that inspired me to add the feature to my renderer (rayrender.net).
The downside to priority tracking (and possibly why PBRT does not include it) is that it introduces a lot of overhead to ray traversal, since each ray needs to carry a priority list. Modern raytracers use packets of rays for GPU/SIMD operations, so minimizing the ray size is extremely important to maximize throughput and minimize cache misses.
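To make the per-ray state concrete, here is a minimal sketch in R of priority-based nested dielectric resolution. All names (`true_medium`, `cross_boundary`, the material lists) are illustrative, not rayrender's actual internals; the point is that every ray must drag this interior list along with it.

```r
# Each ray carries a list of the materials it is currently inside.
# The "true" medium is the highest-priority material in that list.
true_medium <- function(interior) {
  if (length(interior) == 0) return(NULL)  # ray is in air/vacuum
  priorities <- vapply(interior, function(m) m$priority, numeric(1))
  interior[[which.max(priorities)]]
}

# On hitting a boundary of material `hit`, decide whether this is a real
# interface (one that should refract) or an overlap boundary to skip.
cross_boundary <- function(interior, hit, entering) {
  current <- true_medium(interior)
  real <- is.null(current) || hit$priority >= current$priority
  interior <- if (entering) c(interior, list(hit))
              else Filter(function(m) m$name != hit$name, interior)
  list(interior = interior, real_interface = real)
}

glass <- list(name = "glass", priority = 2)
water <- list(name = "water", priority = 1)

s1 <- cross_boundary(list(), glass, entering = TRUE)  # air -> glass: real interface
s2 <- cross_boundary(s1$interior, water, TRUE)        # glass outranks water: skipped
```

Even in this toy form, the ray's state grows with nesting depth, which is exactly the payload-size problem for packetized GPU traversal.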
Minimizing the ray payload for GPU was definitely part of why we didn't add that. (Though it does pain me sometimes that we don't have it in there.)
And, PBR being a textbook, we do save some things for exercises and I believe that is one of them; I think it's a nice project.
A final reason is book length: we generally don't add features that aren't described in the book, and we're about at the page limit, length-wise. So to add this, we'd have to cut something else...
Wow, the problem is more involved than I (a simple user) realized ...
Maybe I have to broaden my search for a raytracer. What would be my best bet for correctly simulating multi-material lenses (so with physical correctness), in Linux (open source), preferably with GPU support?
(By the way, as a user I'd be happy to give up even a factor of 10 of performance if the resulting rendering was 100% physically accurate)
The developing S7 object system (https://github.com/RConsortium/S7) is looking fairly promising in that it combines many of the nice properties of S3 and S4 (validation, multiple dispatch, sane constructors) while still being fairly simple and straightforward to use.
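A minimal sketch of what that looks like in practice (the `Range` class and `inside` generic here are just illustrative):

```r
library(S7)

# A validated class: the constructor is generated automatically,
# and the validator runs on construction and on property assignment.
Range <- new_class("Range",
  properties = list(
    start = class_numeric,
    end   = class_numeric
  ),
  validator = function(self) {
    if (self@end < self@start) {
      "@end must be greater than or equal to @start"
    }
  }
)

# Generics dispatch on S7 classes much like S4, with less ceremony.
inside <- new_generic("inside", "x")
method(inside, Range) <- function(x, value) {
  value >= x@start & value <= x@end
}

r <- Range(start = 1, end = 10)  # Range(start = 20, end = 5) would error
inside(r, 5)
```

Validation, formal properties, and generics in a handful of lines, without S4's boilerplate.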
Excellent news. Quite promising, but R's power is that it is natively functional. Even binary operators are functions: `+`(x, y) works the same as x + y.
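For instance, because operators are ordinary functions, they can be called directly or passed around like any other value:

```r
`+`(2, 3)                   # 5, identical to 2 + 3
Reduce(`+`, 1:5)            # 15, folds the + function over 1:5
do.call(`+`, list(10, 32))  # 42
```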
It makes it look like the presentation was rushed or put together at the last minute. Really bad to see this as the first plot in the whole presentation. Also, I would have loved to see comparisons with Opus 4.1.
Edit: Opus 4.1 scores 74.5% (https://www.anthropic.com/news/claude-opus-4-1). This makes it sound like Anthropic released the upgrade to still be the leader on this important benchmark.
After reading around, it seems like they probably forgot to update/swap the slides before the presentation. The graphs on their website were correct at launch, but the ones used in the presentation were probably older versions they had forgotten to fix.
For 28 Years Later, note that while the iPhone sensor did in fact ultimately collect the photons for the movie, they attached substantial professional-grade glass to the front to augment the phone camera.
My understanding is that all that extra gear is mainly to enable more ergonomic manual control for things like focus. The matte box and ND filter are probably the biggest boosts to image and motion quality, and there are affordable ways to get those on your phone.
I would assume that most languages do that, or alternatively have a compiler that is smart enough to ensure there is no actual overhead in the compiled code.
MITRE is a Federally Funded Research and Development Center (FFRDC), which is a distinct type of federal contractor with strict conflict of interest regulations. They are owned by the federal government, but operated by contractors and are specifically structured and regulated to minimize conflicts of interest, so are distinct from "private industry" in many regards.
Yes, you are correct, I should have typed "runs". But the point is that MITRE runs the U.S. National Cybersecurity FFRDC that maintains the CVE system, and FFRDCs are deliberately structured to minimize potential conflicts of interest (GP comment) and are definitely distinct from private industry (parent comment).
Funny you mention namespacing: R 4.5.0 was just released today with the new `use()` function, which allows you to import just what you need instead of clobbering your global namespace, equivalent to Python's `from x import y` syntax.
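A quick sketch of how that looks, assuming the R >= 4.5.0 signature `use(package, include.only)`:

```r
# Attach only the named exports from tools, not the whole namespace.
use("tools", "file_ext")

file_ext("report.pdf")  # "pdf"
# Other tools exports (e.g. toTitleCase) are not attached to the search path.
```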