This is nonsense, at least in part because it's mixing two different ideas. The notion that the image "looks exactly the same as how it originally appeared" is only true when one of your eyes is positioned exactly where the camera sensor would have been, which requires viewing from one specific distance from the screen.
Lines in 3D remaining straight in a photo is unrelated and not actually demonstrated by the image. I'm having trouble imagining why this matters - you're trying to find the intersection of two lines in an image without drawing anything?
Another aspect of the solution that makes it rather abstract is it effectively assumes we know nothing about the distribution of the number of days.
Buying once you've paid half the pass price will be optimal if it ends before you buy, very bad (3x optimal) if it ends right after you buy, and slightly better than the solution in the post if it lasts at least twice that long (1.5x optimal vs e/(e-1) ≈ 1.58).
The metric in the post is just the worst of those ratios. Assuming the unproven statement in the post (that the solution which is a constant factor worse than optimal is best), any solution of the form you suggest is going to have similar tradeoffs. If we had a distribution, we could choose.
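For concreteness, here's a quick sketch of those ratios under a standard rent-vs-buy setup (daily rate normalized to 1; the function names and the specific threshold strategy are my own illustration, not from the post):

```python
def strategy_cost(days, pass_price, threshold):
    # Pay the daily rate (normalized to 1) until cumulative spend
    # reaches `threshold`, then buy the pass.
    if days <= threshold:
        return days
    return threshold + pass_price

def optimal_cost(days, pass_price):
    # With hindsight: either pay daily the whole time or buy on day one.
    return min(days, pass_price)

P = 1.0        # pass price
T = 0.5 * P    # buy after spending half the pass price

# Ends just after buying: worst case, ~3x optimal.
d = T + 1e-9
print(strategy_cost(d, P, T) / optimal_cost(d, P))  # ~3.0

# Lasts a long time: 1.5x optimal (vs e/(e-1) ~ 1.58 for the post's solution).
d = 10.0 * P
print(strategy_cost(d, P, T) / optimal_cost(d, P))  # 1.5
```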
The main image is all at the same 90x level, and those buttons just zoom in (more or less) all the way on the points, while the "140x" views are separate scan patches at higher magnification (though the real point is that they have 3D/height data, too).
That isn't at all what the central limit theorem says. The whole point is that it holds independently of the actual shape of the population's distribution. You could use the same argument to say social security numbers are normally distributed.
One way to explain things like height being normally distributed is that there are a bunch of independent factors which contribute, and the central limit theorem applied to those factors would then suggest the observed variable looking normal-ish.
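A toy simulation of that "sum of independent factors" story (my own sketch; the specific factor distribution is arbitrary, which is exactly the point):

```python
import random

random.seed(0)

def trait(n_factors=40):
    # Sum of many small independent contributions (uniform here; the
    # shape of each individual factor barely matters for the sum).
    return sum(random.uniform(-1, 1) for _ in range(n_factors))

samples = [trait() for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

# For a sum of 40 Uniform(-1, 1) factors: mean 0, variance 40 * (1/3).
print(round(mean, 2), round(var, 1))

# Normal-ish check: roughly 68% of samples within one standard deviation.
sd = var ** 0.5
within = sum(abs(s - mean) < sd for s in samples) / len(samples)
print(round(within, 2))  # close to 0.68
```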
I think you can reasonably think about the flight path by modeling the movement on the hyperbolic upper half plane (x would be the position along the linear path between endpoints, y the side length of the viewport).
I considered two metrics that ended up being equivalent. First, minimizing loaded tiles assuming a hierarchical tiled map. The cost of moving x horizontally is just x/y tiles, using y as the side length of the viewport. Zooming from y_0 to y_1 loads abs(log_2(y_1/y_0)) tiles, which is consistent with ds = dy/y. Together this is just ds^2 = (dx^2 + dy^2)/y^2, exactly the upper-half-plane metric.
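A quick numeric check of that claim (my own sketch): integrating ds = sqrt(dx^2 + dy^2)/y along a pure pan reproduces the x/y tile count, and along a pure zoom it gives the log of the zoom ratio (natural log here, so divide by ln 2 to get tiles):

```python
import math

def path_length(path, n=100_000):
    # Numerically integrate ds = sqrt(dx^2 + dy^2) / y along a path
    # given as t -> (x, y) for t in [0, 1].
    total = 0.0
    x0, y0 = path(0.0)
    for i in range(1, n + 1):
        x1, y1 = path(i / n)
        total += math.hypot(x1 - x0, y1 - y0) / (0.5 * (y0 + y1))
        x0, y0 = x1, y1
    return total

# Pure pan of dx = 5 at side length y = 2: should cost dx / y = 2.5 "tiles".
print(path_length(lambda t: (5.0 * t, 2.0)))

# Pure zoom from y = 1 to y = 8: should cost ln(8), i.e. log2(8) = 3 tiles
# up to the constant factor ln(2).
print(path_length(lambda t: (0.0, 8.0 ** t)))
```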
Alternatively, you could think of minimizing the "optical flow" of the viewport in some sense. This actually works out to the same metric up to scaling - panning by x without zooming, everything is just displaced by x/y (i.e. the shift as a fraction of the viewport). Zooming by a factor k moves a pixel at (u,v) to (k*u,k*v), a displacement of (u,v)*(k-1). If we go from a side length of y to y+dy, this is (u,v)*dy/y, so depending on exactly how we average the displacements this is some constant times dy/y.
Then the geodesics you want are just semicircles centered on y=0 (not horocycles - those are the circles tangent to the boundary), although you need to do a little work to compute the motion along the curve. Once you have the arc, from θ_0 to θ_1, the total time comes from integrating r*dθ/y = dθ/sin(θ), so to be exact you'd have to invert t = ln(csc(θ)-cot(θ)) = ln(tan(θ/2)). edit: this inverts cleanly to θ = 2*atan(e^t), equivalently atan2(2*e^t, 1-e^(2t)), which is not so bad at all.
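Here's a sketch of that computation (function names are mine). Note csc(θ) - cot(θ) = tan(θ/2), so the time integral inverts in closed form:

```python
import math

def geodesic_circle(p0, p1):
    # Circle centered on y = 0 through two upper-half-plane points:
    # solve (x0-c)^2 + y0^2 = (x1-c)^2 + y1^2 for the center c.
    (x0, y0), (x1, y1) = p0, p1
    if abs(x1 - x0) < 1e-12:
        raise ValueError("vertical geodesic: a straight line, not a circle")
    c = (x1 * x1 + y1 * y1 - x0 * x0 - y0 * y0) / (2.0 * (x1 - x0))
    r = math.hypot(x0 - c, y0)
    return c, r

def angle(p, c):
    # Angle of a point on the arc, measured from the center (c, 0).
    x, y = p
    return math.atan2(y, x - c)

def theta_at_time(t):
    # Invert t = ln(tan(theta/2)): constant hyperbolic speed along the arc.
    return 2.0 * math.atan(math.exp(t))

p0, p1 = (0.0, 1.0), (10.0, 1.0)
c, r = geodesic_circle(p0, p1)
th0, th1 = angle(p0, c), angle(p1, c)
t0 = math.log(math.tan(th0 / 2.0))
t1 = math.log(math.tan(th1 / 2.0))
# Round trip: recover both angles from their times (both differences ~0).
print(abs(theta_at_time(t0) - th0), abs(theta_at_time(t1) - th1))
```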
Comparing with the "blub space" logic, I think the effective metric there is ds^2 = dz^2 + (z+1)^2 dx^2, polar coordinates where z=1/y is the zoom level, which (using dz=dy/y^2) works out to ds^2 = dy^2/y^4 + dx^2*(1/y^2 + ...). I guess this means the existing implementation spends much more time panning at high zoom levels compared to the hyperbolic model, since zooming from 4x to 2x costs twice as much as 2x to 1x despite being visually the same.
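To make that last comparison concrete (a toy illustration, not the actual implementation): with z the zoom factor, the blub-style zoom cost is linear in z, while the hyperbolic cost depends only on the ratio of zoom levels:

```python
import math

def blub_zoom_cost(z0, z1):
    # Blub-space style: cost is linear in the zoom variable z = 1/y.
    return abs(z1 - z0)

def hyperbolic_zoom_cost(z0, z1):
    # Hyperbolic model: cost depends only on the ratio of zoom levels.
    return abs(math.log(z1 / z0))

# Zooming 4x -> 2x vs 2x -> 1x: visually the same change, but the
# linear cost doubles while the hyperbolic cost is identical.
print(blub_zoom_cost(4, 2), blub_zoom_cost(2, 1))              # 2 1
print(hyperbolic_zoom_cost(4, 2), hyperbolic_zoom_cost(2, 1))  # equal
```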
Actually playing around with it the behavior was very different from what I expected - there was much more zooming. Turns out I missed some parts of the zoom code:
Their zoom actually is my "y" rather than a scale factor, so the metric is ds^2 = dy^2 + (C-y)^2 dx^2 where C is a bit more than the maximal zoom level. There is some special handling for cases where their curve would want to zoom out further.
Normalizing to the same cost to pan all the way zoomed out (zoom=1), their cost for panning is basically flat once you are very zoomed in, and more than the hyperbolic model when relatively zoomed out. I think this contributes to short distances feeling like the viewport is moving very fast (very little advantage to zooming out) vs basically zooming out all the way over larger distances (intermediate zoom levels are penalized, so you might as well go almost all the way).
Hi, I was the one nerdsniped :) In the end I don't think blub space is the best way to do the whole zoom thing, but I was intrigued by the idea, had already spent too much time on it, and the result turned out quite good.
The problem is twofold: which path should we take through (zoom, x, y), and how fast should we move at any given point (and here "moving" includes zooming in/out as well).
That's what the blub space would have been cool for, because it combines speed and path into one.
So when you move linearly at constant speed through blub space, you move at different speeds at different zoom levels in normal space, and the path and speed changes are smooth.
Unfortunately that turned out not to work quite as well... even though the flight path was alright (although not perfect), the movement speeds were not what we wanted...
I think that comes from the fact that blub space is a linear combination of speed and the z component.
So if you move with speed s at ground level (let's say z=1) you move with speed z at zoom level z (higher z means more zoomed out).
But as you pointed out, normal zoom behaviour is quadratic, so at zoom level z you move with speed z².
But I think there is no way to map this behaviour to a Euclidean 2D/3D space (or at least I didn't find one; I can't really prove right now that it's not possible xD).
So to fix the movement speed we basically sample the flight path and just move along it according to the zoom level at different points on the curve... Basically, even though there are durations in the flight path calculation, they get overwritten by TimeInterpolatingTrajectory, which does all the heavy lifting for the speed.
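I haven't seen TimeInterpolatingTrajectory, but a minimal version of "sample the path, then set speed from the zoom level" might look like this (the path format and function name are my own; y is taken as the viewport side length, so a segment of length L at size y covers L/y of the screen):

```python
import math

def assign_times(path, screen_speed=1.0):
    # Assign timestamps to sampled (x, y) points so that the apparent
    # on-screen speed stays constant: dt = (segment length / y) / speed.
    times = [0.0]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        y_mid = 0.5 * (y0 + y1)
        screen_dist = math.hypot(x1 - x0, y1 - y0) / y_mid
        times.append(times[-1] + screen_dist / screen_speed)
    return times

# A path that zooms out over the middle: those segments are traversed
# faster in world units for the same apparent speed.
path = [(x, 1.0 + 4.0 * math.sin(math.pi * x / 10.0)) for x in range(11)]
print(assign_times(path))
```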
For the path... maybe a form with something like x^4 and some tweaking would have been better, but the behaviour we had was good enough :) Maybe the question we should ask is not about the interesting properties of non-Euclidean spaces, but about what makes a flight path + speed look good.
The nice thing about deciding on a distance metric is that it gives you both a path (geodesics) and the speed, and if you trust your distance metric it should be perceptually constant velocity. I agree it's non-euclidean, I think the hyperbolic geometry description works pretty well (and has the advantage of well-studied geodesics).
I did finally find the duration logic when I was trying to recreate the path; I made this shader to try to compare:
https://www.shadertoy.com/view/l3KBRd
Wow, not just some: it tells a totally different story. That's awful. (edit: I emailed the author. Further edit, author's reply: "I've corrected my chart. I did not use a tool to extract the data. I actually think the chart you are looking at (on the left) was updated from the one I originally received and I may have been working off the earlier one.")
I'm not clear from that how many trials were run for each test condition, but the percentage is an average speed reduction, not a chance for a binary hit/no-hit. edit: the paper PDF says up to three trials each.