> While describing a pixel as a little square is frowned upon in a world of signal processing, in some contexts it is a useful model that lets us calculate an accurate coverage of a pixel by the vector geometry
I wish people would try to calculate the coverage of some other reconstruction filter instead of just separate little squares. I’m not convinced it would be more expensive, and it would hopefully give better results (fewer grid artifacts, better spatial resolution), especially for images that will be rotated or resampled later.
If we assumed a radially symmetric kernel, then coverage would just be some monotonic function of the distance from the pixel center to the edge, and this function could be approximated using some low-degree polynomial and computed very quickly.
> If we assumed a radially symmetric kernel, then coverage would just be some monotonic function of the distance from the pixel center to the edge
I don't think that's true. Consider a simple case where the shape is a square and you are sampling in such a way that the closest edge point is within the filter width of one of the square's corners. A function of distance alone gives the same result there as it does far from the corner, but the true coverage is different in the two cases.
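This is easy to confirm numerically. A quick sketch, assuming a Gaussian kernel (my choice; the counterexample doesn't depend on it) and an axis-aligned square shape, where the 2D coverage integral conveniently factors into two 1D CDFs:

```python
import math

def phi(x):
    # Standard normal CDF.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def square_coverage(cx, cy, side=10.0, sigma=0.5):
    # Mass of a Gaussian kernel centered at (cx, cy) that falls inside the
    # axis-aligned square [0, side] x [0, side]. For an axis-aligned square
    # the 2D Gaussian integral factors into a product of 1D CDFs.
    gx = phi((side - cx) / sigma) - phi(-cx / sigma)
    gy = phi((side - cy) / sigma) - phi(-cy / sigma)
    return gx * gy

# Both pixel centers lie exactly on the square's boundary, so the closest
# edge distance is 0 in both cases, yet the coverage differs:
mid_edge = square_coverage(0.0, 5.0)  # on an edge, far from any corner: ~0.5
corner = square_coverage(0.0, 0.0)    # exactly on a corner: ~0.25
```

A purely distance-based coverage function predicts identical values for both points, so it would have to be corrected near corners (or accept the error there).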
Better filters are much more expensive, because they need to incorporate the influence of so many more pixels. A simple Lanczos-3, which is far from perfect, requires evaluating 7×7 = 49 input points for each output point.
I’m talking about figuring out the coverage of a vector shape, not about sampling some arbitrary rendered scene or transforming a pixel image (etc.).
Calculating the percentage of some radially symmetric kernel covered by one side of a straight line should be very cheap (it’s just a monotonic function of the signed distance from the pixel center to the edge).
If we want to extend that to shapes with multiple edges close to the pixel, or to curved edges, it will get harder, but I don’t think it’s necessarily inherently more expensive than the “calculate the coverage of a little square” problem.
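For the single-straight-edge case, here's a minimal sketch of that claim, assuming a Gaussian as the radially symmetric kernel and a clamped cubic as the cheap polynomial stand-in (both are my choices for illustration, not tuned):

```python
import math

def halfplane_coverage(d, sigma=0.5):
    # Exact coverage for a Gaussian kernel: the fraction of its mass on the
    # inside of a straight edge at signed distance d from the pixel center
    # (d > 0 means the center is outside the shape). This reduces to a 1D
    # integral, i.e. a complementary error function.
    return 0.5 * math.erfc(d / (sigma * math.sqrt(2.0)))

def coverage_poly(d, sigma=0.5):
    # Cheap stand-in: a cubic smoothstep clamped at the kernel's effective
    # radius of ~2 sigma. Illustrative only; real use would fit the
    # polynomial to the chosen kernel.
    t = max(0.0, min(1.0, 0.5 - d / (4.0 * sigma)))
    return t * t * (3.0 - 2.0 * t)
```

Evaluating this is one distance computation plus a few multiplies per pixel, which is in the same cost ballpark as the area bookkeeping the little-square model does per edge.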
The problem is that "figuring out the coverage" already assumes you're modeling the pixels as little squares, and that is not the optimal model for pixels.
Imagine we have infinite samples (i.e. a continuous brightness function), and we are using some radially symmetric reconstruction filter. Then the value of each output pixel is the 2D integral of the brightness over the plane, weighted by our filter function evaluated at each point's distance from the pixel center.
If our continuous brightness data consists of one color on one side of a line and a different color on the other side, then integrating the contribution of every point in the plane gives an output value of lerp(color1, color2, x(d)), where x is some monotonic function of the minimum distance d between the pixel center and the line. The exact shape of x depends on the specific reconstruction filter we are using.
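A numeric sanity check of that claim, using an arbitrary radially symmetric kernel (a raised-cosine hump of radius 2, my choice for the sketch): brute-force integration gives the same x(d) regardless of the line's orientation.

```python
import math

def kernel(r):
    # Any radially symmetric filter works; this raised-cosine hump of
    # radius 2 is an arbitrary choice for the sketch.
    return 0.5 * (1.0 + math.cos(math.pi * r / 2.0)) if r < 2.0 else 0.0

def x_of_d(d, angle, n=400, half=3.0):
    # Brute-force 2D integration: the fraction of the kernel's mass on the
    # side of a line at signed distance d from the pixel center, with the
    # line's normal pointing along `angle`.
    nx, ny = math.cos(angle), math.sin(angle)
    h = 2.0 * half / n
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            px = -half + (i + 0.5) * h
            py = -half + (j + 0.5) * h
            w = kernel(math.hypot(px, py))
            den += w
            if px * nx + py * ny >= d:
                num += w
    return num / den

# Same distance, two different line orientations:
a = x_of_d(0.7, angle=0.0)
b = x_of_d(0.7, angle=1.1)
# a and b agree up to the grid's discretization error
```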
But as for the little square model: the linked article proposed that model because it is being used right now, today, in many shipping vector graphics rasterizers, e.g. for fonts.
I agree with you that it yields suboptimal results, so I am proposing an alternative.
(Maybe this has been done?)