# BIT-101


In my last post on Graphing Equations I showed how I graph an equation by visiting every pixel in the canvas, seeing if it satisfies that equation (or comes close to solving it) and then coloring that pixel appropriately. In that case, I was just coloring each pixel either black or white. In general it looked great, but you may have noticed some aliasing going on. The dreaded stair steps. Today we’ll look at how subpixel sampling can help solve that.

I’m going to start with some code and the image it creates. It’s a similar kind of thing to what I created in the earlier post. I just experimented with `cosh` in place of some of the `cos` instances and tweaked some other values. If you read the previous post, you should be able to work this out.

```
for (x = 0; x < width; x++) {
  for (y = 0; y < height; y++) {
    xx = map(x, 0, width, -4, 4)
    yy = map(y, 0, height, -4, 4)
    a = sin(sin(xx) + cosh(yy))
    b = cos(sin(xx * yy) + cosh(xx))
    d = abs(a - b)
    if (d < 0.125) {
      fillRect(x, y, 1, 1)
    }
  }
}
```

Anyway, to recap, we’re hitting each pixel, mapping it to a -4 to 4 range, running two expressions and seeing if the results of those two are equal (or close to equal - their difference less than 0.125). If so, we fill that pixel with the default color, which will be black. Otherwise, it remains white.
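The `map` call used here is the usual linear remapping found in many creative coding libraries. In case your environment doesn’t have one, here’s a minimal Python sketch (I’ve named it `map_range` to avoid shadowing Python’s built-in `map`):

```python
def map_range(value, in_min, in_max, out_min, out_max):
    """Linearly remap value from [in_min, in_max] to [out_min, out_max]."""
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)
```

So pixel 0 maps to -4, pixel 400 (of 800) maps to 0, and pixel 800 maps to 4.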

This is what we get:

It looks … OK. But let’s zoom in a bit…

Yeah, not very OK. The problem is we’re evaluating each pixel once and setting it to either black or white, so we get hard, rectangular edges that don’t look great.

We could apply a blur…

That smooths it out a bit, but it blurs everything and doesn’t really solve the problem. What we want is more shades of gray around the edges. One way to do this would be to render at a higher resolution and then scale the image down; the scaling algorithm usually averages the pixels along the edges just right. I rendered this at 3200x3200 and then scaled it back down to 800x800, and it looks pretty good:
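For a rough idea of what that scale-down step does, here’s a Python sketch of a plain box filter: every `factor` x `factor` block of the big image gets averaged into one output pixel. (Real image editors typically use fancier filters like bilinear or Lanczos, so their results will differ slightly.)

```python
def box_downscale(pixels, factor):
    """Average each factor-by-factor block of a 2D grid of gray
    values into one output pixel. Size must be divisible by factor."""
    size = len(pixels) // factor
    out = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            total = 0.0
            for j in range(factor):
                for i in range(factor):
                    total += pixels[y * factor + j][x * factor + i]
            out[y][x] = total / (factor * factor)
    return out
```

A block that straddles an edge averages its black and white source pixels into a gray, which is exactly the effect we’re after.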

But then we’re rendering 10,240,000 pixels (3200x3200) to get 640,000 (800x800) and having to open up an image processing program or call out to ImageMagick or something. It would be nice to be able to figure this out in code alone.

The basic concept of subpixel sampling is that for every pixel you render, you sample it multiple times, each at a small offset, but still within the bounds of that pixel. Then you take the average of all these samples and use that as the color for that pixel. It’s essentially the same as rendering the image larger and scaling it down, but you’re doing it all in one pass, for each pixel.

There are a few different strategies for this.

- One is to randomly sample various points within the pixel.
- Another is to sample a grid of subpixel locations.
- The most complicated is to recursively sample smaller and smaller grids until the values converge for each section, then merge them back up to the top to get the final pixel value.

The final strategy is sometimes used in ray tracing, but it would be overkill here. We’ll examine the first two, though.

The first strategy, random sampling, is pretty simple. We choose a number of samples and randomly generate that many floating point coordinates within the pixel. For example, if we’re looking at pixel 300, 400, we’d generate, say, 20 different values for x, from 300.0 to 301.0, and 20 values for y, from 400.0 to 401.0.
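Generating those sample points might look like this in Python (the helper name `random_samples` is mine):

```python
import random

def random_samples(px, py, count=20):
    """Return `count` random sample points inside the pixel at (px, py)."""
    return [(px + random.random(), py + random.random())
            for _ in range(count)]
```

For pixel 300, 400, every returned point has an x in [300.0, 301.0) and a y in [400.0, 401.0).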

Here’s how that might look:

The pixel we’re sampling lies on the edge of a curved shape. Without antialiasing, we’d sample at exactly 300, 400 and color the whole pixel black or white based on that single result. Here, we’ve sampled twenty subpixel locations. About half of them are on the shape and half are off, so this pixel would end up somewhere around a mid-level gray rather than plain black or white. The pixel one down and to the left only overlaps the shape a little bit, so it would likely get fewer black hits and be a lighter shade of gray. The top left pixel would be all white, and any pixel entirely inside the shape would be solid black.

So let’s code that.

```
for (x = 0; x < width; x++) {
  for (y = 0; y < height; y++) {
    value = 0
    for (s = 0; s < 20; s++) {
      x1 = x + randomFloat()
      y1 = y + randomFloat()
      xx = map(x1, 0, width, -4, 4)
      yy = map(y1, 0, height, -4, 4)
      a = sin(sin(xx) + cosh(yy))
      b = cos(sin(xx * yy) + cosh(xx))
      d = abs(a - b)
      if (d < 0.125) {
        value += 1
      }
    }
    // theoretical color function. 0 is black, 1 is white.
    setGray(1 - value/20)
    fillRect(x, y, 1, 1)
  }
}
```

We start with a `value` for the pixel set to 0. Then we loop through 20 times.

Each iteration, we add a random floating point value from 0.0 to 1.0 to `x`, storing it in `x1`. And the same for `y`, stored in `y1`.

Then we do our previous calculation with `x1`, `y1`. But if the result is within range, rather than coloring the pixel then and there, we just add 1 to `value`. When we’re done with our sampling, we divide by the number of samples (20) and use that as the pixel value. I have a theoretical function `setGray` that will set the drawing color to black if it gets 0 and white if it gets 1. Because we started at a `value` of 0 and counted up, we need to reverse that by subtracting the result from 1. Then we fill that pixel. Here’s the result:

That looks pretty good. But not quite as good as the scaled up and scaled down version I think. And how many samples did we take? 800 x 800 x 20 = 12,800,000.

And look what happens when we reduce the number of samples to 10 and zoom in:

Because of the small sample size, sometimes the samples clump in a white section of the pixel and sometimes in the black section. So you can get ragged edges. For random sampling to work well you really need to increase the sample size. I think it also may work better on more colorful images. We’re really challenging ourselves here with black and white.
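Putting the random approach together as a self-contained Python sketch (the function names are mine; the formula and constants match the pseudocode above):

```python
import math
import random

def remap(v, in_min, in_max, out_min, out_max):
    """Linearly remap v from [in_min, in_max] to [out_min, out_max]."""
    return out_min + (v - in_min) / (in_max - in_min) * (out_max - out_min)

def sample_pixel(px, py, width=800, height=800, samples=20):
    """Estimate one pixel's gray level (0 = black, 1 = white) by
    averaging random subpixel evaluations of the equation."""
    hits = 0
    for _ in range(samples):
        xx = remap(px + random.random(), 0, width, -4, 4)
        yy = remap(py + random.random(), 0, height, -4, 4)
        a = math.sin(math.sin(xx) + math.cosh(yy))
        b = math.cos(math.sin(xx * yy) + math.cosh(xx))
        if abs(a - b) < 0.125:
            hits += 1
    return 1 - hits / samples
```

Pixels deep inside or far outside the curve will come back near 0 or 1; pixels on an edge land somewhere in between.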

But let’s change directions now and try method two, a subpixel grid.

The idea here is that rather than sampling randomly, we purposely spread the sampled subpixels evenly across the pixel. Here are a few examples.

Each square represents a single pixel. In the first one, we sample nine subpixels, sixteen in the second one and twenty-five in the third one. Of course, the more subpixels you sample, the slower your rendering is going to be. You’ll have to figure out the balance of speed and quality that works well for your purposes.

The code for this is fairly easy, not too much different from the random version. We just use a couple of for loops.

Le code:

```
sampleSize = 3
spacing = 1 / (sampleSize - 1)
for (x = 0; x < width; x++) {
  for (y = 0; y < height; y++) {
    value = 0
    for (i = 0; i < sampleSize; i++) {
      for (j = 0; j < sampleSize; j++) {
        x1 = x + i * spacing
        y1 = y + j * spacing
        xx = map(x1, 0, width, -4, 4)
        yy = map(y1, 0, height, -4, 4)
        a = sin(sin(xx) + cosh(yy))
        b = cos(sin(xx * yy) + cosh(xx))
        d = abs(a - b)
        if (d < 0.125) {
          value += 1
        }
      }
    }
    // theoretical color function. 0 is black, 1 is white.
    setGray(1 - value / (sampleSize * sampleSize))
    fillRect(x, y, 1, 1)
  }
}
```

We create a `sampleSize` variable that holds the number of samples we’ll take on each axis. The total number of samples will be the square of this number.

The spacing between each sample will be 1 (pixel) divided by one less than the sample size. So for a sample size of 3, the spacing would be 0.5, which fits in with the first square in the image above.

The `i`, `j` for loops get each sample. It’s just recreating what you see in the grids above. And the rest you’ve already seen. In the end we divide `value` by the square of `sampleSize`. So for a sample size of 3, we divide by 9.
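The offsets that spacing produces can be sketched in Python:

```python
def grid_offsets(sample_size):
    """Evenly spaced subpixel offsets across one pixel, edge to edge."""
    spacing = 1 / (sample_size - 1)
    return [i * spacing for i in range(sample_size)]
```

A sample size of 3 gives offsets of 0.0, 0.5 and 1.0 on each axis - the samples land on both edges of the pixel, matching the first grid above.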

Here’s how it looks with a sample size of 3. That’s 9 samples per pixel.

And a sample size of 4, 16 samples per pixel.

This last one winds up sampling the same number of points as the 4x image scaled down: 10,240,000. If I’m honest, the manually image-processed one looks a little better. But I’m pretty happy with this. If I’m generating a ton of frames for an animation, I’m not going to manually process each one. And in that case, the sample size 3 version would probably be just fine.

I find that increasing the sample size beyond 4 gives diminishing returns.

Just to see how far we’ve come, here’s a combo image. On the left, subpixel sampling with a sample size of 4. On the right, there’s no antialiasing at all.

I’m no expert in this field, but this stuff was pretty easy to work out and gives decent results. I’m not sure if sampling edge-to-edge like I’m doing gives the best results. Maybe instead of going to both edges like in the first square below…

…it might be better to avoid one edge, like in the second square. Or avoid all the edges and sample completely inside the pixel, like in the third square. I’m sure smarter people than me have worked on this problem and come up with better answers. But now you have something to look into, maybe.
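For that third option, sampling completely inside the pixel, one natural choice (again just a sketch, not necessarily what any particular renderer does) is to place each sample at the center of its own grid cell:

```python
def centered_offsets(sample_size):
    """Offsets at the centers of sample_size equal cells in one pixel."""
    return [(i + 0.5) / sample_size for i in range(sample_size)]
```

A sample size of 2 gives 0.25 and 0.75 - no sample touches a pixel edge, so neighboring pixels never evaluate the exact same points twice.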

You’ll probably find the most resources on this stuff in ray tracing books and articles. That’s where I first ran across practical applications of the concept when I built my own ray tracer a year and a half ago.

Comments? Best way to shout at me is on Mastodon