Sunday, August 20, 2017

Tiny MCU 3D Renderer Part 4: Gouraud Shading

Today, it's interpolating vertex normals to render smooth lighting. That's right: Gouraud Shading in full effect. Two screenshots to start with: the first is a view with the diffuse texture disabled, to show the full effect of the shading, followed by the fully textured version.
Surprising that the texture is so dark. But it is what it is. I think this test model has just about reached the end of its usefulness for the project. There's just one duty left for it to serve. Remember those gradients at the bottom of the image that were added in Part 2? It's time to put our friend here through some post-processing!
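For anyone curious what the shading boils down to in code, here's a minimal sketch of the textbook Gouraud formulation: light each vertex once, then let the rasterizer's barycentric coordinates blend the result across the triangle. This is illustrative Rust, not the renderer's actual code:

```rust
/// Per-vertex light intensity: dot the vertex normal against the light
/// direction and clamp negative values (faces pointing away) to zero.
fn vertex_intensity(normal: [f32; 3], light_dir: [f32; 3]) -> f32 {
    let dot = normal[0] * light_dir[0]
        + normal[1] * light_dir[1]
        + normal[2] * light_dir[2];
    dot.max(0.0)
}

/// Gouraud shading: the three vertex intensities are blended per-pixel
/// with the same barycentric coordinates the rasterizer already uses
/// for texture lookups, giving smooth lighting across the triangle.
fn fragment_intensity(bary: [f32; 3], vi: [f32; 3]) -> f32 {
    bary[0] * vi[0] + bary[1] * vi[1] + bary[2] * vi[2]
}
```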

Of course, there are a few other improvements I could still make to the internals. Setting up the ModelView matrix for object and camera transformations, extending the API to mimic the GLSL shader interface, normal mapping...

But first, I do have some bad news. The dithered gradients I've created so far are ALL WRONG. They are close, but close is not good enough. I need mathematical perfection. Even beyond that, I need perceptual perfection. Unfortunately, I've been struggling for the last few days trying to understand why my math wasn't showing good perceptual results. The conflict was on the display side, which was skewing what I actually saw. After seeing less-than-stellar gradients, I would attempt to tweak the math, hoping that I had just misplaced a decimal, so to speak.

Nope. The math was right all this time. It was my screen that was wrong. You see, I have a Retina Display MacBook Pro, and I had the resolution set as high as System Preferences would allow. I've known from tinkering in the past that the display is capable of much higher resolutions, but I kept it on the official scaled resolution for visual comfort. Long story short, the resampling used for scaled resolutions is not gamma-aware. In hindsight this is no surprise, but it still threw me off for a few days. I'm just glad I'm not crazy.

The large rectangle is the wrong brightness at Apple's official highest "scaled" resolution.

Detail of incorrect brightness.

You can see in the closeup photo that the resampling performed by scaling has added interpolated pixels with the wrong brightness (this is not gamma-aware scaling), which destroys the integrity of the image. Compare to the closeup below, where no scaling was performed. These were both captured directly from screenshots at the various resolutions, then scaled up in Gimp with no interpolation (nearest neighbor).
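To see why this matters, consider downscaling a 50% black-and-white dither pattern to a single pixel. A gamma-aware scaler has to decode to linear light before averaging; a naive one averages the encoded bytes and lands on a value that displays far too dark. A toy demonstration in Rust, assuming a pure 2.2 gamma curve (real sRGB has a linear toe, but it's close):

```rust
/// Decode an 8-bit gamma-encoded value to linear light (pure 2.2 curve).
fn to_linear(v: u8) -> f32 {
    (v as f32 / 255.0).powf(2.2)
}

/// Encode linear light back to an 8-bit gamma-encoded value.
fn to_gamma(v: f32) -> u8 {
    (v.powf(1.0 / 2.2) * 255.0).round() as u8
}

fn main() {
    let (black, white) = (0u8, 255u8);

    // Naive (gamma-unaware) average: lands on 127, which displays at
    // roughly 22% of full brightness on a gamma 2.2 monitor. Too dark.
    let naive = ((black as u16 + white as u16) / 2) as u8;

    // Gamma-aware average: decode to linear, average, re-encode.
    // This yields 186, which actually *displays* as 50% brightness.
    let correct = to_gamma((to_linear(black) + to_linear(white)) / 2.0);

    println!("naive: {naive}, gamma-aware: {correct}");
}
```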

When setting the screen resolution to 1:1 with the physical DPI, the mathematically-correct gamma corrections suddenly also look correct visually! And my approximations look, well, like they leave a lot to be desired. But this has taught me a very valuable lesson: always trust the math. And secondly, that all this is for naught anyway, because it is completely impossible to guarantee a 1:1 display in every situation. Without a 1:1 display, gamma-correcting a dithered image doesn't make sense. In fact, it makes the result look worse when the display scales the image without gamma awareness.

The same website at 1:1 (no scaling). The large rectangle has the expected brightness.

Detail of correct brightness.

I also learned, empirically, that the gamma on my display is definitely 2.2, like any normal display. And that my tweaks to Interleaved Noise are too noisy. This is the problem with attempting to make corrections to a pipeline that broke a few stages back: two wrongs don't make a right.
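For reference, the standard Interleaved Gradient Noise function, as published by Jorge Jimenez, is just a couple of fract operations. Sketched here in Rust:

```rust
/// Interleaved Gradient Noise (Jorge Jimenez, SIGGRAPH 2014): returns a
/// screen-space dither threshold in [0, 1) for the pixel at (x, y).
fn interleaved_gradient_noise(x: u32, y: u32) -> f32 {
    let v = 0.06711056 * x as f32 + 0.00583715 * y as f32;
    (52.9829189 * v.fract()).fract()
}
```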

So now I have to rethink my strategy on dithering a bit. For the time being, I'll leave the gamma corrections out of it. I might eventually offer a configuration option for anyone so inclined, and that's that. At least now I know how to test it for accuracy!

Post-processing

This section is what I was hoping would be the meat of the article. Sadly, I burned a lot of time with the gamma snafu. The post-processing itself was really straightforward, taking less than an hour, with a ton of refactoring in the mix. Even then, I think the resulting image is not too bad. In this experiment, I dither the grayscale frame buffer to four grays (hand-picked) and map them to four available palette entries. What we get is a low color-depth (but colorized) rendering. Exactly what I was going for!
I shouldn't have to mention this, but to be clear, it is not gamma-correct. The input grays were chosen based on the grayscale image histogram (which showed that 99% of the image is in the dark half of gamma space), and the palette entries were chosen as the closest available matches against the original (pre-grayscale) diffuse texture. A few hand-tweaks to the input grays and palette indices to get a subjectively better perceptual result, and here we are.
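In code, the whole pass boils down to something like the sketch below. The thresholds, palette indices, and dither amplitude here are placeholders, not the hand-tweaked values I settled on:

```rust
/// Three hand-picked thresholds split the 8-bit grayscale range into
/// four bands; each band maps to one fixed palette entry.
const THRESHOLDS: [u8; 3] = [40, 90, 160]; // placeholder values
const PALETTE: [u8; 4] = [13, 22, 38, 54]; // placeholder indices

/// Quantize one grayscale pixel to a palette index. `noise` in [0, 1)
/// comes from the dither pattern and nudges pixels across band
/// boundaries, so flat ramps break into dither instead of banding.
fn quantize(gray: u8, noise: f32) -> u8 {
    let dithered = (gray as f32 + (noise - 0.5) * 32.0).clamp(0.0, 255.0) as u8;
    let band = THRESHOLDS.iter().filter(|&&t| dithered >= t).count();
    PALETTE[band]
}
```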

There are a few obvious issues with this image. I'll try to enumerate them here, describe why they are problems, and suggest what to do about them.

  1. The mesh and diffuse texture are way too detailed. I've mentioned this before, but it bears repeating. It's an issue because my target screen resolution is very small (256 x 240 pixels). The game that I'm building is not going to use these assets at all. It's safe to say it won't even have any scenes from this camera perspective, either. Therefore, the current rendering is completely unrepresentative of the final project. That said, I think it does show off the power of these techniques, and it looks pretty cool in its own right.
  2. The low color depth destroys fine details. This is tied in with the first issue. The mesh and diffuse are very detailed, but we lose a lot of that in the downsampling (e.g. the last grayscale rendering) and then we lose even more in the quantization (e.g. the colorized image). Dithering only brings back a few very rough details, such as some of the lighting and shading. Practically all of the texture is now gone, with some minor exceptions like the hair and eyebrows.
  3. My target palette is not well-suited to human skin tones. I think this is the biggest problem for this particular model. I don't have the option to use a palette optimized for the diffuse. The palette I have to work with has only 56 colors, and notably there are no good browns or yellows. I had a choice between reddish-brown or greenish-brown. It just happened to look a bit better with the red-tinted browns. I even used a palette desaturation mode to make sure he didn't look too orange. It's literally the best I could do.
  4. The grayscale is way too dark. Finally, this is something that I do have control over. I could increase the contrast on the diffuse when I reduce it to grayscale, and that would help pull our gradient out of the dark end (see the sketch after this list). That explains why the dither looks so dark, too. This is something to keep in mind for authoring the final assets; make use of the full grayscale spectrum! Smoother gradients will result in smoother renderings.
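Here's the kind of fix item 4 is talking about: a minimal contrast stretch that remaps a too-dark grayscale image onto the full 0-255 range. This is illustrative, not the asset pipeline I'll actually use:

```rust
/// Stretch a grayscale image so its darkest and brightest pixels span
/// the full 0..255 range, instead of crowding into the dark end.
fn stretch_contrast(pixels: &mut [u8]) {
    let min = *pixels.iter().min().unwrap_or(&0);
    let max = *pixels.iter().max().unwrap_or(&255);
    if max > min {
        let range = (max - min) as f32;
        for p in pixels.iter_mut() {
            *p = (((*p - min) as f32 / range) * 255.0).round() as u8;
        }
    }
}
```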
With a color depth of 2 bits per pixel (four colors) and a less-than-ideal palette, this is probably as good as it's going to get for this model. For funsies, here's a mockup that I did in Gimp to get an approximation of the dithered rendering. It uses slightly different input gray levels and a much different dithering pattern; it was my prototype for establishing a baseline on dithering quality. I think the real-time dither is superior, because I was able to optimize the input levels and pattern, even if I didn't optimize the overall image brightness.

Dither mockup made in Gimp.

Where to go from here?

Next steps are creating new assets: models and textures that are optimized for the low resolution, low color depth, and fixed palette. To give you some sense of what to expect, I have already started on a human base mesh that currently contains about 280 triangles (about 1/10 of the human head mesh I've been using up to this point). It looks very blocky, as you might imagine. On the other hand, the character should only stand about 56 pixels high in the frame buffer (a little less than 1/4 of the screen height).

Assuming the final model has roughly 400 triangles, it's clear that each triangle will only cover a few pixels of screen area. Perhaps no more than a 3x3 pixel square on average. That means diffuse and normal mapping details are unimportant. The only real variations you'll be able to see from pixel-to-pixel are shadows and highlights. And that's why I have been focusing so heavily on gradients.
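As a rough sanity check on that 3x3 figure: a character 56 pixels tall and (let's guess) about 28 pixels wide covers roughly 56 × 28 ≈ 1,570 pixels of screen area. If around half of the 400 triangles face the camera at any moment, that's 1,570 ÷ 200 ≈ 8 pixels per visible triangle; right around a 3x3 square. The width and visibility numbers are guesses, but the conclusion holds for any reasonable values.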

What I haven't mentioned yet is that the renderer will output a lot more than just four shades! Granted, it can only do four shades per diffuse texture (each diffuse gets its own gradient), but it can render up to eight total gradients, shared across as many diffuse textures as I need. The gradient inputs (thresholds) and outputs (fixed palette indices) can be arbitrarily chosen. Ultimately, the renderer should create images that resemble Dan Fessler's HD Index Painting technique from a few years ago. If it wasn't clear by now, I'm just applying this pixel art workflow in real time, as you might do with a GLSL shader.
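Structurally, that amounts to something like the following. This is a sketch of the concept, not the renderer's actual types:

```rust
/// One gradient: three thresholds split the shaded grayscale value into
/// four bands (the inputs), and each band maps to a fixed palette index
/// (the outputs).
struct Gradient {
    thresholds: [u8; 3],
    outputs: [u8; 4],
}

/// Up to eight gradients total, shared freely: each diffuse texture
/// simply names which gradient it uses by index.
struct DitherConfig {
    gradients: [Gradient; 8],
}
```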

I'll continue working on those new assets, and should have a complete scene to show off next time! And don't worry, it won't be a scene in glorious four-shades-of-orange. You have my word.
