Sunday, September 17, 2017

Tiny MCU 3D Renderer Part 5: Aspect Ratio and Field of View

I had a long week on vacation, and was able to do a little bit of coding almost every night. There was a lot of time spent doing touristy things, so my coding opportunities were limited. I had a good solid 4 hours of nothing but coding time on the plane, though! Both ways.

On my departure flight, I managed to finally fix the aspect ratio (as far as I can tell). This was just a matter of adjusting the projection matrix to use the correct aspect ratio for non-square pixels. On my return flight, I finished almost all of the refactoring for the new Shader API, and finally completed it from the comfort of my own couch.
It shouldn't look much different from the previous screenshot, but there are a few noticeable differences if you look closely.

The first of those differences is that the window is a little bit wider and not as tall. In fact, it's wider by about one seventh, and shorter by exactly 16 pixels. The rasterized image is getting stretched (with linear interpolation) from 256x224 pixels to 292x224 pixels. This emulates the 8:7 pixel aspect ratio at which the final video will be displayed. The interpolation causes some slight blurring from left to right, but it's visually much nicer than nearest neighbor.
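The arithmetic behind that stretch fits in a few lines of Rust (the constant and function names here are illustrative, not the project's actual code):

```rust
// Internal framebuffer resolution: rasterization happens at 1:1 pixels.
const FB_WIDTH: u32 = 256;
const FB_HEIGHT: u32 = 224;

// Target pixel aspect ratio: each displayed pixel is 8/7 as wide as it is tall.
const PAR_NUM: u32 = 8;
const PAR_DEN: u32 = 7;

/// Size of the final, stretched image: 256 * 8 / 7 = 292 (truncated),
/// while the height stays at 224.
fn display_size() -> (u32, u32) {
    (FB_WIDTH * PAR_NUM / PAR_DEN, FB_HEIGHT)
}
```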

The second thing you might notice, if you have a keen eye, is that the aspect ratio of the model is very close to that of the older screenshots. This was accomplished with the perspective projection matrix, which now renders the scene slightly squashed along the horizontal axis. The squashing during rasterization (1:1 pixel AR) cancels the stretching when the frame is finally displayed (8:7 pixel AR).

And the last point to note is that the vertical field of view (FOV-Y) has been increased from approximately 45° to precisely 60°. This makes our friend's nose look a little bigger. With a different scene, it would cause more of the background to be shown (by shrinking faraway geometry even more than it used to). The 45° FOV-Y came from a rough approximation of the perspective projection matrix that I was using before the Shader API refactor.
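For reference, here's a gluPerspective-style matrix sketch in Rust (row-major; my own names, not the renderer's actual types). The key detail is that the aspect ratio fed in uses the proportions of the *displayed* image, (256 × 8/7) / 224 ≈ 1.306, rather than the framebuffer's 256 / 224 ≈ 1.143, which is what produces the pre-squashed render:

```rust
/// Build a row-major, gluPerspective-style projection matrix.
/// `fov_y_deg` is the vertical field of view; `aspect` is width / height.
fn perspective(fov_y_deg: f32, aspect: f32, near: f32, far: f32) -> [[f32; 4]; 4] {
    let f = 1.0 / (fov_y_deg.to_radians() / 2.0).tan(); // cotangent of half the FOV
    let mut m = [[0.0f32; 4]; 4];
    m[0][0] = f / aspect; // horizontal scale: squashes x for wide aspects
    m[1][1] = f;          // vertical scale, set purely by FOV-Y
    m[2][2] = (far + near) / (near - far);
    m[2][3] = (2.0 * far * near) / (near - far);
    m[3][2] = -1.0;       // moves -z_eye into w for the perspective division
    m
}
```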

Side Notes

60° FOV-Y is a good choice because this game is designed to be played on a television at some generous distance from the viewer. As described by the Wikipedia article on field of view in video games, this is appropriate because the TV subtends only a small part of the viewer's visual field. Conversely, a larger field of view is appropriate on a PC monitor, which is closer to the viewer and thus subtends a larger part of the visual field.

If you haven't noticed by now, I am obsessed with details. A true perfectionist. It is important that everything look right (or at least be mathematically accurate) as early as possible. Remember the anecdote from Part 4 about pipeline breakage? I want to prevent that everywhere, whether it's color reproduction, aspect ratio, or field of view. And that's just the visuals; I haven't even touched audio on this project yet! Another issue I am aware of within the rendering pipeline is z-buffer precision. I don't plan to tackle that yet (since my test scene consists of one small object close to the near plane), but I understand the tradeoffs and most of the math to work around it, should the need arise.

The Difficulty with Perspective Projection

The perspective projection matrix is something I wanted to talk about since it caused me no end of grief as I was trying to refactor the code. A big part of the problem was that the original approximation that I got from the renderer tutorial just didn't do things the OpenGL way. It missed a lot of small but important details.

The biggest problem I had was that I was not clipping geometry in Homogeneous Clip Space. This caused the rasterizer to underflow and overflow the width/height of the frame buffer and the depth range of the z-buffer. It was easy to work around by clipping late in the rasterizer (at pixel-plotting time), but it's more efficient to do the clipping early, before the perspective division.
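In the OpenGL convention, a clip-space point is inside the view volume when every coordinate lies in [-w, w]. A minimal inside test (a sketch with made-up names, not the project's code) looks like this:

```rust
/// A vertex position output by the vertex shader, before dividing by w.
struct ClipPos {
    x: f32,
    y: f32,
    z: f32,
    w: f32,
}

/// OpenGL-style containment test in Homogeneous Clip Space: the point is
/// inside when -w <= x, y, z <= w. Testing before the perspective division
/// is what keeps the rasterizer from walking off the frame buffer or
/// outside the z-buffer's depth range.
fn inside_clip_volume(p: &ClipPos) -> bool {
    p.w > 0.0 && p.x.abs() <= p.w && p.y.abs() <= p.w && p.z.abs() <= p.w
}
```

Full clipping also has to split triangles that straddle a plane (e.g. Sutherland-Hodgman), but the per-plane comparisons are the same as above.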

What's Next

The refactor left me with an ultimately cleaner implementation, but also with many new FIXME and TODO comments sprinkled throughout. One of these items is optimizing the attribute buffer by implementing APIs analogous to glDrawElements and GL_TRIANGLE_STRIP; right now it only supports something like glDrawArrays with GL_TRIANGLES.
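The difference between those two submission styles can be sketched like so (hypothetical names; the real API will look different). With a glDrawArrays-style call, shared vertices are duplicated for every triangle that touches them; with a glDrawElements-style call, each vertex is stored once and triangles refer to it by a small index, shrinking the attribute buffer:

```rust
#[derive(Clone, Copy)]
struct Vertex {
    pos: [f32; 3],
}

/// glDrawArrays-style: every 3 consecutive vertices form one triangle.
/// A quad costs 6 full vertices because the shared edge is stored twice.
fn triangle_count_arrays(vertices: &[Vertex]) -> usize {
    vertices.len() / 3
}

/// glDrawElements-style: triangles are triples of indices into a shared
/// vertex buffer. The same quad costs only 4 vertices plus 6 small indices.
fn triangle_count_elements(_vertices: &[Vertex], indices: &[u16]) -> usize {
    indices.len() / 3
}
```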

Another item on that list is wrestling with data types, e.g. by using Generics. This will allow multiple shaders to be used in a single program, with each shader supporting its own custom set of attributes, uniforms, and varyings.
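A minimal sketch of what that could look like, assuming a trait with associated types (every name here is hypothetical, not the project's actual API). The pipeline stays generic over the shader, while each shader brings its own attribute, uniform, and varying types:

```rust
trait Shader {
    type Attribute;
    type Uniform;
    type Varying;

    /// Per-vertex stage: returns a clip-space position plus the varyings
    /// to be interpolated across the triangle.
    fn vertex(&self, attr: &Self::Attribute, uni: &Self::Uniform) -> ([f32; 4], Self::Varying);

    /// Per-fragment stage: turns interpolated varyings into a color.
    fn fragment(&self, var: &Self::Varying, uni: &Self::Uniform) -> u32;
}

/// A trivial flat-color shader to show the trait in use.
struct FlatShader;

impl Shader for FlatShader {
    type Attribute = [f32; 3]; // position only
    type Uniform = u32;        // the flat color
    type Varying = ();         // nothing to interpolate

    fn vertex(&self, attr: &Self::Attribute, _uni: &Self::Uniform) -> ([f32; 4], ()) {
        ([attr[0], attr[1], attr[2], 1.0], ())
    }

    fn fragment(&self, _var: &(), uni: &Self::Uniform) -> u32 {
        *uni
    }
}
```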

And finally, I really need to work on some optimizations. One example is cleaning up the overuse of Copy/Clone, especially with Vector and Matrix objects. I suspect a small but measurable percentage of runtime is wasted copying all of these little structs around. Another example is recomputing every pixel index in the frame buffer and z-buffer, when the index could simply be incremented as the rasterization loop advances. At a minimum, this optimization would remove one multiplication from every pixel rendered.
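The incremental-index idea looks roughly like this (illustrative names; the real rasterizer loop is more involved than a rectangle fill):

```rust
/// Fill an axis-aligned rectangle in a row-major frame buffer.
/// Instead of computing `y * width + x` for every pixel, a running index
/// is incremented: one multiplication per call, not one per pixel.
fn fill_rect(frame: &mut [u32], width: usize, x0: usize, y0: usize, x1: usize, y1: usize, color: u32) {
    let mut row_start = y0 * width + x0; // the only multiply
    for _y in y0..y1 {
        let mut idx = row_start;
        for _x in x0..x1 {
            frame[idx] = color;
            idx += 1; // step right one pixel
        }
        row_start += width; // step down one row
    }
}
```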

Wrapping Up

Yay! That's it for this episode. There's a lot of work left to do, but I'm now in a good position to start implementing new shaders. The shader shown above is the plain old textured-and-dithered shader with Gouraud lighting. The next shader I plan to work on was hinted at in the last article; it will render animated sunbeams! For that, I need to finish up the Blender export script. And now that the refactored Shader API supports an attribute buffer, I can export my scene from Blender in a format that closely matches it.

Onward to LoFi 3D gaming!
