The reason I have been so quiet recently is that I've been working on a very dry subject: testing. Unit tests are probably the most uninteresting thing I do on a daily basis as a software architect. Create some fixtures, write some assertions, run the test suite, fix any issues, repeat. It makes me cringe just thinking about it.
But here I am, blogging about how I've been writing tests in my fun personal project, too! I am taking this project very seriously, so tests are a necessary evil. I haven't checked coverage yet (because I hit this bug with tarpaulin vs serde_derive), but it's probably somewhere around 70~80%.
Update 2017-11-05: serde_derive is only used in the app. The lib doesn't have this dependency, so I can run coverage on the lib separately. The result is even better than I imagined: 85.14% coverage, 573/673 lines covered.
Surprise! There are only 26 tests (ignoring the 4 benchmark tests), so how can coverage be so high? This is one of the most interesting things about 3D rasterization; it doesn't require a whole lot of code! To be honest, though, I'm only exposing the bare minimum set of APIs; for example, there's no support in the fixed-function pipeline for stencil/scissor, alpha blending, etc.
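As a taste of what those tests look like, here is a minimal sketch of the fixture-and-assertion routine described above. The barycentric helper and its signature are hypothetical stand-ins for illustration, not the library's actual API.

```rust
// Hypothetical example of the fixture/assertion style used in the test suite.
// `barycentric` and `Vec2` stand in for whatever the lib actually exposes.
#[cfg(test)]
mod tests {
    #[derive(Clone, Copy)]
    struct Vec2 { x: f32, y: f32 }

    // Barycentric weights of point `p` with respect to triangle `(a, b, c)`.
    fn barycentric(a: Vec2, b: Vec2, c: Vec2, p: Vec2) -> (f32, f32, f32) {
        let d = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
        let u = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / d;
        let v = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / d;
        (u, v, 1.0 - u - v)
    }

    #[test]
    fn barycentric_weights_sum_to_one() {
        // Fixture: a right triangle and a point inside it.
        let a = Vec2 { x: 0.0, y: 0.0 };
        let b = Vec2 { x: 4.0, y: 0.0 };
        let c = Vec2 { x: 0.0, y: 4.0 };
        let p = Vec2 { x: 1.0, y: 1.0 };

        // Assertions: interior points get non-negative weights that sum to one.
        let (u, v, w) = barycentric(a, b, c, p);
        assert!(u >= 0.0 && v >= 0.0 && w >= 0.0);
        assert!((u + v + w - 1.0).abs() < 1e-6);
    }
}
```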
Friday, November 3, 2017
Saturday, October 14, 2017
Tiny MCU 3D Renderer Part 9: Bug fixes, and first shader tests
It's shader time! I finished the simple sunbeam, which I think looks quite nice in motion.
It could use a little work on the gradient, and some extra geometry wouldn't hurt. There are four sunbeams in this test scene. Each is just a plane (two triangles). I'm thinking of splitting the plane into thirds (vertically) so I can shape it more into a semi-circle, instead of just lying flat in the background. That might give it some depth, and help combat the problem of the right-most sunbeam looking so thin.
Monday, October 9, 2017
Tiny MCU 3D Renderer Part 8: Programmable Pipeline and Asset Pipeline
Oh hey, look at that! I changed the CSS on my blog a smidge. It's worth mentioning, anyway. Say hello to the Blipjoy Invader! You can just make it out on the left side, there. (Depending on your screen resolution, it might be behind the post content, whoops!)
I also finalized the programmable pipeline on the 3D renderer, thanks to @vitalyd over at the Rust users forum for the hint I needed to push me in the right direction. The API isn't exactly what I had in mind, but it's certainly reasonable.
This is what I built yesterday. The model on the left is drawn with our very familiar Gouraud shader with interleave dithering on a four-color gradient. This is what we've seen exclusively in screenshots to this point. On the right is something new! A much simplified shader that renders something like a cartoon, aka cel shading minus edge detection. The two models rotate in opposite directions for funsies.
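For the curious, here is a rough sketch of the shape a trait-based programmable pipeline like this can take, with a cartoon-style fragment stage that quantizes the diffuse term into flat bands. All names and signatures below are my own illustrative guesses, not the project's actual API.

```rust
// Illustrative sketch only: a trait-based programmable pipeline in the spirit
// described above. Every name and signature here is hypothetical.
pub struct Varyings {
    pub position: [f32; 4], // clip-space position
    pub normal: [f32; 3],   // interpolated per fragment by the rasterizer
}

pub trait Shader {
    // Runs once per vertex: transform to clip space and emit varyings.
    fn vertex(&self, position: [f32; 3], normal: [f32; 3]) -> Varyings;

    // Runs once per covered pixel: turn interpolated varyings into a color index.
    fn fragment(&self, varyings: &Varyings) -> u8;
}

// A cartoon-style shader: snap the diffuse term to a few flat bands
// (cel shading without the edge-detection pass).
pub struct CelShader {
    pub light_dir: [f32; 3], // unit vector pointing toward the light
    pub bands: u8,           // number of flat shading bands, at least 1
}

impl Shader for CelShader {
    fn vertex(&self, position: [f32; 3], normal: [f32; 3]) -> Varyings {
        // A real implementation would multiply by the model-view-projection here.
        Varyings {
            position: [position[0], position[1], position[2], 1.0],
            normal,
        }
    }

    fn fragment(&self, v: &Varyings) -> u8 {
        let (n, l) = (v.normal, self.light_dir);
        let diffuse = (n[0] * l[0] + n[1] * l[1] + n[2] * l[2]).max(0.0);
        // Quantize the smooth diffuse term into one of `bands` discrete levels.
        ((diffuse * self.bands as f32) as u8).min(self.bands - 1)
    }
}
```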
Code from the last article will be referenced below.
Wednesday, October 4, 2017
Tiny MCU 3D Renderer Part 7: Generics and Traits, oh my!
If you've been following my blog, you'll know that I've been writing a 3D renderer in Rust. This is my first real experience using the language. I dabbled a bit in Rust on a project in January 2016, where I was challenged by lifetime annotations, and gave up. This time around, I've managed to write a complete software renderer with all the bells and whistles of a modern shader model, without the need for a single lifetime annotation. And until just recently, without defining a single Trait or Generic.
Earlier articles in this blog series have focused exclusively on the renderer from an end user's point of view. In other words, trying to make it attractive to the everyday gamer. In this episode, I want to describe in detail one of the issues that I have been struggling with in the code design, and how I've been approaching a solution. This is by no means the "right way" to handle this, or even similar situations. I just want to provide some info for anyone who happens upon the right magical incantation in The Googlies, and ends up reading this.
(Edit 2017-10-04: The illustration below was originally flipped vertically. It now shows the correct scanning direction.)
Triangle rasterization scans pixels in the target buffer from left-to-right and bottom-to-top
So let's start from the middle, and work our way out. Above is a simplified representation of the rasterization step of the renderer. The details prior to, and those that follow, rasterization can be ignored for the time being.
This image represents a zoomed-in detail of a frame buffer (or any render target) as a 2D triangle is being rasterized. This occurs after the perspective division, so the Z coordinate can simply be dropped for rasterization; the Z component is used later for depth testing.
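To make that concrete, here is a hedged sketch of what a bounding-box rasterization loop can look like: scan the triangle's screen-space bounding box row by row, compute barycentric weights for each pixel, and skip anything outside the triangle. The names and the callback style are mine, not necessarily the renderer's.

```rust
// Sketch of a bounding-box rasterizer; names and layout are illustrative.
fn edge(ax: f32, ay: f32, bx: f32, by: f32, px: f32, py: f32) -> f32 {
    // Signed parallelogram area of (a->b, a->p); the sign says which side p is on.
    (bx - ax) * (py - ay) - (by - ay) * (px - ax)
}

/// Rasterize a screen-space triangle, calling `plot(x, y, w0, w1, w2)` for
/// every covered pixel with that pixel's barycentric weights.
fn rasterize<F: FnMut(i32, i32, f32, f32, f32)>(
    v0: (f32, f32), v1: (f32, f32), v2: (f32, f32),
    width: i32, height: i32,
    mut plot: F,
) {
    // Clamp the triangle's bounding box to the render target.
    let min_x = v0.0.min(v1.0).min(v2.0).floor().max(0.0) as i32;
    let max_x = v0.0.max(v1.0).max(v2.0).ceil().min((width - 1) as f32) as i32;
    let min_y = v0.1.min(v1.1).min(v2.1).floor().max(0.0) as i32;
    let max_y = v0.1.max(v1.1).max(v2.1).ceil().min((height - 1) as f32) as i32;

    let area = edge(v0.0, v0.1, v1.0, v1.1, v2.0, v2.1);
    if area == 0.0 {
        return; // Degenerate triangle; nothing to draw.
    }

    // Scan the box row by row (bottom-to-top if the target's y axis points up,
    // as in the illustration above).
    for y in min_y..=max_y {
        for x in min_x..=max_x {
            let (px, py) = (x as f32 + 0.5, y as f32 + 0.5);
            // Dividing by the signed area makes the inside test winding-independent.
            let w0 = edge(v1.0, v1.1, v2.0, v2.1, px, py) / area;
            let w1 = edge(v2.0, v2.1, v0.0, v0.1, px, py) / area;
            let w2 = 1.0 - w0 - w1;
            // The pixel is inside the triangle only if all weights are non-negative.
            if w0 >= 0.0 && w1 >= 0.0 && w2 >= 0.0 {
                plot(x, y, w0, w1, w2);
            }
        }
    }
}
```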
Sunday, September 24, 2017
Tiny MCU 3D Renderer Part 6: Camera animation and display scaling
Yesterday I finally got around to adding some simple animations. The app was always rendering at 60 fps, but the image was static because there was no animation. That's why I've only been posting PNG images of progress so far. But now I can do this:
This was my first ever foray into quaternions! And I must admit, learning about quaternions suuuuuuucks. Surprisingly, this is one area where I would actually recommend developers keep quaternions as a mysterious black box, though an essential part of their repertoire. Every academic treatment of quaternions you will find is deep in imaginary number territory (which we all know is impossible to represent on a computer). The important point is that the imaginary numbers can be optimized out, so having them in the first place is completely stupid, but I digress. Ok, ok, imaginary numbers help make sense of the derivatives... So what? It still sucks. And it's still pointless in the context of 3D graphics.
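For anyone who just wants the black box, here is a small self-contained sketch (my own illustration, not the project's code, which would more likely lean on a crate such as cgmath) of the two quaternion operations a renderer mostly needs: build a rotation from an axis and angle, and rotate a vector with it.

```rust
// Minimal quaternion black box: axis-angle construction and vector rotation.
#[derive(Clone, Copy, Debug)]
struct Quat { w: f32, x: f32, y: f32, z: f32 }

impl Quat {
    /// Rotation of `angle` radians around a unit-length `axis`.
    fn from_axis_angle(axis: [f32; 3], angle: f32) -> Quat {
        let (s, c) = (angle * 0.5).sin_cos();
        Quat { w: c, x: axis[0] * s, y: axis[1] * s, z: axis[2] * s }
    }

    /// Rotate a vector: q * v * q^-1, expanded into plain arithmetic
    /// (no imaginary numbers in sight).
    fn rotate(&self, v: [f32; 3]) -> [f32; 3] {
        let u = [self.x, self.y, self.z];
        let dot_uv = u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
        let dot_uu = u[0] * u[0] + u[1] * u[1] + u[2] * u[2];
        let cross = [
            u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0],
        ];
        let mut out = [0.0f32; 3];
        for i in 0..3 {
            out[i] = 2.0 * dot_uv * u[i]
                + (self.w * self.w - dot_uu) * v[i]
                + 2.0 * self.w * cross[i];
        }
        out
    }
}

fn main() {
    // Rotate the +X axis 90 degrees around +Z; expect roughly (0, 1, 0).
    let q = Quat::from_axis_angle([0.0, 0.0, 1.0], std::f32::consts::FRAC_PI_2);
    println!("{:?}", q.rotate([1.0, 0.0, 0.0]));
}
```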
Sunday, September 17, 2017
Tiny MCU 3D Renderer Part 5: Aspect Ratio and Field of View
I had a long week on vacation, and was able to do a little bit of coding almost every night. There was a lot of time spent doing touristy things, so my coding opportunities were limited. I had a good solid 4 hours of nothing but coding time on the plane, though! Both ways.
On my departure flight, I managed to finally fix the aspect ratio (as far as I can tell). This was just a matter of adjusting the projection matrix to use the correct aspect ratio for non-square pixels. On my return flight, I finished almost all of the refactoring for the new Shader API, and finally completed it from the comfort of my own couch.
It shouldn't look much different from the previous screenshot. There are a few obvious differences if you look closer, though.
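For reference, here is a generic sketch of a perspective projection matrix with the aspect ratio folded into the horizontal scale; accounting for non-square pixels is then just a matter of feeding in the right aspect value. The layout and the example numbers are assumptions on my part, not the project's actual matrix code.

```rust
// Illustrative row-major perspective projection; not the project's actual code.
fn perspective(fov_y_radians: f32, aspect: f32, near: f32, far: f32) -> [[f32; 4]; 4] {
    // `aspect` is the width/height of the *displayed* image; with non-square
    // pixels, the pixel aspect ratio gets multiplied in as well.
    let f = 1.0 / (fov_y_radians * 0.5).tan();
    [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), (2.0 * far * near) / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]
}

fn main() {
    // Made-up example: a low-resolution framebuffer stretched to a 4:3 display
    // implies non-square pixels, so the effective aspect is 4.0 / 3.0.
    let m = perspective(60f32.to_radians(), 4.0 / 3.0, 0.1, 100.0);
    println!("{:?}", m);
}
```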
Monday, September 4, 2017
Quick update, progress report, current plans
This weekend I was distracted by a well-intentioned friend of mine who suggested solving a chess puzzle described as "deceptively simple". Unfortunately, the article is criminally misleading: it claims that computers cannot "solve the conundrum quickly and efficiently", when in fact it is trivial to solve the puzzle in linear time with constant space complexity.
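To back that up with a concrete sketch (mine, and deliberately narrow): for board sizes that are not divisible by 2 or 3, a classical construction places the queen of row i in column (2 * i) mod n, which takes linear time and constant working space. Other sizes, including 27, need one of the slightly longer published constructions, so this only covers the easy case.

```rust
fn main() {
    // Board size coprime to 6 (not divisible by 2 or 3); 27 itself needs one of
    // the other published constructions, so use 25 for the illustration.
    let n: u64 = 25;
    assert!(n % 2 != 0 && n % 3 != 0);

    // O(n) time, O(1) extra space: the queen of row i goes in column (2 * i) % n.
    for row in 0..n {
        let col = (2 * row) % n;
        println!("queen at row {}, column {}", row, col);
    }
}
```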
Solving the puzzle is not the challenge alluded to. The real challenge is this: given a starting position with some queens already placed, find a valid solution by adding more queens. A notable related problem is enumerating and counting all valid solutions. To date, it has been shown by brute force that a 27x27 chessboard has over 234 quadrillion solutions; removing all symmetrical solutions leaves about 29 quadrillion. The brute-force work took about 7 years on a massively parallel array of custom hardware (written to FPGAs).
To provide some context to the size of the numbers involved, it would take a modern CPU (single-core at 4.5 GHz) about 4 years just to increment a counter as fast as possible from 0 to 29 quadrillion. That is the time estimate just for the work involving the counter; nothing more. It's also hopelessly optimistic, since that assumes you already have a list of all 29 quadrillion solutions, or that there are zero false positives or any instances of wasted effort.
Sunday, August 20, 2017
Tiny MCU 3D Renderer Part 4: Gouraud Shading
Today, it's interpolating normals to render smooth lighting. That's right: Gouraud shading in full effect. Two screenshots to start with: the first is a view with diffuse disabled, to show the full effect of the shading, followed by the fully textured version.
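In sketch form (my own names and simplifications, not the project's code), the idea is to blend the three vertex normals with the pixel's barycentric weights, re-normalize, and light the result:

```rust
// Illustrative only: smooth lighting from interpolated vertex normals.
// `w0..w2` are the pixel's barycentric weights inside the triangle and
// `n0..n2` are the unit normals at the triangle's three vertices.
fn smooth_diffuse(
    w0: f32, w1: f32, w2: f32,
    n0: [f32; 3], n1: [f32; 3], n2: [f32; 3],
    light_dir: [f32; 3], // unit vector pointing toward the light
) -> f32 {
    // Interpolate the normal across the face...
    let mut n = [0.0f32; 3];
    for i in 0..3 {
        n[i] = w0 * n0[i] + w1 * n1[i] + w2 * n2[i];
    }
    // ...re-normalize it (interpolation shortens it)...
    let len = (n[0] * n[0] + n[1] * n[1] + n[2] * n[2]).sqrt();
    for i in 0..3 {
        n[i] /= len;
    }
    // ...and take the clamped dot product with the light direction.
    (n[0] * light_dir[0] + n[1] * light_dir[1] + n[2] * light_dir[2]).max(0.0)
}
```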
It's surprising that the texture is so dark, but it is what it is. I think this test model has just about reached the end of its usefulness for the project. There's just one duty left for it to serve. Remember those gradients at the bottom of the image that were added in Part 2? It's time to put our friend here through some post-processing!
Saturday, August 12, 2017
Tiny MCU 3D Renderer Part 3: Textures and Perspective
I was surprised by how easy it was to interpolate over the texture coordinates, given the barycentric coordinate space. I have more boilerplate code to convert the mesh vertices into cgmath vectors than there is code to interpolate the triangles! I'll refactor it all away after the renderer's features begin to stabilize. With a little gamma and luminance love, I now have nearest-neighbor texture mapping:
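Here is a hedged sketch of that interpolation plus a nearest-neighbor lookup. The texture layout and names are assumptions, and for brevity it shows plain affine interpolation; perspective-correct mapping would interpolate u/z, v/z, and 1/z and divide per pixel.

```rust
// Sketch of nearest-neighbor texture sampling with barycentric interpolation.
// The texture layout and the single-channel color are illustrative assumptions.
struct Texture {
    width: usize,
    height: usize,
    pixels: Vec<u8>, // one luminance byte per texel, row-major
}

impl Texture {
    /// Nearest-neighbor lookup: snap the UV coordinate to the closest texel.
    fn sample(&self, u: f32, v: f32) -> u8 {
        let x = ((u * self.width as f32) as usize).min(self.width - 1);
        let y = ((v * self.height as f32) as usize).min(self.height - 1);
        self.pixels[y * self.width + x]
    }
}

/// Blend the per-vertex UVs with the pixel's barycentric weights, then sample.
fn textured_pixel(
    tex: &Texture,
    w: (f32, f32, f32), // barycentric weights
    uv0: (f32, f32), uv1: (f32, f32), uv2: (f32, f32),
) -> u8 {
    let u = w.0 * uv0.0 + w.1 * uv1.0 + w.2 * uv2.0;
    let v = w.0 * uv0.1 + w.1 * uv1.1 + w.2 * uv2.1;
    tex.sample(u, v)
}
```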
The gamma correction was crucial, since this texture went through two separate processing passes: first, I dropped all chrominance information from the RGB, leaving just the relative brightness; and second, the global illumination was applied to the texture-mapped geometry as you might imagine. To get it working right, the RGB components are transformed from Gamma Space to Linear Space prior to the luminance transformation. Then the texture stays in Linear Space until after the final illumination pass. The pixels are transformed back to Gamma Space as they are written to the frame buffer.
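A minimal sketch of that round trip, assuming a plain power curve with an exponent of 2.2 and Rec. 709 luminance weights (the actual pipeline may use a different exponent or the piecewise sRGB curve):

```rust
// Sketch of the gamma round trip described above.
const GAMMA: f32 = 2.2;

/// Decode a stored 8-bit channel into linear light for math (lighting, luminance).
fn to_linear(channel: u8) -> f32 {
    (channel as f32 / 255.0).powf(GAMMA)
}

/// Encode linear light back into gamma space when writing to the frame buffer.
fn to_gamma(linear: f32) -> u8 {
    (linear.clamp(0.0, 1.0).powf(1.0 / GAMMA) * 255.0).round() as u8
}

fn main() {
    // Compute luminance in linear space, then re-encode for storage/display.
    let (r, g, b) = (to_linear(200), to_linear(120), to_linear(40));
    let luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b; // Rec. 709 weights
    println!("stored luminance byte: {}", to_gamma(luminance));
}
```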
Friday, August 11, 2017
Tiny MCU 3D Renderer Part 2: Dithering
Dithering is an important post-processing technique for color quantization, and is especially useful for smoothing gradients in a low-precision color space. I got ahead of myself a little bit on the 3D renderer development, and decided to research and experiment with various dithering algorithms. The most popular algorithm is arguably Floyd-Steinberg, which is based upon error diffusion. I used this algorithm back in 2009 for an image processing side-project.
It's safe to say I've learned a bit more about dithering in the last 8 years. Most obviously, Floyd-Steinberg is not ideal for animations, because error diffusion will cause an avalanche of artifacts over the temporal domain. A noisy animation could be nice - even artistic - if the noise were evenly distributed. Avoiding the grainy look may be a better option, however. To that end, Bayer's ordered dithering algorithm is commonly used. Unfortunately, the apparent pattern may be too distracting. Various deterministic noise functions are also useful (e.g. pink noise or blue noise ... definitely not white noise).
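For reference, here is a textbook sketch of ordered dithering with a 4x4 Bayer matrix; this is the standard formulation, not necessarily the exact variant the renderer ends up using.

```rust
// Ordered dithering with the classic 4x4 Bayer matrix.
const BAYER_4X4: [[u8; 4]; 4] = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
];

/// Quantize a linear intensity in [0, 1] to black (false) or white (true),
/// using the pixel's position to pick a repeating threshold.
fn dither_to_black_and_white(intensity: f32, x: usize, y: usize) -> bool {
    let threshold = (BAYER_4X4[y % 4][x % 4] as f32 + 0.5) / 16.0;
    intensity > threshold
}

fn main() {
    // Render a tiny horizontal gradient as '#' (white) and '.' (black).
    for y in 0..4usize {
        let row: String = (0..32usize)
            .map(|x| {
                let intensity = x as f32 / 31.0;
                if dither_to_black_and_white(intensity, x, y) { '#' } else { '.' }
            })
            .collect();
        println!("{}", row);
    }
}
```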
My first dithering attempt was simple: I would draw a smooth gradient from black to white, using only 2 stops, black at 0.0 and white at 1.0. It was immediately clear that I needed some gamma correction, because my gradient was far too bright overall (when compared to a linear gradient without dithering). Everything I know about gamma, I learned from this article; a highly recommended read. This was the first decent dithered gradient I created, using 2 stops:
You'll have to stand pretty far away from the image, and maybe squint a bit to see how the gradient tones line up (note that gamma correction was performed with γ = 1.8, which looks correct on macOS and Windows 10). Not bad for two shades! Close up though, the pattern is a little too strong. It will look better when applied to an image with lower frequency components; the linear gradient is the same pattern of pixels repeated vertically. If applied to the head model, the dithering would probably look rather nice (TBD). But we can always do better!
Tuesday, August 8, 2017
Tiny MCU 3D Renderer Part 1
It's hard to believe that it has been two years since my last blog update. A lot has happened since then, but nothing to write about. I have done surprisingly little in the way of game development or hobby programming since js13k-2015. I experimented with Rust a bit, kept up on some minor maintenance work for my nodeJS Capstone bindings, and I've played a whole lot of Rocket League.
But today I want to share some progress on something that I have been working on periodically for a very long time, because I've been getting more serious about it recently. In the tradition of keeping up my personal motivation, it's time to start sharing what I've been doing. It's not much to look at, but here it is:
3D Renderer in Rust (100% Software)
This is rendered entirely in software using Rust. And, well, that's about all there is to it! Flat-shaded triangles rendered with orthographic projection. I have other screenshots from earlier stages of development, including a wireframe raster and a polygonal render (as above) without depth correction. In this screenshot, I had just added a depth buffer, which completes all of the geometry rendering work. Next steps are adding diffuse texture mapping and perspective projection. I'll get to that later.
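As a rough illustration of that depth-buffer step (my own sketch, with an assumed buffer layout and a smaller-is-closer depth convention), each candidate pixel is only written if it is nearer than whatever is already stored:

```rust
// Illustrative depth-buffer test: one f32 depth value per pixel.
struct Target {
    width: usize,
    color: Vec<u8>,  // one color index per pixel, row-major
    depth: Vec<f32>, // reset to f32::INFINITY at the start of each frame
}

impl Target {
    /// Write the pixel only if it is nearer than what is already there.
    fn plot(&mut self, x: usize, y: usize, z: f32, color: u8) {
        let i = y * self.width + x;
        if z < self.depth[i] {
            self.depth[i] = z;
            self.color[i] = color;
        }
    }
}
```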