But today I want to share some progress on something I've been working on, off and on, for a very long time; I've recently gotten more serious about it, and in the tradition of keeping my motivation up by sharing, it's time to show what I've been doing. It's not much to look at, but here it is:
3D Renderer in Rust (100% Software)
This is rendered entirely in software using Rust. And, well, that's about all there is to it! Flat-shaded triangles rendered with orthographic projection. I have other screenshots from earlier stages of development, including a wireframe render and a flat-shaded pass (as above) without depth correction. In this screenshot I had just added a depth buffer, which completes all of the geometry rendering work. Next steps are adding diffuse texture mapping and perspective projection; I'll get to those later.
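To make the depth buffer step concrete, here is a minimal sketch of the per-pixel test, assuming the frame buffer and depth buffer are flat byte arrays; the names and the 0-is-far depth convention are illustrative, not the renderer's actual code:

```rust
/// Illustrative render target: an 8-bit grayscale frame buffer plus an
/// 8-bit depth buffer of the same size.
struct Target {
    width: usize,
    height: usize,
    frame: Vec<u8>, // grayscale shade per pixel
    depth: Vec<u8>, // depth per pixel: 0 = far plane, 255 = nearest
}

impl Target {
    fn new(width: usize, height: usize) -> Self {
        Target {
            width,
            height,
            frame: vec![0; width * height],
            depth: vec![0; width * height],
        }
    }

    /// Called while filling a triangle: keep the pixel only if it is at
    /// least as near as whatever is already stored there.
    fn plot(&mut self, x: usize, y: usize, z: u8, shade: u8) {
        let i = y * self.width + x;
        if z >= self.depth[i] {
            self.depth[i] = z;
            self.frame[i] = shade;
        }
    }
}
```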
So, what's the big deal, right? Anyone can render a simple head model using OpenGL and get textures and perspective for free, so what am I trying to accomplish here? Well, it's a long story that would be better told some other time. But the final goal of this little subproject is to render a 3D scene to a frame buffer on a very small ARM microcontroller. Initially I'm starting with the Teensy board, because it is inexpensive and powerful enough for simple renderings.
The example head model that I'm using is way too detailed for the final product (~2,500 triangles), but it's a good one to start with because it matches the software renderer tutorial that I'm following. By the way, if you have ever been curious how GPUs work, this is probably the right resource to start with! The author walks you through everything from the conceptual design and the mathematical theory to the implementation. It's fascinating stuff. Anyway, the target scene size should be somewhere in the 1,000 - 2,000 triangle range, which is reasonable for such a small frame buffer. At 60 frames per second, that works out to 60,000 - 120,000 triangles per second, roughly equivalent to mid/late-90's PlayStation-era games.
OK, so continuing on: the Teensy has up to 256KB of RAM, which is NOT much. The frame buffer used in the screenshot above is 256x240 pixels. Do the math: an uncompressed 32-bit image of that size is about 240KB, dangerously close to the full 256KB! For this reason, I'm only using an 8-bit frame buffer, which brings it back down to just below 64KB. I also have an 8-bit depth buffer, which is exactly the same size. All said, the two buffers already consume nearly half of the memory on the MCU. That does not leave much for double buffering, let alone hard-shadow rendering. But that's OK, because I'm not building this project specifically for the Teensy; the board just happens to be convenient for early development.
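Written out as Rust constants (sizes only; the constant names are mine, not the project's):

```rust
const WIDTH: usize = 256;
const HEIGHT: usize = 240;

const RGBA_FRAME_BYTES: usize = WIDTH * HEIGHT * 4; // 245,760 B = 240 KB: nearly all of 256 KB
const GRAY_FRAME_BYTES: usize = WIDTH * HEIGHT;     //  61,440 B =  60 KB
const DEPTH_BYTES: usize = WIDTH * HEIGHT;          //  61,440 B =  60 KB

fn main() {
    // Frame buffer + depth buffer together use roughly half of the Teensy's 256 KB.
    println!("{} KB", (GRAY_FRAME_BYTES + DEPTH_BYTES) / 1024); // prints 120
}
```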
In fact, I'll probably target a more powerful MCU, like the STM32H7 series. It's a pretty beefy little package: a 400 MHz Cortex-M7F with 864 KB of RAM (spread across 3 regions). This MCU series is not in production as of this writing, but by the time I finish the project in a few months(?), this MCU or something like it will be readily available. An MCU like this can be purchased in bulk (10,000+ pieces) for about $12, which is very inexpensive for mass production.
So hardware is no issue (let's assume I'll be mass-producing a PCB, or that a Teensy 4.0 with similar specs will be available in the future). I can hypothetically fit all of the necessary buffers in memory (four buffers max, ~64KB each = 256KB) along with the stack and heap. What about textures? With such a small frame buffer, I estimate it wouldn't make sense to use textures much bigger than roughly 32x32 pixels. And it's possible to pack 64 of those small 8-bit textures into a single 64KB buffer! Sweet.
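For the curious, here's what that packing could look like: a hypothetical pool of 32x32 grayscale textures in one 64KB buffer, with a nearest-neighbor lookup. None of this is final; it's just the arithmetic above turned into code.

```rust
const TEX_SIZE: usize = 32;
const TEX_BYTES: usize = TEX_SIZE * TEX_SIZE;       // 1,024 B per 8-bit texture
const POOL_BYTES: usize = 64 * 1024;                // one 64 KB buffer
const MAX_TEXTURES: usize = POOL_BYTES / TEX_BYTES; // = 64 textures

/// Hypothetical pool of small grayscale textures packed back to back.
struct TexturePool {
    data: [u8; POOL_BYTES],
}

impl TexturePool {
    /// Nearest-neighbor sample of texture `id` at normalized (u, v) in [0, 1).
    fn sample(&self, id: usize, u: f32, v: f32) -> u8 {
        debug_assert!(id < MAX_TEXTURES);
        let x = (u * TEX_SIZE as f32) as usize % TEX_SIZE;
        let y = (v * TEX_SIZE as f32) as usize % TEX_SIZE;
        self.data[id * TEX_BYTES + y * TEX_SIZE + x]
    }
}
```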
That brings us around to the next question: 8-bit textures and frame buffers, really? Yep! And that isn't changing. The screenshot above is grayscale not just because I followed the tutorial, but because I'm targeting grayscale rendering by design; 8 bits per pixel is the perfect fit.
Next Steps: Textures and Perspective
I'm on the final step of Lesson 3 of the GPU tutorial, which only briefly describes how texture mapping should be done. Seems simple enough; the entire framework is in place to make the interpolation pretty straightforward. And Lesson 4 covers perspective projection, so that lines up with my plans as well.
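The interpolation in question is presumably a barycentric blend of the per-vertex UVs; a minimal sketch of that step, with hypothetical names:

```rust
/// Blend per-vertex texture coordinates using the barycentric coordinates
/// of a pixel inside the triangle (affine, not perspective-correct).
fn interpolate_uv(bary: (f32, f32, f32), uv: [(f32, f32); 3]) -> (f32, f32) {
    let (a, b, c) = bary;
    (
        a * uv[0].0 + b * uv[1].0 + c * uv[2].0,
        a * uv[0].1 + b * uv[1].1 + c * uv[2].1,
    )
}
```

With orthographic projection the affine blend is exact; once perspective projection shows up in Lesson 4, the interpolation will need to become perspective-correct (interpolating u/w, v/w, and 1/w instead).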
I'll post a followup after taking care of these two features, and maybe provide some more context about what I'm building. But that will emerge naturally over time in any case. So don't worry if any of this seems intentionally obtuse (it is).
A Note About Dependencies
You might have noticed that the screenshot was taken on macOS. That's because I'm building and testing the code with SDL2, which provides a texture that I can draw the raw frame buffer to. Thanks to the way Rust crates are structured, I don't need to incorporate any of the SDL code into the renderer library; it just sits in the executable that "wraps" the library. The only dependencies required for the library are cgmath and obj, for vector/matrix maths and model loading, respectively. These are super lightweight, and I shouldn't have any trouble building them for ARM. (Famous last words.)
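For reference, a rough sketch of what the SDL2 side of that wrapper executable might look like, using the sdl2 crate's streaming-texture API. The window setup, the grayscale-to-RGB24 expansion, and all of the names here are placeholders rather than the actual wrapper code, and the event loop is omitted:

```rust
use sdl2::pixels::PixelFormatEnum;

fn main() -> Result<(), String> {
    let (w, h) = (256u32, 240u32);

    let sdl = sdl2::init()?;
    let video = sdl.video()?;
    let window = video
        .window("renderer", w * 2, h * 2)
        .build()
        .map_err(|e| e.to_string())?;
    let mut canvas = window.into_canvas().build().map_err(|e| e.to_string())?;
    let creator = canvas.texture_creator();
    let mut texture = creator
        .create_texture_streaming(PixelFormatEnum::RGB24, w, h)
        .map_err(|e| e.to_string())?;

    // Stand-in for the renderer's output: one grayscale byte per pixel.
    let frame = vec![0u8; (w * h) as usize];

    // Expand 8-bit grayscale into RGB24 and upload it to the streaming texture.
    let mut rgb = vec![0u8; (w * h * 3) as usize];
    for (i, &g) in frame.iter().enumerate() {
        rgb[i * 3..i * 3 + 3].copy_from_slice(&[g, g, g]);
    }
    texture
        .update(None, &rgb, (w * 3) as usize)
        .map_err(|e| e.to_string())?;

    // Scale the small frame buffer up to the window and present it.
    canvas.copy(&texture, None, None)?;
    canvas.present();
    Ok(())
}
```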
are you following the tiny renderer tutorials on github?