Wednesday, May 21, 2014

The Debugger is a Hacker's Katana

It may be difficult to tell from my blog alone, but I have a strong passion for debuggers, especially in the context of reverse engineering: analyzing software at the machine level. Today I shall step outside of video game development and write about debuggers, code analysis, and assembly hacking. All of the fun stuff that I grew up with!

Yesterday I released the first version of node-capstone, a node module that provides bindings for the Capstone disassembler library. It's the first piece of a debugger project that I recently restarted. It's a long story, but the project has been in development for about 5 years. These are its latest developments.

Hacking like a Samurai

First, I should probably establish my credentials in this area. I've written debuggers for five different architectures. Two of those were released publicly: FCEUd and GCNrd. The other three targeted the Nintendo DS, Nintendo 64, and Nintendo Virtual Boy.

There are a few patterns to be spotted here! Obviously, they are all for Nintendo game systems. But more importantly, the debuggers interact with and analyze assembly code. Games for the most recent of these machines (NDS, GCN, and N64) were written in C and C++, but of course we don't have the luxury of inspecting any of this source code. The next best thing is analysis of the machine code itself; transform the machine code into human-readable assembly language, and begin studying!

In the past three years, I have written a lot of JavaScript and Python, which leaves little room for poking around in assembly languages. But there are good debug environments for each: for JavaScript you have Firebug, the Firefox Browser Debugger, and the Webkit Inspector; for Python there's the excellent pdb module. These tools all share many similarities, providing breakpoints for trapping code execution, inspecting and modifying variables at runtime, and injecting code. But most importantly, they are all source-level debuggers.

Source Level vs Machine Level

So let's talk about source level debuggers and how they differ from machine level debuggers. A source level debugger expects the original source code, plus debug info from the compiler that maps machine code addresses to the source code. Some source level debuggers (gdb, et al.) will disassemble machine code if there is no source (or if you ask them to), but these tools are not designed for static analysis. They can give you a murky picture of the machine code at best.

A machine level debugger is the ultimate super king of static analysis. There is no "source of truth" to make sense of the blob of binary being analyzed. Some executable formats provide hints about code structure, like which segments contain code and which contain only data. Then there is the issue of determining which bytes in the code segments are instructions (complicated on architectures with variable-sized instructions, such as x86). Finally, the debugger must determine the overall structure of the code: which groups of instructions comprise a complete function, flow control, how functions refer to one another, how they refer to data, pattern matching and heuristics to identify common library functions, and so on.

With static analysis you can go even one step further, and determine how sets of instructions can be "decompiled" into a high-level language. A simple example is a load instruction followed by an arithmetic instruction that adds 5, and a store instruction back to the same address; this set could be equivalent to a variable assignment like the following in C:
x += 5;

This method of analysis is not necessary in a source level debugger. But it becomes important with fairly complex code in a machine level debugger. The above snippet of C is immediately recognizable, compared to the sample assembly below:
ldr  r3, [r4, #8]
add  r3, r3, #5
str  r3, [r4, #8]

It is this sophisticated analysis that I want my debugger to do. I have a long, long way to go before it gets to that point.
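The load/modify/store pattern above can be sketched as a toy matcher. The instruction objects below are invented for illustration (real analysis must also track register reuse, aliasing, and intervening instructions):

```javascript
// Toy pattern matcher: recognize load / add-immediate / store back to
// the same address, and rewrite the triple as a C-style compound
// assignment. The instruction format here is invented, not Capstone's.
function matchAddAssign(insns) {
    var ld = insns[0], add = insns[1], st = insns[2];
    if (ld.op === "ldr" && add.op === "add" && st.op === "str" &&
        ld.dest === add.dest && add.dest === st.src &&
        ld.addr === st.addr) {
        return "x += " + add.imm + ";";
    }
    return null;
}

var snippet = [
    { op: "ldr", dest: "r3", addr: "[r4, #8]" },
    { op: "add", dest: "r3", src: "r3", imm: 5 },
    { op: "str", src: "r3", addr: "[r4, #8]" }
];

console.log(matchAddAssign(snippet)); // x += 5;
```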

The Project

I've explained enough to describe my intention: building a machine level debugger with awesome analysis tools. But I haven't really touched on internal design, or even specific target architectures or languages. And with good reason! This debugger is taking an architecture-independent approach, meaning it should be able to do a decent job debugging anything that it can disassemble, and the best job with architectures that it can analyze automatically.

To achieve this abstraction, the debugger needs a solid core with all architectures supported by plugins or modules. Even the tracing of running processes will be written with an abstraction layer; a protocol I'm designing called Scalable Remote Debugger Protocol. SRDP provides a generic interface for transporting raw data and issuing primitive debugger commands like "add breakpoint". The protocol also supports a notification mechanism for event handling (like a breakpoint being triggered).
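As a rough sketch of what a primitive command might look like on the wire (the opcode and packet layout here are invented; the actual protocol is still being designed):

```javascript
// Hypothetical SRDP-style command framing: a one-byte opcode followed
// by a fixed payload. None of these values come from a real spec.
var OP_ADD_BREAKPOINT = 0x01;

function encodeAddBreakpoint(address) {
    var buf = new Buffer(5);              // 2014-era node Buffer API
    buf.writeUInt8(OP_ADD_BREAKPOINT, 0); // opcode
    buf.writeUInt32BE(address, 1);        // 32-bit target address
    return buf;
}

function decodeCommand(buf) {
    return {
        opcode: buf.readUInt8(0),
        address: buf.readUInt32BE(1)
    };
}

var pkt = encodeAddBreakpoint(0x80003100);
var cmd = decodeCommand(pkt);
// cmd.opcode === OP_ADD_BREAKPOINT, cmd.address === 0x80003100
```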


The protocol is still in the design phase, having undergone many revisions already. The latest developments in protocol land have been separating the data blocks from the data streams. CPU memory is an example of a block of data. Things like commands and event notifications are data streams. Conceptually all three can use the same transport, but the endpoints will be a bit different:

  • Data block endpoints are encapsulated as a mountable block storage device, similar to a USB flash drive or SD card. This allows interacting with a machine using native OS utilities; want to read raw memory from a specific address? Open and read the file, just like you are already used to doing!
  • Commands and events are encapsulated in a data stream interface using sockets, which itself uses the same open/read/write/close API as the file system.
Putting the data blocks into a mountable storage interface was an idea from qeed, who is working on something similar for his GameBoy emulator.

SRDP gives us a way to connect debugger targets to debugger UIs. A UI could provide a text mode interface like gdb, or a graphical interface like IDA Pro.

The User Interface

This project has been in development for a long time, and it has gone through plenty of revisions itself. When I initiated the project over 5 years ago, I spent a lot of time researching available GUI toolkits. The only one I found promising was the web stack! HTML is already used to lay out every website in existence, CSS styles them, and JavaScript breathes life into them. The web stack is not tied to any particular operating system, and it works everywhere.

Back in 2010, the only real web stack that could be used outside of a browser was Mozilla's framework, XULRunner. I wrote a few samples with it [1] [2], and was immediately impressed. But at the time, the technology was not mature enough to support development of a full-featured application by a single developer.

XULRunner is still alive and kicking today, in the same capacity as always; it powers Firefox and similar applications. There are three reasons I did not stick with it as my platform of choice:

  1. XUL. This is the name of a structural language with XML syntax that replaces HTML as the GUI layout definition format in XULRunner. You don't so much write HTML+CSS+JavaScript as you write XUL+CSS+JavaScript. It's rather unfamiliar.
  2. XPCOM. This technology allows JavaScript to call functions written in another language (C, C++, Java, Python, etc.). XPCOM itself is written in C++. Even to interface with an existing dynamically linked library, one would have to write an XPCOM component in C++ to create the bindings. This situation has changed since 2010, with the introduction of js-ctypes.
  3. Debugging. Native debugger UIs in XULRunner were non-existent at the time. It was possible to install a version of Firebug which worked with XULRunner, but it just wasn't the same. This situation may also have been alleviated, now that Firefox Browser Debugger is available.

Due to the rough edges in XULRunner, I briefly switched gears away from JavaScript for my debugger UI, and went full-force into Python+Tkinter! While Tk certainly has its place, I'm fully convinced that it is not for developing debugger user interfaces. It gives you only a little control over element styling (nowhere near full control!), and there's a grievous bug in the OS X release that will probably go unfixed for the next decade. The only thing Tkinter has going for it is that it's bundled with Python.

With Python eliminated, I'm back to JavaScript as the host for my application. Being in game development, I was made aware of node-webkit, which is very similar to XULRunner, but uses HTML everywhere instead of XUL, integrates node.js instead of XPCOM, and has the Webkit Inspector built in for debugging. Seems perfect, right?

The problem with node.js is that native modules must be written in C++ (if you can't tell, I have a major aversion to this language). But then I came across the ffi module, which can be compared to js-ctypes. Now it's perfect!

Baby Steps

I have not yet decided on how I want to render the disassembly view or hex editor (two widgets that are required in any machine level debugger), but I know exactly how I'm going to implement the disassembler itself!

In the Python experiment, I wrote a complete MIPS disassembler directly in Python. It wasn't going to be the fastest disassembler ever, but it sure was easy to write and extend! Not wanting to rewrite it in JavaScript, I decided to look for an open source library that would suit my needs. I found Capstone. It is awesome.

Capstone is written in C, based on LLVM, and supports five architectures: x86 (including x86_64), ARM, ARM64, MIPS, and PowerPC, with two more (SystemZ and Sparc) coming in the next version. These are all interesting to me (especially x86_64, a disassembler I do NOT want to write myself), and the library should be straightforward to extend with other architectures that I want.

All I need now is JavaScript bindings for it!

var capstone = require("capstone");

var code = new Buffer([
    0x55, 0x48, 0x8b, 0x05, 0xb8, 0x13, 0x00, 0x00
]);

var cs = new capstone.Cs(capstone.ARCH.X86, capstone.MODE.X64);

cs.disasm(code, 0x1000).forEach(function (insn) {
    console.log(
        "0x%s:\t%s\t%s",
        insn.address.toString(16), insn.mnemonic, insn.op_str
    );
});

cs.close();

Welp! That was easy! :D

In all seriousness, I spent a weekend getting this working. Here's the actual output:

0x1000: push    rbp
0x1001: mov     rax, qword ptr [rip + 0x13b8]

It's nearly identical to the equivalent Python example; the one difference is the close() method, which was unavoidable in JavaScript because there are no destructors.

It is written as a node module using ffi, and has been released on GitHub under the terms of the MIT license. This will provide the "solid core" that I mentioned earlier. Hook the disassembler up with some static analysis tools, an awesome renderer/editor, and SRDP, and you have a modular debugger foundation that runs everywhere and debugs everything!

It's still just a tool

While that sounds quite glorified and romantic, it must be said that even the one-debugger-to-rule-them-all won't make you the best hacker in the world. It takes years of practice to use such a tool effectively. And it takes many more years just to create the tool, and to refine it to perfection.

The debugger is my katana.

Sunday, May 11, 2014

Trials of Documenting JavaScript Source Code

This should be a solved problem by now. It's 2014, for crying out loud! Documenting source code in other languages is pretty much trivial: for Python there's Sphinx, Ruby has RDoc, for Java you have JavaDoc, and then there's Doxygen if you need to document C/C++ code. These are all de facto standards in their respective realms. What do we have for JavaScript?

A horrible wart called JSDoc3.

I hate JSDoc3. It's slow, the default template is unacceptably bad, it's difficult to create links between symbols in a semantic way, formatting requires writing HTML junk into your doc comments, ...

JSDoc3 is so bad, there are tons of replacements available:

  • Docco
  • dox-foundation
  • Doxx
  • YUIDoc
  • dox-docco

And plenty of forks that were omitted. (I've only included generators written in JavaScript in this list. I've also seen Sphinx, JSDuck, and some others that I won't go into, because they cannot be used as node modules.)

Docco doesn't support structured comments, and it's really designed for micro projects or tutorials (see Jasmine documentation for an example), so that's out.

dox-foundation is a good start with markdown support, but organization is a major issue (everything is sorted by file name!) and it still doesn't solve the problem with linking symbols.

Doxx is also nice (and is a fork of dox-foundation), but it's still not good at organizing generated documentation, and linking between symbols is still a pain. And what's with the jade templates? I like to write my HTML in HTML.

YUIDoc looks just as big and clunky as JSDoc3, and its default template looks like something out of Microsoft or IBM.

dox-docco is similar to Docco, but with structured comment support. It's also best for tutorials.

Dissatisfied with everything available, I decided to use dox and Prism (the same components behind dox-foundation, Doxx, and dox-docco) and combine them with doT to create an incredibly fast and flexible documentation generator that will do everything I need for one of my personal projects. Styling is based on Foundation, a la dox-foundation.

Here's what it looks like so far:

node-capstone documentation
This is a screenshot showing the WIP documentation for a project I've been hacking on for the past few weeks. (I will blog about that project later.) The project is a node module, and I wanted an easy way to generate the documentation that meets the following criteria:

  • It has to be blazing fast! No more waiting 30 seconds for a JVM to spin up, please.
  • It must also be lightweight, but not limiting; easy to extend.
  • It should support JSDoc3-style tags in block comments.
  • It must support styling within documentation, preferably without HTML soup.
  • It must be capable of organizing documentation in a more sane manner than sorted-by-filename.
  • It must provide links to contextually relevant information in a semantic way.
  • The theme must not make my eyes bleed.

I think I've accomplished all of these goals, so far. It's a tiny JavaScript app that just does some comment parsing, symbol organization, and template rendering (with a really fast renderer). The template renderer, doT, is awesome because the template itself can run JavaScript, meaning it can do some of the heavy lifting (which is important as you will see later).

dox parses JSDoc3-style block comment tags, and integrates markdown, so that's perfect.

After comments are parsed, symbols are organized, first by kind (classes, functions, constants, miscellaneous). On the TODO list: child symbols below the four top-level categories will be further organized by kind (again), then sorted alphabetically.
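The organization pass might look something like this (the symbol shape is simplified; dox's actual output carries much more information):

```javascript
// Group parsed symbols by kind, then sort each group alphabetically.
var KINDS = ["classes", "functions", "constants", "miscellaneous"];

function organize(symbols) {
    var tree = {};
    KINDS.forEach(function (kind) { tree[kind] = []; });
    symbols.forEach(function (sym) {
        (tree[sym.kind] || tree.miscellaneous).push(sym);
    });
    KINDS.forEach(function (kind) {
        tree[kind].sort(function (a, b) {
            return a.name.localeCompare(b.name);
        });
    });
    return tree;
}

var tree = organize([
    { kind: "functions", name: "disasm" },
    { kind: "functions", name: "close" },
    { kind: "classes",   name: "Cs" }
]);
// tree.functions is now sorted: close, disasm
```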

Data types are already linked properly (the arguments table for INSN_OFFSET() in the screenshot has a type linking to a class) and automatically; there's no special documentation syntax for it! It just works.

The theme is easy on the eyes; it's pretty, it doesn't get in the way with obnoxious frames, drop-down menus, or accordions. There are no bouncy animations, giant titles, or logos; just the juicy meat of the documentation! Syntax highlighting is all there. The full source code for each symbol is included, hidden by default, and automatically highlighted as well; in the screenshot, source code for INSN_OFFSET() is shown.

All this was built in just the last three days or so, and I've been creating it as a grunt task from the start, so hopefully it can be reused again on other projects. The grunt task uses metadata from the project's package.json, and can get additional information (or overrides) from the task options.

Design of the Generator

After parsing comments with dox, the generator does some basic symbol organization to build a tree structure that represents the final documentation output. The tree is simply passed on to the template renderer. The template is responsible for inferring types and generating proper links between symbols; the renderer and parsers do not care about context. Doing the type inference late, during rendering, allows the template to choose link destinations (not just DOM structure and styling), keeping the documentation comment semantics separate from the documentation template structure. Only the template knows where it is putting symbols!
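The late-linking idea can be illustrated with a plain function standing in for the doT template (the tree shape and anchor scheme below are invented):

```javascript
// The symbol tree carries only type names; the template decides at
// render time whether a type name links anywhere, and to what href.
function renderType(typeName, tree) {
    var isClass = tree.classes.some(function (c) {
        return c.name === typeName;
    });
    // Only the template knows its own page layout, so only it can
    // choose the link destination.
    return isClass
        ? '<a href="#class-' + typeName + '">' + typeName + "</a>"
        : typeName;
}

var tree = { classes: [ { name: "Cs" } ] };
renderType("Cs", tree);     // linked to the class anchor
renderType("Number", tree); // plain text, no link
```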

At present it is not ready to take on a project the size of melonJS. In fact, it's not even complete enough to fully document my little node module! You'll even notice in the screenshot there are three kind-of-broken links at the end of the side navigation bar. ;) Only the functions and main pages have templates.

It has been a very pleasant side project, though. Of all the components I'm using, the one with the steepest learning curve has been Foundation ... and that's just a bunch of CSS! dox makes comment parsing dead simple, and provides enough context to create beautiful documentation. Prism was absolutely zero-effort to integrate (after installing a dox fork that uses marked for markdown parsing). And doT is extremely versatile; it's almost ... magical. ;) The last piece, which I haven't mentioned yet, is Font Awesome. Four of the five icons I've used can be seen in the screenshot. The last icon I'm using is for Twitter, and it appears in the page footer if you've specified a twitter handle in the grunt task options. How's that for awesome?

Next Steps

I actually have a small TODO list for this side project, and it happens to be a larger list than the node module which required documentation in the first place! One thing that would be handy is printing the file name and line number alongside the source code, but this will require extra patches to dox; may as well fork it at this point.

Another thing to do is make the side navigation bar "sticky", so I don't have to scroll to the top of the page each time I want to visit a different symbol. This is actually more important than it sounds. ;)

I have not started on the "class", "constants", or "miscellaneous" pages at all! But considering the "functions" page was blank just yesterday (and now it looks like the screenshot) I would say it isn't a lot of trouble to do each page. To be honest, I was expecting it to take longer for some reason.

And lastly, to actually package it as a grunt plugin. Right now it's just embedded into the node module's repo, where it doesn't really belong! That should just be a matter of running grunt-init. But then there's the task of documenting it! Hahaha!