Compile a brainfuck program into WebAssembly binary code.
The resulting WebAssembly code is very inefficient, but we rely on the browser to optimise it: the browser optimises and compiles the WebAssembly into native code, so in practice it runs pretty fast.
As a reference, on my machine the mandelbrot.bf program runs in:
- 5.1s with Chrome 108
- 1.3s with Firefox 108
- 1.3s with Safari 16.1
- 0.9s when compiled directly with bf-llvmlite
- 30+ min with a naive python interpreter
Note that this is not representative of a real-world application: in general, the WebAssembly code would already have been optimised by the compiler that generated it. The code we give to the browser in this demo is comically bad.
This very simple brainfuck program:
>+
would be transpiled into this WebAssembly module (shown here in text format):
;; static program preamble
(module
  (memory (import "js" "mem") 1)
  (import "js" "putc" (func $putc (param i32)))
  (func (export "main")
    (local $ptr i32)
    ;; this is for '>'
    (local.set $ptr (i32.add (local.get $ptr) (i32.const 1)))
    ;; this is for '+'
    (i32.store8 (local.get $ptr) (i32.add (i32.load8_u (local.get $ptr)) (i32.const 1)))
  )
)
The code generates the WebAssembly binary directly into a Uint8Array
buffer, passes it to WebAssembly.instantiate, and calls the exported main function.
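That load path can be sketched like this. The bytes below are a hand-assembled minimal module exporting a no-op "main" so the snippet is self-contained; the real compiler output would take their place. The putc callback here writes to stdout (Node-style); in the browser it would append to the page instead.

```javascript
// Hand-assembled minimal wasm module: magic + version, then type, function,
// export and code sections for a single () -> () function exported as "main".
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // "\0asm" magic + version 1
  0x01, 0x04, 0x01, 0x60, 0x00, 0x00,             // type section: () -> ()
  0x03, 0x02, 0x01, 0x00,                         // function section: 1 func of type 0
  0x07, 0x08, 0x01, 0x04, 0x6d, 0x61, 0x69, 0x6e, 0x00, 0x00, // export "main" (func 0)
  0x0a, 0x04, 0x01, 0x02, 0x00, 0x0b,             // code section: empty body
]);

// The generated module imports a memory and a putc callback from "js".
const importObject = {
  js: {
    mem: new WebAssembly.Memory({ initial: 1 }),
    putc: (c) => process.stdout.write(String.fromCharCode(c)),
  },
};

WebAssembly.instantiate(bytes, importObject).then(({ instance }) => {
  instance.exports.main();
});
```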
The wat2wasm tool can be used to generate a commented WebAssembly binary from the text format, which was very useful when implementing the transpiler:
# wasm text format to commented binary
wat2wasm test.wat -v
# wasm text format to hexdump
wat2wasm test.wat --output=- | hexdump -C