Making Floating-Point Deterministic: Extending WebAssembly with Explicit Rounding Instructions
Short summary: In this series, I extend the official WebAssembly reference interpreter by implementing a custom instruction. This requires modifying the runtime, adding validation logic, and writing spec-compliant test cases.
Changelog:
- 11 November 2025 - Completed all the work described here.
- 27 March 2026 - Published this detailed blog article.
Generic babbling
WebAssembly is often associated with browsers, but at its core it is a portable, sandboxed execution environment with growing relevance on Linux systems.
In this series, I'll go beyond using WebAssembly and focus on modifying the runtime itself. I finally have a contribution to WebAssembly: adding new floating-point instructions with explicit rounding semantics (such as f64.add_floor) to the WebAssembly reference interpreter.
Using this work as a case study, I’ll walk through how the WebAssembly interpreter is structured. Adding a single instruction requires changes across the AST, binary encoding, execution engine, and test suite.
How do WebAssembly runtimes rely on the operating system for memory management, isolation, and execution, and how do these design choices compare to processes and syscalls in the Linux kernel?
We will gain a practical understanding of how a modern sandboxed execution environment works internally, and what it means to extend it safely and correctly.
The concept originated from discussions and was formalized in a WebAssembly proposal:
Make rounding explicit in instruction set
When I was first shown the idea, a friend of mine tried to convince me using raw bit patterns. I'll be honest: I did not fully understand it the first time, but I did understand that it was all about implementing a new instruction in an interpreter.
I'll try to explain it in a way that at least now makes sense to me. If you don't get it, don't worry - I didn't get it the first time either; just jump to the next section instead.
Addition is just transforming bit patterns
When we add two numbers, we really mean transforming two bit patterns into a third bit pattern according to a set of rules.
If both operands and the exact result are representable, addition is exact: 1.0 + 0.0 = 1.0. But when the true sum falls between two representable values, the rounding mode decides which one you get - a tiny contribution can push the result to the next representable value.
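To make this concrete, here is a small Python sketch (Python is just for illustration; the reference interpreter itself is written in OCaml) showing that the sum of 0.1 and 0.2 lands on a different bit pattern than 0.3 - exactly one representable value apart:

```python
import math
import struct

def bits(x: float) -> str:
    """Return the raw 64-bit IEEE-754 pattern of a double as hex."""
    return struct.pack(">d", x).hex()

# The true sum of 0.1 and 0.2 is not representable as a double, so the
# hardware must round. The default mode (round-to-nearest-even) lands
# one ulp above 0.3 - a different bit pattern entirely.
print(bits(0.1 + 0.2))  # 3fd3333333333334
print(bits(0.3))        # 3fd3333333333333

# The two candidates the rounding step chooses between are exactly one
# representable value apart:
assert math.nextafter(0.3, math.inf) == 0.1 + 0.2
```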
Let's look at something strange: IEEE-754, the standard that defines floating-point arithmetic, gives us two zeros:
- 0.0 = 0x0000000000000000
- -0.0 = 0x8000000000000000
Rounding affects which one you get. For example, 123.0 - 123.0 yields +0.0 under the default rounding mode but -0.0 when rounding toward negative infinity - two different bit patterns.
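A quick Python illustration of the two zeros. Pure Python cannot switch the rounding mode, so the subtraction below runs under the default round-to-nearest and produces +0.0 - but the sign of zero is observable either way:

```python
import math
import struct

def bits(x: float) -> str:
    """Return the raw 64-bit IEEE-754 pattern of a double as hex."""
    return struct.pack(">d", x).hex()

pos_zero = 123.0 - 123.0   # +0.0 under the default rounding mode
neg_zero = -0.0

print(bits(pos_zero))  # 0000000000000000
print(bits(neg_zero))  # 8000000000000000

# The two zeros compare equal, yet the bit patterns (and the sign) differ:
assert pos_zero == neg_zero
assert math.copysign(1.0, pos_zero) == 1.0
assert math.copysign(1.0, neg_zero) == -1.0
```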
NaNs
A NaN is not just Not-A-Number. It carries a payload. What is a NaN payload?
In IEEE-754, a 64-bit floating-point number is encoded as follows:
[ sign (1 bit) | exponent (11 bits) | mantissa (52 bits) ]
Inside the mantissa of a NaN, the top bit distinguishes quiet from signaling, and the remaining 51 bits are the payload - room to stash a small amount of data. If we compute NaN + NaN, which NaN do we get? It depends on your hardware.
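We can build a NaN with a chosen payload by assembling the bit pattern directly. In this sketch the 0xBEEF payload is an arbitrary example, and it assumes (as is true on typical platforms) that CPython preserves NaN bits through a pack/unpack round-trip:

```python
import math
import struct

def double_from_bits(b: int) -> float:
    """Reinterpret a 64-bit integer as an IEEE-754 double."""
    return struct.unpack(">d", b.to_bytes(8, "big"))[0]

def bits_of(x: float) -> int:
    """Return the raw 64-bit pattern of a double as an integer."""
    return int.from_bytes(struct.pack(">d", x), "big")

# Build a quiet NaN by hand: all-ones exponent, quiet bit set,
# plus an arbitrary 0xBEEF payload in the low mantissa bits.
EXP_ALL_ONES = 0x7FF << 52
QUIET_BIT = 1 << 51
nan_with_payload = double_from_bits(EXP_ALL_ONES | QUIET_BIT | 0xBEEF)

assert math.isnan(nan_with_payload)
# The payload sits in the low 51 mantissa bits of the pattern...
assert bits_of(nan_with_payload) & 0x7FFFFFFFFFFFF == 0xBEEF
# ...but whether an operation like NaN + NaN propagates it is left
# open by the standard: it depends on the hardware.
```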
TL;DR: Floating-point operations are bit-level transformations with hidden policies
Addition is just a transformation from one bit pattern to another. So is subtraction, multiplication, etc...
The same inputs under a different rounding mode can give a different bit pattern. There is nothing approximate about it; it is a completely different value.
What my friend tried to show me was that floating-point operations are not just arithmetic. They are bit-level transformations with hidden policy decisions and rounding is the biggest hidden policy.
At this point, adding numbers stops looking like math and starts looking like data flowing through a pipeline.
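As a sketch of what explicit-rounding semantics could mean, here is a hypothetical Python model of f64.add_floor: the exact sum, rounded toward negative infinity. The function name follows the instruction from the proposal, but the code is my own illustration for finite inputs, not the spec text:

```python
import math
from fractions import Fraction

def f64_add_floor(a: float, b: float) -> float:
    """Hypothetical model of f64.add_floor: compute the exact sum,
    then round toward negative infinity (finite inputs only)."""
    exact = Fraction(a) + Fraction(b)       # exact rational sum
    nearest = float(exact)                  # round-to-nearest-even
    if Fraction(nearest) > exact:           # overshot the exact value:
        nearest = math.nextafter(nearest, -math.inf)  # step down one ulp
    return nearest

# Under round-to-nearest, 1.0 + (-2**-60) snaps back up to 1.0;
# rounding toward negative infinity must land one ulp below it.
assert 1.0 + -(2 ** -60) == 1.0
assert f64_add_floor(1.0, -(2 ** -60)) == math.nextafter(1.0, -math.inf)
```

Same inputs, different rounding policy, different bit pattern - which is exactly why the policy deserves to be spelled out in the instruction name.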
Having an idea is one thing; getting it accepted is another. It took at least two years of discussions for the proposal to be accepted. I was lucky enough to start my work only after the proposal had been accepted.
Finally, I received the long-awaited message that the proposal had been accepted. It was time to get to the implementation.
Continue to Chapter 1, where we download the WebAssembly reference interpreter, compile it, and run it.