Performance Benchmarks
In infrastructure, architectural theory must be backed by raw metrics. Omni-MDX was engineered to eliminate the parsing bottleneck, whether you are rendering a static blog or processing thousands of user-submitted research documents in real time.
To prove the efficacy of our Universal Rust Core and the Omni-Core Protocol (OCP), we benchmarked our engine against the industry standard (@mdx-js/mdx).
Methodology
The benchmarks were executed on a standard cloud environment (AWS c6g.xlarge: ARM64, 4 vCPUs, 8 GB RAM) running Node.js v20.
We used a Heavy Document Workload: a 10,000-line MDX file containing:
- 1,500 standard Markdown nodes (paragraphs, lists).
- 500 complex JSX components with nested attributes.
- 200 LaTeX block equations.
Each test measures the time taken from raw string input to a fully usable Abstract Syntax Tree (AST) in the target runtime.
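The measurement loop can be sketched as follows. This is an illustrative harness, not the actual benchmark suite: `parseToAst` is a trivial placeholder standing in for the real engine call, so the snippet stays self-contained and runnable.

```typescript
import { performance } from "node:perf_hooks";

// Placeholder parser standing in for the real engine entry point;
// it just splits lines into crude "nodes" so this sketch runs on its own.
function parseToAst(source: string): { type: string; value: string }[] {
  return source.split("\n").map((line) => ({ type: "node", value: line }));
}

// Time `iterations` runs from raw string input to a usable AST,
// mirroring the methodology described above.
function benchmark(source: string, iterations: number) {
  parseToAst(source); // warm-up so one-time setup cost doesn't skew the average

  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    parseToAst(source);
  }
  const totalMs = performance.now() - start;
  const avgMs = totalMs / iterations;
  return { avgMs, opsPerSec: 1000 / avgMs };
}

const heavyDoc = "# heading\n\nSome *MDX* content.\n".repeat(1000);
const result = benchmark(heavyDoc, 50);
console.log(`avg: ${result.avgMs.toFixed(2)} ms, ops/sec: ${result.opsPerSec.toFixed(0)}`);
```

A real run would substitute the engine's parse call for `parseToAst` and feed it the Heavy Document Workload described above.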
1. Parsing Speed (String to AST)
This metric measures the raw computational speed of the lexer and parser. Because Omni-MDX executes pre-compiled native machine code (via napi-rs), the parsing path never touches the V8 engine's interpreter or JIT at all.
| Engine | Environment | Avg Time (ms) | Ops / sec |
|---|---|---|---|
| Omni-MDX (Rust Native) | Node.js (FFI) | 1.8 ms | ~550 |
| Omni-MDX (WASM Fallback) | Edge / Browser | 3.4 ms | ~290 |
| @mdx-js/mdx | Node.js (V8) | 85.2 ms | ~11 |
Result: Omni-MDX Native is approximately 47x faster than the leading JavaScript parser for heavy workloads.
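The throughput and speedup figures above follow directly from the average parse times. A quick sanity check of the arithmetic (the numbers are taken from the table; minor differences come from rounding):

```typescript
// Convert an average parse time (ms per document) into documents per second.
const msToOpsPerSec = (avgMs: number): number => Math.round(1000 / avgMs);

const nativeMs = 1.8;  // Omni-MDX native
const wasmMs = 3.4;    // Omni-MDX WASM fallback
const jsMs = 85.2;     // @mdx-js/mdx

console.log(msToOpsPerSec(nativeMs)); // → 556 (table: ~550)
console.log(msToOpsPerSec(wasmMs));   // → 294 (table: ~290)
console.log(msToOpsPerSec(jsMs));     // → 12  (table: ~11; 11.7 before rounding)

// Speedup of native over the JavaScript parser:
console.log((jsMs / nativeMs).toFixed(1)); // → "47.3", quoted as ~47x
```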
2. Memory Footprint & GC Spikes
Speed is only half the equation. Traditional JavaScript parsers allocate hundreds of thousands of small objects during the parsing phase, causing the Garbage Collector (GC) to spike and block the main thread.
We measured the peak memory allocated during the parsing of 100 heavy documents concurrently.
| Engine | Peak RAM Usage | GC Thread Blocking |
|---|---|---|
| Omni-MDX (Zero-Copy OCP) | 14 MB | 0 ms |
| @mdx-js/mdx | 215 MB | 120 ms |
Result: Thanks to our OCP binary transfer and Rust’s ownership model, Omni-MDX uses 15x less memory and completely eliminates Garbage Collection pauses in the host runtime.
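Both columns of this table can be measured from Node.js itself. The sketch below is an assumed version of such a harness, not the one used for the published numbers: it observes GC pauses via `perf_hooks` and samples heap usage around a placeholder workload.

```typescript
import { PerformanceObserver } from "node:perf_hooks";

// Accumulate the total time the GC blocks the thread while observing.
let gcBlockedMs = 0;
const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) gcBlockedMs += entry.duration;
});
obs.observe({ entryTypes: ["gc"] });

// Sample heap usage around a workload and report the peak in MB.
// A full harness would sample on an interval while 100 documents
// parse concurrently, rather than only before and after.
function peakHeapDuring(work: () => void): number {
  let peak = process.memoryUsage().heapUsed;
  work();
  peak = Math.max(peak, process.memoryUsage().heapUsed);
  return peak / (1024 * 1024);
}

const peakMb = peakHeapDuring(() => {
  // Placeholder workload: allocate many small objects, the exact
  // pattern that drives GC spikes in JavaScript parsers.
  const nodes = Array.from({ length: 100_000 }, (_, i) => ({ i }));
  void nodes.length;
});
obs.disconnect();
console.log(`peak heap: ${peakMb.toFixed(1)} MB, GC blocked: ${gcBlockedMs.toFixed(1)} ms`);
```

Because the zero-copy OCP path allocates its AST on the Rust side, the JavaScript heap sampled here stays nearly flat, which is why the GC column reads 0 ms.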
3. Cold Start Time (Serverless & Edge)
For modern web architectures (Vercel, AWS Lambda), cold start times are critical. Loading a massive JavaScript parsing library into memory incurs a significant initialization penalty.
Omni-MDX is pre-compiled. Loading the binary (.node or .wasm) is essentially a flat memory-mapping operation, with no JavaScript source left to parse or compile.
- Omni-MDX Cold Start: ~5 ms
- Traditional JS Parsers: ~150-300 ms (depending on module resolution and V8 parsing).
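Cold start can be approximated as the wall-clock cost of the first module load. A minimal sketch, assuming you point it at your own binding: the specifier below uses a built-in module purely as a runnable stand-in for a real `.node`/`.wasm` entry point.

```typescript
import { performance } from "node:perf_hooks";

// Measure the time for a module's first load, which pays the full
// resolution + initialization cost that dominates cold starts.
async function coldStartMs(specifier: string): Promise<number> {
  const start = performance.now();
  await import(specifier);
  return performance.now() - start;
}

// "node:path" is a stand-in; substitute the path to your engine's
// binding (or a large JS parser package) to reproduce the comparison.
coldStartMs("node:path").then((ms) =>
  console.log(`cold start: ${ms.toFixed(2)} ms`)
);
```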
The Verdict
Omni-MDX proves that document parsing should be treated as a systems engineering problem. By offloading the AST generation to a Rust core and utilizing a binary transfer protocol, we provide a foundation that is not just marginally faster, but operates in an entirely different performance category.
This extreme efficiency is what makes Omni-MDX capable of scaling beyond 2D documents into real-time 3D spatial rendering.