Benchmarks & Performance

Last Updated March 24, 2026

In the modern web ecosystem, parsing Markdown and JSX is often the most expensive operation in a content-heavy build pipeline. When generating thousands of static pages in Next.js or processing vast amounts of documentation, the JavaScript bottleneck becomes painfully obvious.

We built Omni-Core to eliminate this bottleneck entirely. Here are the numbers.

Execution Time (Lower is Better)

Target: 500-line complex .mdx file

  • Traditional JS: 12.50 ms
  • Omni-MDX (WASM): 0.85 ms
  • Omni-MDX (Native Node): 0.53 ms (23x faster)

The 23x Speed Multiplier

In our standardized test suite, Omni-MDX consistently outperforms traditional JavaScript-based MDX parsers (like the remark ecosystem) by a factor of 20x to 25x depending on the complexity of the file.

When parsing a standard 500-line MDX document containing mixed Markdown, React components, and LaTeX formulas:

  • Traditional JS Parser: ~12.5 milliseconds per file.
  • Omni-MDX (Native): ~0.53 milliseconds per file (~0.85 ms via WASM).

If your documentation site has 2,000 pages, Omni-MDX reduces the AST generation time from 25 seconds down to just 1 second.
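The arithmetic behind that claim is a quick sanity check using the per-file figures quoted above:

```javascript
// Back-of-envelope build-time estimate, using the benchmark
// numbers quoted on this page.
const pages = 2000;
const jsMsPerFile = 12.5;   // Traditional JS parser
const omniMsPerFile = 0.53; // Omni-MDX native addon

const jsTotalSec = (pages * jsMsPerFile) / 1000;     // ≈ 25 s
const omniTotalSec = (pages * omniMsPerFile) / 1000; // ≈ 1 s

console.log(jsTotalSec, omniTotalSec);
```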

Methodology

We believe in transparent, reproducible benchmarks. Our performance metrics are continuously tracked in our CI/CD pipeline across all supported environments.

1. JavaScript / WebAssembly (tinybench)

For the JS ecosystem, we use tinybench (the benchmarking tool powering Vitest). We measure the exact time it takes to go from a raw string to a fully queryable AST object in V8. This includes the WASM instantiation, the Rust parsing, the OCP Binary encoding, and the Zero-Copy JavaScript decoding.
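tinybench itself handles warmup and statistical aggregation; the core measurement loop it performs can be sketched with nothing but the `performance` global. The parser below is a stand-in placeholder, not the real omni-mdx API:

```javascript
// Stand-in parser: the real benchmark calls the WASM/native omni-mdx
// binding here. This placeholder just tokenizes lines.
function parse(source) {
  return source.split("\n").map((line) => ({ type: "line", text: line }));
}

// Measure raw string -> AST over many iterations, mirroring what
// tinybench reports as mean time per operation.
function bench(fn, input, iterations = 1000) {
  // Warmup, so JIT compilation doesn't skew the first samples.
  for (let i = 0; i < 50; i++) fn(input);
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn(input);
  return (performance.now() - start) / iterations; // ms per op
}

const doc = "# Heading\nSome **bold** text with $x^2$.\n".repeat(250);
const msPerParse = bench(parse, doc);
```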

2. Python (pytest-benchmark)

For our Python bindings (omni-mdx), we use pytest-benchmark. We compare Omni-Core against popular Python Markdown libraries. Because PyO3 allows direct memory access, the Python extension achieves near-identical speeds to the Node.js Native Addon, parsing hundreds of megabytes of text per second.

3. Rust Core (criterion)

At the lowest level, we benchmark the isolated Rust parser using criterion. This allows us to track micro-optimizations in the Lexer. The raw Rust engine operates in the realm of microseconds (µs), its flat memory layout keeping the hot working set resident in the CPU’s L1 cache.

Beyond Speed: Memory Predictability

While the execution speed is the headline feature, the most significant architectural win is Memory Predictability.

JavaScript parsers allocate millions of temporary objects (AST nodes, tokens, regex matches) on the V8 heap, triggering massive Garbage Collection (GC) pauses that block the main thread.

Because Omni-MDX performs all allocations in Rust and transfers data via a flat OCP Binary buffer, the host language’s memory heap remains virtually untouched. Your server uses less RAM, avoids GC spikes, and maintains a perfectly flat performance curve even under heavy concurrent load.
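The flat-buffer idea can be illustrated in a few lines. The record layout below (a fixed-width node of type byte plus source span) is invented for illustration and is not the actual OCP Binary format:

```javascript
// Hypothetical flat layout: each node is 9 bytes —
// u8 node type, u32 span start, u32 span end (little-endian).
const RECORD_SIZE = 9;

function encodeNodes(nodes) {
  const buf = new ArrayBuffer(nodes.length * RECORD_SIZE);
  const view = new DataView(buf);
  nodes.forEach((n, i) => {
    const off = i * RECORD_SIZE;
    view.setUint8(off, n.type);
    view.setUint32(off + 1, n.start, true);
    view.setUint32(off + 5, n.end, true);
  });
  return buf;
}

// Lazy reader: nothing is materialized up front; fields are read
// straight out of the shared buffer only for the node you ask for,
// so the host heap never holds the whole tree as objects.
function nodeAt(buf, i) {
  const view = new DataView(buf, i * RECORD_SIZE, RECORD_SIZE);
  return {
    type: view.getUint8(0),
    start: view.getUint32(1, true),
    end: view.getUint32(5, true),
  };
}

const buf = encodeNodes([
  { type: 1, start: 0, end: 9 },   // e.g. a heading node
  { type: 2, start: 10, end: 42 }, // e.g. a paragraph node
]);
```

Whatever the real wire format looks like, the design point is the same: one contiguous allocation crossing the FFI boundary instead of millions of small host-language objects.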

