TOAQ GROUP © 2024 - 2026

Released under the MIT License.


Example: Advanced Rendering & Custom Components

Last Updated March 22, 2026

The true strength of Omni-Core lies in its ability to interweave rich text with highly interactive React components, which makes it well suited to documenting complex datasets or experimental protocols.

ℹ️ Information
Full Source Code: Clone and test this environment directly from omni-mdx-sandbox/next.

Rendering Specific Components

Imagine you are documenting an experimental protocol and need to display an interactive player linked to a specific vocal dataset. You can map a custom XML tag present in your MDX directly to a heavy, interactive React component.

1. The Source MDX File

In your Markdown document, you insert the component as a plain JSX-style tag:

```mdx
## Vocal Dataset Analysis (Cohort A)

The following excerpt demonstrates intonation variations during the cognitive load test:

<VocalDatasetPlayer />
```

2. The React Mapping (mdx-components.tsx)

On the Next.js side, you intercept this tag to inject your interactive client component:

```tsx
"use client";

import { VocalDatasetPlayer } from '@/components/science/VocalDatasetPlayer';
import { InlineMath, BlockMath } from '@/components/science/MathRenderer';

export const MDX_COMPONENTS = {
  // Redefining standard HTML tags (the class name is illustrative)
  h1: (props: any) => <h1 className="doc-heading" {...props} />,

  // Mapping your custom analysis components
  VocalDatasetPlayer: (props: any) => <VocalDatasetPlayer {...props} />,

  // Rendering mathematics (LaTeX) extracted by the AST
  InlineMath: ({ content }: { content: string }) => <InlineMath content={content} />,
  BlockMath: ({ content }: { content: string }) => <BlockMath content={content} />,
};
```
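Once the map is defined, Next.js needs to pick it up. The sketch below follows the standard Next.js App Router convention of a root-level `mdx-components.tsx` exporting `useMDXComponents`; the types are simplified stand-ins to keep the snippet self-contained, and omni-mdx may expose its own registration hook instead.

```typescript
// Simplified stand-in for the MDXComponents type from 'mdx/types':
// tag name → component, typed loosely to keep this sketch dependency-free.
type MDXComponents = Record<string, unknown>;

// Stand-in for the MDX_COMPONENTS map defined above.
const MDX_COMPONENTS: MDXComponents = {
  VocalDatasetPlayer: () => null, // placeholder renderer
};

// Next.js App Router convention: the MDX runtime calls this per document,
// passing any per-document components; merge them with project-wide defaults.
export function useMDXComponents(components: MDXComponents): MDXComponents {
  // Per-document overrides win over the project-wide map.
  return { ...MDX_COMPONENTS, ...components };
}
```

With this in place, every MDX page rendered by the app resolves `<VocalDatasetPlayer />` and friends through the same map, without per-page wiring.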

Thanks to this strict separation between parsing (Rust) and rendering (React), parsing cost stays independent of how heavy the mapped components are: a document can embed dozens of charts, mathematical equations, or audio samples without slowing the engine.
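The claim above can be made concrete with a toy traversal. Assuming a deliberately simplified node shape (real omni-mdx AST nodes carry more fields), the parser's output is plain data that can be walked, counted, or serialized without ever touching React:

```typescript
// Hypothetical, simplified AST node; real omni-mdx nodes differ.
interface AstNode {
  type: string;        // e.g. "root", "paragraph", "jsx"
  name?: string;       // component name, present on "jsx" nodes
  children?: AstNode[];
}

// Collect every custom component referenced in a parsed tree.
// The walk touches plain data only — no component is instantiated,
// which is why heavy players or charts cannot slow the parse stage.
function collectComponents(node: AstNode, out: Set<string> = new Set()): Set<string> {
  if (node.type === "jsx" && node.name) out.add(node.name);
  for (const child of node.children ?? []) collectComponents(child, out);
  return out;
}
```

Rendering then reduces to looking up each collected name in `MDX_COMPONENTS`, deferred until React mounts the page.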

