LLM Logic

Deterministic reasoning on-chain.

Protocol NAVI employs a specialized inference engine to execute Large Language Model logic within the constraints of the Solana runtime.

Model Specifications

Architecture: Encoder-Only Transformer
Tokenizer: Custom Event-Tokenizer (TRANSFER, SWAP, LP_CHANGE)
Context Window: Event Sequence-based
Training Data: Geyser Feed + Historical Indexed Storage
Primary Task: Contextual Reasoning & Anomaly Classification
Inference Latency: Sub-second (optimized via Rust FFI)
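To make the event-tokenizer row concrete, here is a minimal sketch of how on-chain events could be mapped to integer token IDs for an encoder-only model. Only the event names (TRANSFER, SWAP, LP_CHANGE) come from the table above; the vocabulary IDs, the amount-bucketing scheme, and all function names are illustrative assumptions, not the protocol's actual implementation.

```python
# Illustrative sketch only: the real NAVI tokenizer is not public.
# Event names come from the spec table; IDs and bucketing are assumptions.
EVENT_VOCAB = {"TRANSFER": 1, "SWAP": 2, "LP_CHANGE": 3, "UNK": 0}

def bucket_amount(lamports: int) -> int:
    """Coarse magnitude bucket so raw amounts share vocabulary slots."""
    return min(lamports.bit_length(), 63)

def tokenize_events(events):
    """Turn a sequence of (event_type, amount) pairs into token IDs:
    one event-type token followed by one amount-bucket token per event."""
    ids = []
    for kind, amount in events:
        ids.append(EVENT_VOCAB.get(kind, EVENT_VOCAB["UNK"]))
        ids.append(100 + bucket_amount(amount))  # offset past event IDs
    return ids
```

For example, a 1,000,000-lamport SWAP becomes `[2, 120]` (event ID 2, then magnitude bucket 20 offset by 100); an unknown event type falls back to the `UNK` token rather than failing, which keeps the token stream length predictable.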

Inference Flow

Determinism

To ensure consensus among validators, the LLM inference must be strictly deterministic. We achieve this by:

  • Fixed Seed: All random number generation is seeded by the block hash.
  • Quantization: Using integer-only arithmetic to avoid floating-point non-determinism.
  • Greedy Decoding: Selecting the most probable token at each step.
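The three mechanisms above can be sketched together. The following is a simplified illustration, not NAVI's actual engine: the seed-derivation function, the use of SHA-256, and the integer-logit representation are all assumptions made for the example.

```python
import hashlib

def seed_from_block_hash(block_hash: bytes) -> int:
    """Fixed seed (sketch): derive the RNG seed from the block hash so
    every validator starts from identical state."""
    return int.from_bytes(hashlib.sha256(block_hash).digest()[:8], "big")

def greedy_pick(int_logits):
    """Greedy decoding over integer-quantized logits: select the most
    probable token. Integer scores avoid floating-point rounding drift,
    and ties break toward the lowest token ID so all validators agree."""
    best_id, best_score = 0, int_logits[0]
    for tok, score in enumerate(int_logits):
        if score > best_score:  # strict '>' keeps the earliest maximum
            best_id, best_score = tok, score
    return best_id
```

Note the deterministic tie-break: with floating-point logits, two validators could disagree on which of two near-equal scores is larger, whereas integer comparison plus a fixed tie rule guarantees a single answer for any input.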