LLM Logic
Deterministic reasoning on-chain.
Protocol NAVI employs a specialized inference engine to execute Large Language Model logic within the constraints of the Solana runtime.
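For orientation, the sketch below shows a generic Solana program entrypoint of the kind such an engine would sit behind. The instruction layout and the hand-off to the inference engine are placeholder assumptions, not NAVI's actual interface; only the entrypoint shape is standard Solana.

```rust
use solana_program::{
    account_info::AccountInfo, entrypoint, entrypoint::ProgramResult, msg, pubkey::Pubkey,
};

// Standard Solana entrypoint: the runtime hands the program its accounts and
// raw instruction data, and all work must fit the transaction's compute budget.
entrypoint!(process_instruction);

fn process_instruction(
    _program_id: &Pubkey,
    _accounts: &[AccountInfo],
    instruction_data: &[u8],
) -> ProgramResult {
    // Placeholder: treat the instruction data as prompt token ids and hand
    // them to the (hypothetical) on-chain inference engine.
    msg!("inference request: {} bytes of prompt data", instruction_data.len());
    Ok(())
}
```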
Model Specifications
Inference Flow
Determinism
To ensure consensus among validators, LLM inference must be strictly deterministic: every validator has to produce bit-identical output for the same input. We achieve this as follows (a minimal sketch appears after the list):
- Fixed Seed: All random number generation is seeded from the block hash, so every validator processing the same block derives the same random sequence.
- Quantization: Using integer-only arithmetic to avoid floating-point non-determinism.
- Greedy Decoding: Selecting the most probable token at each step.
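
The sketch below illustrates these three rules in plain Rust. The fixed-point logit type, the 8-byte seed derivation, the tie-breaking rule, and all function names are illustrative assumptions rather than NAVI's actual implementation.

```rust
/// Fixed-point logits (assumed Q16.16 scale): integer-only arithmetic avoids
/// floating-point rounding differences across validator hardware.
type QLogit = i32;

/// Derive a deterministic RNG seed from the block hash so every validator that
/// processes the same block derives the same seed. The layout (first 8 bytes,
/// little-endian, of a 32-byte hash) is a hypothetical choice.
fn seed_from_block_hash(block_hash: &[u8; 32]) -> u64 {
    u64::from_le_bytes(block_hash[..8].try_into().unwrap())
}

/// Greedy decoding: pick the single most probable token. Ties are broken by
/// the lowest token id so the result never depends on iteration order.
fn greedy_argmax(logits: &[QLogit]) -> usize {
    let mut best_id = 0usize;
    let mut best_val = QLogit::MIN;
    for (id, &val) in logits.iter().enumerate() {
        if val > best_val {
            best_val = val;
            best_id = id;
        }
    }
    best_id
}

fn main() {
    // Same inputs on every validator -> same outputs on every validator.
    let block_hash = [7u8; 32];
    let logits: Vec<QLogit> = vec![12_000, 98_304, 98_304, -4_200];

    let seed = seed_from_block_hash(&block_hash);
    let next_token = greedy_argmax(&logits);

    println!("seed = {seed}, next token id = {next_token}");
}
```

An explicit tie-breaking rule matters because an argmax over equal logits would otherwise depend on implementation details, which is exactly the kind of divergence consensus cannot tolerate.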
