Frequently Asked Questions
AI Verification FAQ
The Brief
Negentropy, Entropy & AI Hallucination
What is negentropy in AI?
Negentropy — negative entropy — is the principle that structured systems require continuous effort to maintain order against the natural drift toward disorder. In AI, negentropy refers to the process of imposing verifiable structure on probabilistic output. Vertix applies negentropy to AI governance by independently measuring the accuracy of every claim an AI makes, producing confidence scores that quantify how much order was successfully imposed on each assertion.
Why does AI hallucinate?
Large language models generate text by predicting one token at a time, with each prediction conditioned on every token before it. Over long sequences, each probabilistic choice compounds the uncertainty of every prior choice — entropy accumulates. Hallucination is not a bug or a failure of training. It is the natural thermodynamic consequence of running a probabilistic system over a long enough sequence without external verification.
What is AI entropy?
In the context of AI output, entropy refers to the accumulating uncertainty in a language model’s predictions over long sequences. Every token prediction is a probabilistic choice, and over thousands of tokens — such as a 50-page contract review or a full compliance check — these choices compound. The result is drift away from ground truth, which manifests as hallucination, fabricated citations, and invented facts presented with full confidence.
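The arithmetic behind this compounding is easy to sketch. The Python below is illustrative only: the per-token accuracy figure is an assumption chosen for demonstration, not a measured property of any model. It shows why even tiny per-token uncertainty overwhelms a long output.

```python
# If each token prediction is independently "accurate" with probability p,
# the chance that an n-token output contains no drift at all is p**n.
# This is a simplification (real token errors are not independent), but it
# shows how small per-token uncertainty compounds over long sequences.

def chance_fully_accurate(p: float, n: int) -> float:
    return p ** n

for n in (100, 1_000, 10_000, 50_000):
    print(f"{n:>6} tokens at p=0.9995 per token: {chance_fully_accurate(0.9995, n):.4f}")
```

Under this toy assumption, a 100-token answer is very likely clean, while a fully drift-free 10,000-token output (roughly a 50-page review) becomes vanishingly unlikely.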
Can prompt engineering prevent AI hallucination?
Prompt engineering can improve the starting conditions for a language model, but it cannot change the underlying mechanics. The model still generates a long sequence of probabilistic tokens, and entropy still accumulates with each prediction. A well-crafted prompt may delay drift, but it cannot prevent it. Structural verification — checking AI output against source documents after generation — is required to catch hallucination reliably.
The Product
Verification, Scoring & Audit
What is AI verification?
AI verification is the process of independently checking every claim an AI makes against source documents, regulatory data, or other ground truth. Unlike guardrails or output filters that attempt to constrain generation, verification happens after the AI produces its output — measuring accuracy claim by claim and producing a confidence score for each assertion.
What is AI confidence scoring?
A confidence score is a measurement of how reliably an AI claim can be traced back to verifiable source material. High confidence means the assertion aligns with the original document and applicable regulatory provisions. Low confidence means the claim could not be independently verified and requires human review. Unlike the AI’s own self-assessment, confidence scores in Vertix are computed through independent verification against external sources.
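To make the claim-in, score-out shape of this process concrete, here is a deliberately toy heuristic in Python. The function, the term-matching rule, and the review threshold are all hypothetical illustrations, not Vertix's actual verification method.

```python
from dataclasses import dataclass

@dataclass
class ScoredClaim:
    text: str
    confidence: float  # 0.0 (unverifiable) to 1.0 (fully traceable)
    needs_review: bool

def score_claim(claim: str, source_text: str, threshold: float = 0.8) -> ScoredClaim:
    # Toy heuristic: the fraction of the claim's distinct terms found
    # verbatim in the source document. Real verification is far richer;
    # this only illustrates scoring each claim independently and routing
    # low-confidence claims to human review.
    terms = {t.strip(".,").lower() for t in claim.split()}
    found = sum(1 for t in terms if t in source_text.lower())
    confidence = found / len(terms) if terms else 0.0
    return ScoredClaim(claim, confidence, needs_review=confidence < threshold)

source = "The operating agreement requires unanimous member consent for dissolution."
print(score_claim("Dissolution requires unanimous consent.", source))
```

The key structural point survives the simplification: the score is computed against external source text, not taken from the model's own self-assessment.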
What is an AI audit trail?
An AI audit trail is a complete, exportable record of every step in the verification process — from the original document’s cryptographic hash at upload, through each claim that was verified, to the final confidence scores and flagged items. This provides the documented evidence that regulated organizations need to demonstrate due diligence when using AI for work products that affect compliance, legal, or financial decisions.
What is an AI verification layer?
A verification layer sits between AI output and the humans who act on it. Instead of trusting AI output at face value or reviewing every word manually, a verification layer independently checks each claim, scores confidence, and flags assertions that can’t be traced to source material. The result is a governed output where verified claims are distinguished from unverifiable ones — reducing human review workload while maintaining accountability.
How is Vertix different from AI guardrails?
Guardrails attempt to constrain what an AI can say during generation — filtering inputs and outputs to prevent harmful or inaccurate content. Vertix takes a different approach: the AI produces freely without constraints, and then every claim is independently verified after generation. This means the AI’s full reasoning capability is preserved while accuracy is measured and documented through an external verification process.
Use Cases
Industries & Applications
Does Vertix work for legal documents?
Yes. Vertix verifies AI-generated analysis of legal documents — operating agreements, trust instruments, corporate governance documents — against the actual source text and applicable state statutes. Each claim the AI makes about the document is independently checked and scored. Vertix currently supports legal verification across multiple jurisdictions including New Mexico, Florida, Illinois, New York, and Delaware.
Can Vertix verify financial analysis?
Yes. Vertix verifies AI-generated financial work products — variance analyses, compliance reviews, and financial statement analysis — against source documents and applicable accounting standards. Claims about figures, calculations, and regulatory compliance are independently checked and scored for confidence.
What industries does Vertix support?
Vertix is built for any regulated industry where AI output drives decisions that carry compliance, legal, or financial risk. Current verticals include legal and finance, with bio research, data, sales, marketing, customer support, enterprise search, and product management in development. The same verification engine adapts to each domain through industry-specific regulatory data.
Getting Started
Try Vertix
How do I try Vertix?
Upload a document at app.vertixiq.com. Vertix will generate an AI analysis and then independently verify every claim against the source document. You’ll receive a confidence-scored verification report with a full audit trail. No setup required, no sales call needed.
How long does AI verification take?
A typical contract review takes approximately 3–5 minutes from upload to verified output. This includes AI analysis, claim extraction, independent verification of each assertion, confidence scoring, and audit trail generation.
What file formats does Vertix support?
Vertix currently supports PDF and DOCX document uploads. Each uploaded document receives a SHA-256 cryptographic hash at the point of upload, establishing chain of custody from intake through verified output.
Security & Trust
Data Protection & Chain of Custody
How is my data protected?
Uploaded documents are processed through Vertix’s verification pipeline and then purged from storage after governance is complete. Vertix retains the verification results, confidence scores, and audit artifacts — but not the original source documents. All data is encrypted in transit and at rest, with tenant isolation enforced at the application layer.
What is chain of custody in AI verification?
Chain of custody means every document processed through Vertix receives a SHA-256 cryptographic hash at upload, creating an immutable record that the document has not been altered from intake through verified output. This hash is included in the exportable audit artifact, allowing organizations to prove document integrity to regulators or auditors.
Have a question we didn’t answer?