FAQ

Frequently asked questions.

Honest answers to the hardest questions — including the ones skeptics ask.

A note on transparency
We answer all questions directly. If something is not disclosed, we say so explicitly and explain why. We do not hide limitations behind marketing language. INDETERMINATE is a valid answer here too.
How it works

How does Universal Kernel work?

Universal Kernel is a hybrid discovery motor. When you submit a problem, it first exhausts all classical deterministic approaches — 6 methodologies, 4 variations each, 24 in total. This happens in under 200 milliseconds and requires no AI.

If classical approaches resolve the problem, the motor returns the result with a full SHA-256 audit chain. No LLM is involved. If classical approaches reach a genuine knowledge frontier — meaning 2 or more gaps remain unresolved after exhaustion — the motor queues a job for your configured LLM and returns a job ID immediately.
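The routing rule above can be sketched as pure logic. The status names, and the handling of exactly one remaining gap, are assumptions for illustration; the real API contract may differ:

```python
# Sketch of the post-exhaustion routing rule described above. Status
# names and the one-gap case are assumptions, not the actual API.

def route_after_exhaustion(gaps_remaining: int) -> str:
    """Decide the next step after all 24 classical variations have run."""
    if gaps_remaining == 0:
        return "RESOLVED"          # classical result + SHA-256 audit chain, no LLM
    if gaps_remaining >= 2:
        return "FRONTIER_LLM_JOB"  # genuine knowledge frontier: queue the configured LLM
    return "INDETERMINATE"         # one-gap behavior is unspecified above; assumed here

print(route_after_exhaustion(0))  # RESOLVED
print(route_after_exhaustion(3))  # FRONTIER_LLM_JOB
```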

The LLM proposes new language. A separate falsification agent challenges it. If the proposal survives, it enters the Human Gate where a qualified human reviewer must approve or reject before anything enters the Knowledge Base.

How does this differ from calling an LLM directly?

Three fundamental differences. First, determinism: the LLM is called only after 24 classical variations fail — not as the first resort. Second, falsification: a second independent agent actively tries to disprove the LLM proposal before it advances. Third, human oversight: no LLM output ever enters the Knowledge Base without documented human approval.

When you call an LLM directly, you get a probabilistic response with no audit trail, no formal verification, no human gate, and no EU AI Act compliance. Universal Kernel provides all four as infrastructure — not as optional add-ons.

What does INDETERMINATE mean?

INDETERMINATE is not a failure. It is a documented declaration of a knowledge boundary.

Most AI systems fabricate an answer when they do not know. Universal Kernel returns INDETERMINATE with a full audit trail explaining exactly which variations were tried, which gaps remained, and what the frontier looks like. This is more honest and more useful than a fabricated answer.

INDETERMINATE sessions are stored with full SHA-256 audit chains and can be exported for regulatory documentation. Under EU AI Act Art. 15, accuracy and robustness requirements are better served by a documented INDETERMINATE than by a confident fabrication.

Can the Human Gate be bypassed?

No. The Human Gate cannot be bypassed.

This is not a configuration option or a policy. It is architecturally enforced at the database level. A proposal that reaches the Human Gate has status AWAITING_HUMAN and cannot advance to PROMOTED without a human reviewer submitting a documented decision with a minimum 20-character reasoning string.

The human_reviewer_required: true field in every domain configuration is validated server-side and rejected if false. This field exists to document compliance with EU AI Act Art. 14 — and the motor enforces it, not just records it.
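The gate rules above can be sketched as a small state check. The status names, the human_reviewer_required field, and the 20-character reasoning minimum come from the text; the function names and error shapes are illustrative:

```python
# Minimal sketch of the Human Gate rule described above. Only the
# field names and thresholds come from the text; the rest is assumed.

def validate_domain_config(config: dict) -> None:
    """Server-side check: reject any config where the field is not true."""
    if config.get("human_reviewer_required") is not True:
        raise ValueError("human_reviewer_required must be true")

def promote(status: str, reasoning: str) -> str:
    """A proposal advances only via a documented human decision."""
    if status != "AWAITING_HUMAN":
        raise ValueError("only AWAITING_HUMAN proposals can be promoted")
    if len(reasoning) < 20:
        raise ValueError("reasoning must be at least 20 characters")
    return "PROMOTED"

validate_domain_config({"human_reviewer_required": True})  # passes
print(promote("AWAITING_HUMAN", "Checked against the domain ontology; approved."))
```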

How does the audit chain work?

Every event in a session — creation, exhaustion result, LLM proposal, falsification, human decision — is hashed with SHA-256. Each hash includes the previous hash, creating an immutable chain similar to a blockchain.

If any event is altered after the fact, the chain breaks. This is detectable by any external auditor who holds the chain. The chain is included in every export — JSON, HTML, and CSV formats all contain the full event chain with hashes.

This directly satisfies EU AI Act Art. 12 requirements for record keeping and traceability.
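The chaining property described above can be demonstrated in a few lines. The exact event serialization and genesis value are assumptions; only the each-hash-covers-the-previous-hash structure comes from the text:

```python
import hashlib
import json

# Sketch of the SHA-256 event chain described above: each hash covers
# the previous hash, so altering any past event breaks every later
# link. Serialization and genesis value are assumptions.

def chain_events(events: list[dict]) -> list[str]:
    hashes, prev = [], "0" * 64                       # assumed genesis value
    for event in events:
        payload = (prev + json.dumps(event, sort_keys=True)).encode()
        prev = hashlib.sha256(payload).hexdigest()
        hashes.append(prev)
    return hashes

def verify_chain(events: list[dict], hashes: list[str]) -> bool:
    """Any external auditor holding the chain can recompute it."""
    return chain_events(events) == hashes

events = [{"type": "SESSION_CREATED"},
          {"type": "EXHAUSTION_RESULT", "gaps": 2},
          {"type": "HUMAN_DECISION", "approved": True}]
hashes = chain_events(events)
assert verify_chain(events, hashes)
events[1]["gaps"] = 0                                 # tamper with history
assert not verify_chain(events, hashes)               # the break is detectable
```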

Architecture and security

Do I receive the source code?

The motor runs on our servers. The source code is not distributed, licensed, or shared. You interact with the motor exclusively through the API.

This is the same model used by Anthropic, OpenAI, Stripe, and every major API provider. You do not receive the source code of the Anthropic API when you use Claude. You receive the capability. Universal Kernel works identically.

The API documentation describes every endpoint, every parameter, every response format, and every error code. That is what you need to build on top of the motor. The internal implementation is not part of what you are purchasing.

How is data isolated between clients?

Every instance has a unique instance_id that is the primary key for all data. Sessions, Knowledge Base entries, events, and reviews are all partitioned by instance_id at the database level.

Every API call is authenticated with a developer key, and every operation on an instance verifies that the instance belongs to the authenticated developer. A developer cannot read, write, or modify another developer's instances. An instance cannot access another instance's data.

This is enforced at the query level — not at the application level. Even if application code contained a bug, the database constraints prevent cross-instance data access.
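The two rules above — every instance belongs to exactly one developer, and every read is scoped by instance_id — can be illustrated as follows. The table shape and key names are assumptions, not the actual schema:

```python
# Illustrative version of the isolation rules above. Names are assumed.

INSTANCE_OWNER = {"inst_a": "dev_1", "inst_b": "dev_2"}  # instance_id -> developer

def fetch_sessions(developer_id: str, instance_id: str, rows: list[dict]) -> list[dict]:
    # Ownership check on every operation.
    if INSTANCE_OWNER.get(instance_id) != developer_id:
        raise PermissionError("instance does not belong to this developer")
    # The equivalent of a mandatory WHERE instance_id = ? clause.
    return [row for row in rows if row["instance_id"] == instance_id]

rows = [{"instance_id": "inst_a", "id": 1},
        {"instance_id": "inst_b", "id": 2}]
print(fetch_sessions("dev_1", "inst_a", rows))  # only inst_a rows are visible
```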

Can the motor be reverse-engineered from its API responses?

You can learn what the motor does from API responses. You cannot learn how it does it.

The API responses document outcomes: session IDs, status codes, gap counts, variation counts, SHA-256 hashes, proposals, decisions. These are outputs. The internal implementation — the Constitutional Compiler, the DIPEngine inference engine, the exhaustion algorithm, the VAAR classification protocol — is not present in any API response.

This is equivalent to analysing PostgreSQL query results to reverse-engineer the PostgreSQL internals. The outputs do not contain the implementation. Extensive analysis of outputs will tell you what the motor produces under which conditions. It will not tell you how the motor produces it.

What happens to my data if the service shuts down?

Every session and every Knowledge Base entry can be exported at any time in JSON format, which contains the full SHA-256 audit chain, all events, all proposals, and all human decisions. You own your data and can export it at any moment.

In the event of service discontinuation, developers would receive advance notice and a data export window. The exported JSON files are self-contained and human-readable. They do not require Universal Kernel infrastructure to be useful as compliance documentation.

EU AI Act compliance

Does Universal Kernel make my product EU AI Act compliant by itself?

No, and any provider that claims otherwise is misleading you.

Universal Kernel provides infrastructure that directly satisfies Arts. 12, 13, 14, and 15 for the AI discovery component of your product. It does not cover other aspects of your product — data governance, conformity assessment, CE marking, EU database registration, or obligations that depend on your specific deployment context.

What it does do: it gives you a documented, auditable, human-supervised AI component with a SHA-256 chain ready for regulatory submission. This is a significant portion of the technical compliance work. The rest depends on your product, your sector, and your legal counsel.

Can I still become compliant in time?

For the technical components that Universal Kernel covers — Arts. 12-15 — yes. Integration takes days, not months. The audit chain, human gate, and INDETERMINATE documentation are operational from the first API call.

The broader compliance work — risk classification, conformity assessment, EU database registration — typically takes 8-14 months according to compliance specialists. If you are starting today, you are at the limit of the realistic timeline. Universal Kernel removes the technical infrastructure bottleneck so your remaining time can focus on governance and documentation.

Does the EU AI Act apply to companies outside the EU?

Yes, if your AI system is used within the EU or produces outputs that affect EU residents — regardless of where your servers are located or where your company is registered. This mirrors the extra-territorial scope of GDPR.

A company headquartered in Brazil, the US, or Japan that serves European clients with an AI system falls within scope. The EU-Mercosur agreement (provisional application from May 2026) further accelerates cross-border AI compliance requirements for Latin American companies accessing EU markets.

Pricing and access

Why is pricing per instance?

Because one instance serves exactly one client. This is not a convention — it is architecturally enforced. You cannot share an instance between two clients, and you cannot use one instance for multiple clients regardless of how similar they are.

Per-instance pricing aligns cost with value: each client you serve with Universal Kernel infrastructure has exactly one billing unit. When a client stops, you deactivate the instance and billing stops that day. No minimum terms, no pro-rata complexity.

This model also makes your own pricing to your clients straightforward: your Universal Kernel cost per client is known, fixed, and predictable. You can build your margin on top of it with confidence.

Can we evaluate before committing?

Yes. Request API access and we will work with you on an evaluation arrangement before any billing commitment. We want developers to integrate with confidence, not under pressure.

Contact us at hello@genesis-engine.tech to discuss evaluation access.

Which LLMs are supported?

Any LLM you have access to. The motor supports Anthropic Claude, OpenAI GPT models, any OpenAI-compatible API (Groq, Together, Mistral, local deployments), and Ollama for fully local inference with no external API calls.

You configure the LLM per instance. Different instances can use different LLMs. The motor handles orchestration — calling your LLM only when the deterministic engine declares a knowledge frontier, logging the response, and routing it through falsification and the Human Gate.

Your LLM API key is stored server-side and never exposed in API responses or client-facing interfaces.
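The key-redaction guarantee above can be sketched as a view function over a per-instance configuration. All field names here are assumptions for illustration:

```python
# Sketch of per-instance LLM configuration with server-side key
# redaction, matching the guarantees above. Field names are assumed.

def public_view(instance_config: dict) -> dict:
    """What an API response may expose: everything except the LLM key."""
    return {k: v for k, v in instance_config.items() if k != "llm_api_key"}

config = {
    "instance_id": "inst_a",
    "llm_provider": "ollama",     # Claude, GPT, any OpenAI-compatible API, or Ollama
    "llm_model": "llama3",
    "llm_api_key": "secret",      # stored server-side, never returned
}
print(public_view(config))        # no llm_api_key field in the output
```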

Is there a free tier?

Not currently. Universal Kernel is infrastructure for production AI systems in regulated domains. A free tier with meaningful limits would not be useful for compliance validation, and a free tier without limits would not be sustainable.

If you are evaluating for a genuine production use case, contact us — we can discuss evaluation terms that make sense for your situation.

Still have questions?

If your question is not answered here, we want to hear it. Unanswered questions become FAQ entries. Contact us at hello@genesis-engine.tech or use the contact form.