AI Systems Engineering
Most organisations deploying AI are deploying probabilistic systems — systems that produce outputs without a documentable reasoning path, that cannot explain why they produced a specific output for a specific input, and that produce different outputs for identical inputs under different conditions. For many applications this is acceptable. For any application where a regulator, a judge, a clinician, or a customer requires an explanation of the decision — it is not.
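What "identical inputs, identical outputs" demands is checkable. The sketch below is a minimal reproducibility harness in Python; the `generate` callable and its signature are our illustration, standing in for whatever inference entry point your system exposes, not any specific vendor API. If any two runs disagree, the system cannot support a deterministic audit trail for that input.

```python
import hashlib

def output_fingerprint(output: str) -> str:
    """Stable fingerprint of one output, for byte-for-byte comparison across runs."""
    return hashlib.sha256(output.encode("utf-8")).hexdigest()

def check_determinism(generate, prompt: str, runs: int = 5) -> bool:
    """Call the (hypothetical) inference entry point repeatedly with identical input.

    Returns True only if every run produces a byte-identical output, which is
    the property a reproducible, auditable decision record depends on.
    """
    fingerprints = {output_fingerprint(generate(prompt)) for _ in range(runs)}
    return len(fingerprints) == 1

# A rule-based stand-in passes; a sampling LLM at nonzero temperature will not.
rule_based = lambda applicant_id: f"DECISION: refer (rule R-14 matched '{applicant_id}')"
assert check_determinism(rule_based, "applicant-7731")
```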
The EU AI Act reaches full enforcement for high-risk AI systems on 2 August 2026. High-risk AI systems, defined in Annex III and covering credit scoring, employment decisions, critical infrastructure management, biometric identification, educational assessment, law enforcement, migration control, and the administration of justice, require a documented risk management system, transparency to affected persons, human oversight mechanisms, and conformity assessment before deployment. Organisations that deploy high-risk AI systems after that date without meeting these requirements are exposed to fines of up to 3% of global annual turnover or €15 million, whichever is higher. Organisations deploying prohibited AI practices face fines of up to 7% of global annual turnover or €35 million, whichever is higher.
This service engineers AI systems that can satisfy these requirements — not by attaching compliance documentation to systems that cannot structurally satisfy them, but by designing systems where auditability, explainability, and deterministic behaviour are architectural properties, not post-hoc additions. The scientific foundation for this approach is the Unified Model Equation framework — the principle that every output of a correctly engineered system has a traceable, verifiable causal path from its inputs. That is not a regulatory aspiration. It is an engineering specification.
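One way to make "traceable, verifiable causal path" concrete, offered as a minimal sketch rather than the framework itself (the `DecisionTrace` and `TraceStep` names are illustrative, not terms from the Unified Model Equation framework): record each processing step together with a hash that commits to its inputs, its output, and the step before it, so the full path from input to output can be re-verified after the fact.

```python
import hashlib
import json
from dataclasses import dataclass, field

def _digest(payload: dict) -> str:
    # Canonical JSON so identical content always hashes identically.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

@dataclass
class TraceStep:
    name: str          # e.g. "normalise-input", "apply-rule-R14"
    inputs: dict       # exactly what this step consumed
    output: dict       # exactly what it produced
    prev_hash: str     # commitment to the preceding step

    @property
    def step_hash(self) -> str:
        return _digest({"name": self.name, "inputs": self.inputs,
                        "output": self.output, "prev": self.prev_hash})

@dataclass
class DecisionTrace:
    system_version: str
    steps: list = field(default_factory=list)

    def record(self, name: str, inputs: dict, output: dict) -> None:
        prev = self.steps[-1].step_hash if self.steps else _digest({"v": self.system_version})
        self.steps.append(TraceStep(name, inputs, output, prev))

    def verify(self) -> bool:
        """Re-derive every hash; a tampered or missing step breaks the chain."""
        prev = _digest({"v": self.system_version})
        for step in self.steps:
            if step.prev_hash != prev:
                return False
            prev = step.step_hash
        return True
```

An explanation for a regulator or an affected person is then a replay of verified steps, not a reconstruction after the fact.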
Design and governance architecture only. System implementation is separate and additional.
Design phase. Implementation timeline depends on system complexity and your team’s capability — outside our control.
Full enforcement for high-risk AI systems. Organisations without compliant architecture in place by this date are exposed to regulatory action.
What Current AI Deployments Get Wrong
The specific structural failures. Each is preventable. Most are invisible until a regulator, a lawyer, or an incident makes them visible.
The enthusiasm for deploying AI is running significantly ahead of the governance and engineering rigour required to deploy it safely and legally in regulated contexts. The failures below are structural — not edge cases, not implementation mistakes, but consequences of the fundamental architecture of probabilistic AI systems applied to domains that require determinism, auditability, and explainability. Each has an engineering solution. The solution requires designing the system correctly from the start, not retrofitting compliance documentation onto a system whose architecture cannot satisfy the underlying requirements.
EU AI Act — What It Requires & When
Specific articles. Specific deadlines. Specific penalties. Not a future consideration.
The EU AI Act applies to organisations that place AI systems on the EU market or put them into service within the EU, and, through its extraterritorial provisions, to organisations outside the EU, including UK organisations, where the output of an AI system is used within the EU. For UK financial services firms operating under DORA with EU counterparties, healthcare organisations using AI systems with EU-origin training data, and technology companies with EU customers, the Act's requirements are not academic. The enforcement timeline below is not provisional: the dates are fixed in enacted law.
| Article | Scope | Requirement | What this means in practice |
|---|---|---|---|
| Art. 9 | High-risk systems (Annex III) | Risk Management System | Documented risk management system covering identification and analysis of known and foreseeable risks, estimation and evaluation of risks, and specification of risk management measures. Must be reviewed and updated throughout the system lifecycle. Cannot be a one-time compliance exercise. |
| Art. 10 | High-risk systems | Data Governance | Training, validation, and testing data must be subject to data governance practices. Data must be examined for possible biases. Data must be relevant, representative, free of errors and complete to the extent possible. Data residency and processing conditions must be documented. Historical data used for training must have its provenance and selection criteria documented (a provenance-manifest sketch follows this table). |
| Art. 11–12 | High-risk systems | Technical Documentation & Logging | Technical file must be prepared before placing the system on the market or into service. It must describe the system in general terms, its design and development process, and its monitoring, functioning, and control. Logging capability must enable monitoring of the system's operation; logs must be automatically generated and retained for the period specified by the provider, at minimum six months in most cases (a logging sketch follows this table). |
| Art. 13 | High-risk systems | Transparency & Information | High-risk AI systems must be accompanied by instructions for use that give deployers: intended purpose, level of accuracy, limitations, human oversight requirements, and the technical measures that support human oversight. Separately, Article 50 requires that natural persons interacting with an AI system be informed of that fact (unless obvious from context). |
| Art. 14 | High-risk systems | Human Oversight | High-risk AI systems must be designed and developed so that they can be effectively overseen by natural persons during use. This must include: ability to understand capabilities and limitations, ability to detect and address anomalies and failures, ability to disregard, override, or reverse outputs, and ability to intervene on the system's operation or interrupt it (see the override sketch after this table). |
| Art. 15 | High-risk systems | Accuracy & Robustness | High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity — for the entire lifecycle. Accuracy metrics must be declared in the instructions for use. The system must be resilient to errors, faults, inconsistencies in data, and to adversarial inputs. Performance must be consistent and not degrade in a way that affects the system’s compliance with these requirements. |
| Art. 26 | Deployers of high-risk systems | Deployer Obligations | Deployers (organisations using high-risk AI systems built by vendors) must: assign human oversight to qualified persons, use systems only in accordance with the instructions for use, monitor the system's operation, keep logs for the periods specified by the provider, inform the provider of serious incidents, and conduct a fundamental rights impact assessment where Article 27 requires one. Vendor compliance does not discharge deployer obligations. |
| Art. 55 | GPAI models with systemic risk | Systemic Risk Management | Providers of general-purpose AI models classified as having systemic risk (based on compute thresholds) must: perform model evaluations including adversarial testing, assess and mitigate systemic risks, report serious incidents, and ensure adequate cybersecurity protections. For organisations that fine-tune or customise GPAI models for deployment: obligations transfer proportionally with the degree of modification. |
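To ground the Article 10 row above: a minimal sketch of a provenance manifest for one dataset. The field names are our assumption of what documented provenance and selection criteria can look like; the Act prescribes the obligations, not this record format.

```python
import hashlib
import json
from pathlib import Path

def dataset_manifest(data_path: str, purpose: str, selection_criteria: str,
                     known_biases: list) -> dict:
    """Illustrative Article 10 provenance record for one training dataset."""
    raw = Path(data_path).read_bytes()
    return {
        "dataset_sha256": hashlib.sha256(raw).hexdigest(),  # pins exactly which data was used
        "source_path": data_path,
        "intended_purpose": purpose,            # training / validation / testing
        "selection_criteria": selection_criteria,
        "known_biases_examined": known_biases,  # the Act's required examination for possible biases
        "residency": "declare processing location and conditions here",
    }

# Example (assumes the file exists):
# manifest = dataset_manifest("applicants_2019_2024.csv", "training",
#                             "all completed applications, 2019-2024",
#                             ["geographic skew toward urban applicants"])
# Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```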
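And to ground the Article 12 and Article 14 rows: a sketch of automatically generated, retention-aware logging with a human override path. The record fields and the `OVERRIDE` action are illustrative assumptions, not terms from the Act, and a production system would add integrity protection and access control on top.

```python
import json
import time
import uuid

RETENTION_DAYS = 183  # a declared retention period of at least six months

def log_event(log_file, system_id: str, action: str, detail: dict) -> dict:
    """Append one automatically generated record; Article 12 logs are written
    by the system itself, not reconstructed afterwards."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "system_id": system_id,
        "action": action,           # e.g. "DECISION", "ANOMALY", "OVERRIDE"
        "detail": detail,
        "retain_until": time.time() + RETENTION_DAYS * 86400,
    }
    log_file.write(json.dumps(record, sort_keys=True) + "\n")
    return record

def human_override(log_file, system_id: str, original: dict, reviewer: str,
                   corrected_output: dict, reason: str) -> dict:
    """Article 14's 'disregard, override, or reverse' path: the overseer's
    intervention replaces the output and is itself logged."""
    return log_event(log_file, system_id, "OVERRIDE", {
        "overrides_event": original["event_id"],
        "reviewer": reviewer,
        "corrected_output": corrected_output,
        "reason": reason,
    })

# Usage:
# with open("ai_system.log", "a") as f:
#     ev = log_event(f, "credit-scorer-v3", "DECISION",
#                    {"applicant": "7731", "output": "refer"})
#     human_override(f, "credit-scorer-v3", ev, "j.smith",
#                    {"output": "approve"}, "income evidence verified")
```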
Engagement Tiers — Scope, Price, Timeline
Four engagement types. Architecture and governance design only. Implementation is always separate.
AI systems engineering engagements range from a rapid compliance assessment to a full enterprise AI governance programme. All produce architecture designs, governance frameworks, and compliance documentation, not implemented systems. System implementation is carried out by your data science and engineering team, or by a technology implementation partner, from the specifications we produce. The separation matters: we are independent of any AI platform vendor, so our recommendations are not shaped by commercial interests in the implementation.
Bilateral Obligations
What both parties commit to. Specific to AI engagements.
Questions that need answers before August 2026
Start with an AI readiness assessment. The earlier it happens, the more options you have.
A 90-minute session in which we review your current AI deployments, classify them against applicable regulatory requirements, and identify where the material gaps are. At the end of the session, you know what you have, what it means under the EU AI Act and applicable sector regulations, and whether the remediation required is documentation-level or architectural. That assessment is what determines which engagement type is appropriate and what the realistic timeline to compliance is.
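As an illustration of what the classification step in that session produces (the Annex III areas listed below are from the Act; the triage function, its inputs, and its output strings are our sketch, not a legal determination):

```python
# Annex III high-risk areas (abridged); classification always needs legal
# review. This triage only flags what to review first.
ANNEX_III_AREAS = {
    "biometric identification",
    "critical infrastructure management",
    "education and vocational training",
    "employment and worker management",
    "access to essential services and credit scoring",
    "law enforcement",
    "migration and border control",
    "administration of justice",
}

def triage(deployment: dict) -> str:
    """Map one deployment to a first-pass risk tier.

    `deployment` is a hypothetical inventory record, e.g.
    {"name": "CV screener", "area": "employment and worker management",
     "affects_natural_persons": True}.
    """
    if deployment["area"] in ANNEX_III_AREAS:
        return "HIGH-RISK: full Art. 9-15 obligations; conformity assessment before use"
    if deployment.get("affects_natural_persons"):
        return "TRANSPARENCY: Art. 50 disclosure duties likely apply"
    return "MINIMAL: document the classification rationale and monitor for scope drift"
```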
The time available before August 2026 is shorter than most organisations recognise once they account for the months required for knowledge elicitation, architecture design, implementation, compliance documentation, legal review, and conformity assessment. The organisations that will be demonstrably compliant in August 2026 are those that started in 2025 or earlier, not those that start in spring 2026.