
Privacy-Enhancing Technologies & Confidential Computing


The conventional approach to data privacy is access control: restrict who can see the data, log who accessed it, and hope that the controls hold. This approach has a fundamental limitation — it makes privacy a governance problem rather than an engineering problem. The data exists in plaintext somewhere. Everyone with access to that location can see it. The controls fail when an insider abuses access, when credentials are stolen, when a cloud provider’s infrastructure is compromised, or when a data breach exposes what was assumed to be protected. Privacy-enhancing technologies take a different approach: they make computation possible without requiring access to the underlying data in plaintext. The privacy guarantee is mathematical, not organisational.

Four technologies define the current frontier of privacy-preserving computation. Federated learning trains machine learning models across distributed datasets without centralising the underlying data — the model comes to the data rather than the data coming to the model. Secure multi-party computation allows multiple parties to jointly compute a function over their combined data without any party seeing the other parties’ inputs. Homomorphic encryption allows computation on encrypted data, producing results that decrypt to the same answer as computation on the plaintext. Differential privacy provides a mathematical privacy guarantee for statistical queries and machine learning by adding calibrated noise that prevents individual record inference without materially affecting aggregate results.

These technologies are not experimental. They are in production in healthcare systems training diagnostic models across hospital networks, in financial services detecting fraud across banks that cannot share transaction records, in advertising measurement systems that compute campaign effectiveness without exposing individual browsing behaviour, and in genomics research that identifies drug targets across patient cohorts that cannot be legally aggregated. The constraint is not availability — the libraries exist, the hardware acceleration is improving, and the regulatory pressure to use these technologies is increasing. The constraint is knowing which technology solves the specific problem, at what performance cost, with what security assumptions, and how to implement it correctly at production scale.

Price Range
£22,000 – £280,000+
Technology selection, privacy architecture design, implementation specification, and security analysis. Production implementation is additional.
Duration
6 – 24 weeks
Architecture design phase. Production implementation adds 3–18 months depending on technology choice and system complexity.
Technologies
Federated Learning · Secure Multi-Party Computation (MPC) · Homomorphic Encryption (FHE/PHE) · Differential Privacy · Trusted Execution Environments · Private Set Intersection · Zero-Knowledge Proofs
Regulatory
UK GDPR Article 25 (privacy by design) · EU AI Act Article 10 (training data governance) · DPDIB (UK Data Protection and Digital Information Bill) · NHS DSPT · FCA data science guidance
Contract
Fixed-price. 50% on signing, 50% on delivery acceptance.
Implementation complexity is consistently underestimated
Privacy-enhancing technologies introduce performance overhead, protocol complexity, and security assumptions that are not present in conventional systems. Homomorphic encryption is 1,000–1,000,000× slower than plaintext computation on current hardware. Federated learning introduces communication overhead, convergence challenges, and new attack surfaces. MPC requires a network protocol between parties that is more complex than any single-party system. These costs are manageable and in many cases falling rapidly — but they must be accurately estimated and architecturally planned for before implementation begins.

Four technologies with different privacy guarantees, different performance characteristics, and different deployment requirements. The correct choice depends on the threat model, the computation required, the parties involved, and the performance constraints of the application.

The most common mistake in PET deployment is selecting a technology based on its name recognition rather than its fit to the specific problem. Federated learning is not a substitute for differential privacy. Homomorphic encryption is not a substitute for secure multi-party computation. Each addresses a different privacy problem, with different security assumptions, at a different performance cost. The technology selection is the most consequential decision in a PET programme and the one most commonly made incorrectly.

Technology 1
Federated Learning
In federated learning, model training is distributed across the data holders. Each participant trains on their local data, computes gradient updates, and sends the updates — not the data — to a central aggregator or to peer participants. The aggregated model improves from all participants’ data without any participant’s raw data leaving their environment. The model comes to the data rather than the data coming to the model. Federated learning is production-ready: Google deploys it for keyboard prediction, Apple for Siri improvements, and a growing number of healthcare networks for diagnostic model training.
Privacy guarantee
Raw data does not leave the data holder’s environment. But gradient updates can leak information about the training data — gradient inversion attacks can reconstruct training examples from updates. Federated learning alone is not a formal privacy guarantee; it requires differential privacy as an additional layer for strong privacy protection.
Performance cost
Communication overhead from sending gradient updates, not the raw data. Convergence requires more rounds than centralised training. For models with large parameter counts, update communication can dominate training time. GPU compute at each participant: equivalent to standard distributed training, no cryptographic overhead.
Security assumptions
The aggregator is assumed to be honest but curious (sees the updates, does not see the data) or, in decentralised FL, no central party sees anything. If the aggregator is malicious, gradient inversion attacks are possible. Secure aggregation protocols using MPC can prevent the aggregator from seeing individual updates.
When federated learning is the right choice
Multiple parties each have training data that cannot be legally or contractually shared, but all would benefit from a model trained on the combined data. The primary computation is ML model training, not general-purpose analytics. The parties trust each other sufficiently to share gradient updates (or secure aggregation is added). Latency requirements are compatible with multi-round training. Examples: NHS hospital network training a diagnostic model, consortium of banks training a fraud detection model, pharmaceutical companies training a drug interaction model on confidential trial data.
When it is not the right choice
The required computation is a query or analytics operation rather than model training. The parties do not trust each other sufficiently even with gradient aggregation. The data holders have heterogeneous data distributions that make federated convergence impractical. The required privacy guarantee is formal and mathematical rather than the practical protection of not sharing raw data. Real-time inference is required over combined data (FL produces a model, not a query interface).
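As a concrete sketch of this training loop, here is a minimal federated averaging (FedAvg) round for a one-parameter linear model. It is pure Python with invented data, intended only to show the mechanics (local gradients leave the clients; raw records do not); real deployments use frameworks such as Flower or TensorFlow Federated.

```python
def local_gradient(w, data):
    """One least-squares gradient step on a client's local (x, y) pairs.
    Only this scalar update is ever sent off-site, never the data."""
    g = 0.0
    for x, y in data:
        g += 2 * (w * x - y) * x
    return g / len(data)

def fedavg(clients, rounds=200, lr=0.01):
    """Federated averaging for a 1-D linear model y = w * x.
    Each round, every client computes a gradient on its own data;
    the server averages the updates weighted by sample count."""
    w = 0.0
    total = sum(len(d) for d in clients)
    for _ in range(rounds):
        agg = sum(len(d) * local_gradient(w, d) for d in clients) / total
        w -= lr * agg
    return w

# Three hypothetical sites whose data all follows y = 3x.
clients = [[(x, 3 * x) for x in range(1, 5)],
           [(x, 3 * x) for x in range(5, 9)],
           [(x, 3 * x) for x in range(9, 13)]]
print(round(fedavg(clients), 2))  # converges towards w = 3.0
```

The weighted average makes the update identical to a centralised gradient step over the pooled data, which is exactly why the model quality can match centralised training when the data is IID.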
Technology 2
Secure Multi-Party Computation
Secure multi-party computation (MPC) allows two or more parties to jointly compute a function over their combined data such that no party learns anything about the other parties’ inputs beyond what is revealed by the function’s output. The computation is performed by a cryptographic protocol — typically using secret sharing, garbled circuits, or oblivious transfer — that ensures the inputs remain private even from the other parties. MPC has a formal security proof under well-defined adversary models: the result is a mathematical guarantee, not an organisational policy. Production deployments include privacy-preserving advertising measurement (Apple, Meta), secure genome-wide association studies, and inter-bank financial settlement.
Privacy guarantee
Formally proved under the semi-honest (honest-but-curious) or malicious adversary model, depending on the protocol. The security guarantee is cryptographic: no computationally bounded adversary can learn more about a party’s input than what is logically implied by the output, even if they deviate from the protocol.
Performance cost
Significant: 100×–10,000× slower than plaintext computation depending on the circuit complexity, adversary model, and network latency. Recent progress with SPDZ, MOTION, and MP-SPDZ frameworks has made practical MPC feasible for moderate-scale computations. Not suitable for general-purpose high-throughput processing on current hardware.
Security assumptions
Semi-honest protocols assume parties follow the protocol but try to learn information from their view of the execution. Malicious protocols provide security even when parties deviate. The output itself reveals information — MPC prevents learning more than the output implies; it does not prevent inference from the output.
When MPC is the right choice
Two or more parties need to compute a specific function over their joint data without revealing their inputs to each other, and the computation is bounded in complexity (a specific query, a specific model inference, a specific analytics operation rather than general-purpose data access). The parties cannot trust each other at all — MPC provides security even against a malicious party in some protocols. Examples: two banks jointly computing customer overlap without revealing their customer lists, an insurer and a hospital jointly computing risk scores without revealing either’s underlying data, private set intersection for identity matching.
When it is not the right choice
The computation is high-throughput or requires real-time latency that the protocol overhead cannot meet. More than 3–5 parties are involved (MPC communication complexity scales with party count). The function to be computed is not well-defined in advance — MPC protocols are optimised for specific functions. The performance budget cannot accommodate the overhead of current MPC frameworks for the required computation complexity.
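The core mechanism behind secret-sharing MPC can be illustrated in a few lines. This is an intuition sketch only (no authenticated channels, no malicious security): three parties compute their joint sum while any individual share remains uniformly random and reveals nothing about the input it came from.

```python
import secrets

P = 2**61 - 1  # public modulus; all arithmetic is done mod P

def share(value, n_parties):
    """Split a value into n additive shares that sum to it mod P.
    Any n-1 of the shares are uniformly random."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def mpc_sum(private_inputs):
    """Each party shares its input with the others; every party adds
    the shares it holds locally; the partial sums are then combined.
    No party ever sees another party's raw input."""
    n = len(private_inputs)
    all_shares = [share(v, n) for v in private_inputs]
    # Party j holds one share of every input and sums them locally.
    partials = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]
    return sum(partials) % P

# Three hypothetical parties with private salary figures.
print(mpc_sum([120_000, 85_000, 97_000]))  # 302000
```

Real protocols build multiplication, comparison, and malicious security on top of this additive base, which is where the 100×–10,000× overhead comes from.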
Technology 3
Homomorphic Encryption
Homomorphic encryption (HE) allows computation on encrypted data. A client encrypts their data, sends the ciphertext to a server, the server computes on the ciphertext using homomorphic operations, and returns an encrypted result that the client can decrypt to get the answer. The server never has access to the plaintext. Fully homomorphic encryption (FHE) supports arbitrary computation on encrypted data. Partially homomorphic encryption (PHE) supports limited operation types (addition only, or multiplication only). Levelled HE supports a bounded circuit depth. FHE is no longer just theoretical: Microsoft SEAL, OpenFHE, and TFHE are production-ready libraries deployed in cloud services and financial applications.
Privacy guarantee
The server learns nothing about the plaintext data. The guarantee rests on the hardness of the underlying lattice problems (Learning With Errors and its ring variant, Ring-LWE), which are also believed to be quantum-resistant — making FHE a technology that complements post-quantum cryptography rather than conflicting with it.
Performance cost
Severe for FHE: 1,000×–1,000,000× slower than plaintext computation for current FHE schemes on general-purpose hardware. Hardware acceleration (CPU vector instructions via Intel HEXL, GPU libraries such as cuFHE, and emerging dedicated ASICs) is narrowing this gap rapidly. Practical FHE today is constrained to specific function classes: simple arithmetic operations, neural network inference with constrained architectures, and database queries with specific access patterns.
Practical use today
CKKS scheme for approximate arithmetic (ideal for ML inference on floating-point data). BFV/BGV for exact integer arithmetic. TFHE for Boolean circuits with fast bootstrapping. PHE for specific narrow applications: Paillier encryption for addition-only use cases (vote tallying, private aggregation, sum statistics).
When HE is the right choice
A client wants to outsource computation to an untrusted server without revealing the data being processed — cloud ML inference on sensitive medical or financial data, encrypted database queries, private information retrieval. The computation is arithmetic and bounded in complexity. The client-server model fits the deployment (client encrypts, server computes, client decrypts). The “server sees nothing” guarantee is required rather than the “parties see nothing about each other” guarantee of MPC. Examples: a hospital running ML inference on a cloud provider’s model without revealing patient data, private database queries where the query reveals nothing about what was searched.
When it is not the right choice
The computation requires non-arithmetic operations at scale (complex branching, non-polynomial activations) that make FHE circuits impractically large. Latency requirements are incompatible with FHE overhead on current hardware. The deployment is multi-party collaborative computation (MPC is typically more efficient for this). The use case requires general-purpose encrypted data processing rather than a specific bounded function.
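To make the homomorphism concrete, here is textbook Paillier, a partially homomorphic (addition-only) scheme, in plain Python. The key size is a toy for illustration; production use requires roughly 2048-bit moduli and a vetted library such as those named above.

```python
import math
import secrets

# Textbook Paillier (PHE): multiplying ciphertexts adds plaintexts.
p, q = 1_000_003, 1_000_033          # toy primes, illustration only
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)         # Carmichael function of n
mu = pow(lam, -1, n)                 # valid because we fix g = n + 1

def encrypt(m):
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:      # r must be invertible mod n
            return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n   # L(x) = (x - 1) / n
    return (L * mu) % n

a, b = encrypt(17), encrypt(25)
# The server can compute the encrypted sum without ever seeing
# 17, 25, or the result.
print(decrypt((a * b) % n2))  # 42
```

This is exactly the structure behind the Paillier use cases listed above (vote tallying, private aggregation, sum statistics): the server only ever multiplies ciphertexts.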
Technology 4
Differential Privacy
Differential privacy provides a formal mathematical privacy guarantee for statistical queries and machine learning. A mechanism is (ε, δ)-differentially private if, for any two datasets differing in one record, the probability of any outcome changes by a factor of at most e^ε (plus an additive slack of δ). Intuitively: the presence or absence of any single individual’s record in the dataset has a bounded effect on the output, making it impossible to determine from the output whether any specific individual was in the dataset. It is implemented by adding calibrated noise to query results or gradient updates during training. Apple, Google, Microsoft, and Meta all deploy differential privacy in production systems at scale.
Privacy guarantee
Formal mathematical guarantee: the privacy loss ε quantifies the worst-case information an adversary can learn about any individual’s record from the output. This is the strongest formal privacy guarantee available for statistical computation, and it composes — the total privacy loss of multiple differentially private queries can be tracked and bounded.
Performance cost
Accuracy reduction proportional to the privacy parameter ε: lower ε (stronger privacy) requires more noise and reduces accuracy. The privacy-utility trade-off must be explicitly quantified and accepted. For ML training with DP-SGD, tight ε values (ε < 1) typically reduce model accuracy by 2–10 percentage points on classification benchmarks, though recent advances in amplification and accounting are reducing this gap.
Deployment context
DP-SGD for differentially private ML training. Local DP for data collection from individuals (each individual’s data is randomised before submission). Central DP for query answering on a trusted curator’s dataset. Rényi DP and Gaussian DP for tighter accounting in ML training. PATE (Private Aggregation of Teachers’ Ensembles) for private knowledge transfer.
When differential privacy is the right choice
A formal, quantifiable privacy guarantee is required for regulatory or legal purposes — not a best-efforts approach but a mathematical bound on privacy loss. The computation is statistical (aggregates, summaries, ML training) rather than individual record retrieval. The privacy-utility trade-off is acceptable for the application’s accuracy requirements. Membership inference risk is a specific concern (DP directly addresses the attack surface that membership inference exploits). Examples: health data statistics for research publication, ML model training on personal data with formal privacy guarantees, census data release, federated learning with formal gradient privacy guarantees, survey data with local randomisation.
When it is not the right choice
The application requires individual record retrieval — DP is a property of statistical computation and cannot protect individual queries in a way that preserves query accuracy. The accuracy loss from the required ε is unacceptable for the application. The data is not statistical in nature. The threat model requires protection against a curator who is themselves malicious — DP in the central model requires trusting the curator.
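As a minimal illustration of the mechanism and the privacy-utility trade-off it imposes, here is the classic Laplace mechanism for a counting query. This is a sketch, not a production implementation (real systems must also defend against floating-point attacks on DP noise generation).

```python
import math
import random

def laplace_count(true_count, epsilon, rng=random.Random()):
    """Release a counting query under ε-DP via the Laplace mechanism.
    A count has sensitivity 1: adding or removing one person changes
    it by at most 1, so noise is drawn from Laplace(1/ε)."""
    scale = 1.0 / epsilon
    u = rng.random() - 0.5                     # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Smaller ε (stronger privacy) means more noise and less accuracy.
rng = random.Random(0)
for eps in (0.1, 1.0, 10.0):
    errs = [abs(laplace_count(1000, eps, rng) - 1000) for _ in range(5000)]
    print(f"eps={eps:>4}: mean absolute error ~ {sum(errs) / len(errs):.2f}")
```

The mean absolute error scales as 1/ε, which is the quantified trade-off referred to above: ε=0.1 costs roughly 100× the accuracy of ε=10 on the same query.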

Eight implementation failures specific to privacy-enhancing technologies. Each one provides the guarantee on paper while failing to deliver it in practice.

PET implementations have a unique failure mode: the privacy guarantee is real, but it applies to the protocol in isolation. The surrounding system — the data pipeline that feeds the protocol, the output processing that follows it, the auxiliary information available to an adversary, the implementation choices that affect the security assumptions — can invalidate the guarantee without touching the cryptography. A correctly implemented FHE scheme in a system that logs the plaintexts before encryption provides no privacy. A correctly implemented federated learning protocol with a gradient inversion-vulnerable update scheme provides no privacy against the aggregator. The cryptographic core can be correct while the system fails.

01
Federated learning gradient updates are not protected against inversion attacks
Federated learning’s promise is that raw data does not leave participants. What does leave is gradient updates — the parameter changes computed from the local training data. Gradient inversion attacks, first demonstrated in 2019 and improved substantially since, can reconstruct training examples from gradient updates with high fidelity for image classifiers, text classifiers, and tabular data models. An aggregator who collects individual gradient updates — rather than using secure aggregation — can reconstruct the training data of individual participants. Deployments that implement federated learning without secure aggregation or without differential privacy applied to the gradient updates provide significantly weaker privacy than their architects assumed.
What this looks like in a healthcare deployment
A hospital network deploys federated learning for a diagnostic model, believing that patient data never leaves the hospitals. The central aggregator receives individual per-hospital gradient updates. A research paper published 18 months after deployment demonstrates that the specific model architecture used is vulnerable to gradient inversion, and that individual patient chest X-ray images can be reconstructed from the gradient updates with sufficient quality for clinical recognition. The hospitals’ belief that patient data had not left their systems was correct for the raw data. It was incorrect for the gradient updates that represented it.
Architecture approach that prevents this
Secure aggregation: using a cryptographic protocol (typically based on secret sharing or homomorphic encryption) that allows the aggregator to compute the sum of gradient updates without seeing any individual participant’s update. The aggregator sees only the aggregate, not the contributors. Combined with DP-SGD applied at each participant to add calibrated noise before the update is computed, this provides both practical and formal privacy against gradient inversion at the aggregation layer.
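One simplified flavour of secure aggregation is pairwise masking, sketched below. It omits the key-agreement and dropout-recovery machinery of real protocols (such as the Bonawitz et al. design used in production federated learning): each pair of participants shares a random mask, one adds it and the other subtracts it, so the aggregator sees only masked values while the masks cancel in the sum.

```python
import secrets

P = 2**61 - 1  # shared public modulus; updates are quantised integers mod P

def masked_updates(updates):
    """Pairwise-mask secure aggregation sketch. The aggregator receives
    only the returned masked values; no individual update is visible,
    but the pairwise masks cancel when the values are summed."""
    n = len(updates)
    masked = [u % P for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = secrets.randbelow(P)       # secret shared by parties i and j
            masked[i] = (masked[i] + m) % P
            masked[j] = (masked[j] - m) % P
    return masked

updates = [14, 3, 27, 8]        # e.g. quantised gradient components
masked = masked_updates(updates)
aggregate = sum(masked) % P     # what the aggregator computes
print(aggregate)                # 52; the individual updates stay hidden
```

Because each masked value is statistically independent of the underlying update, gradient inversion against any single participant's contribution is no longer possible at the aggregation layer.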
02
Differential privacy budget is not tracked across queries or training runs
Differential privacy is a composable guarantee: each query or training step consumes a privacy budget, and the total privacy loss is the sum (or a tighter composition bound) across all queries. A system that provides (ε=1)-DP per query but executes 1,000 queries without budget tracking has consumed 1,000 units of privacy budget, not 1. The composition theorem means that answering unlimited queries with DP noise eventually provides no privacy guarantee — the privacy loss accumulates without bound. Many differential privacy deployments set an ε for individual queries and never track the cumulative budget, making the per-query guarantee meaningless as a system-level property.
What this looks like in practice
A government statistical agency deploys differentially private query answering with ε=0.5 per query, believing they are providing strong privacy. Researchers make 2,000 queries over 18 months, each at ε=0.5. The cumulative privacy loss under basic composition is ε=1,000. Advanced composition gives a tighter bound, scaling roughly with the square root of the number of queries, but the total is still orders of magnitude beyond the per-query ε=0.5 the sharing arrangement assumed. The privacy guarantee that justified the data sharing arrangement has long since been exhausted. The agency has been providing effectively no differential privacy protection for the majority of the queries made against the dataset.
Architecture approach that prevents this
Privacy accountant implementation as a core infrastructure component: every differentially private operation consumes from a global privacy budget that is tracked and reported. The privacy accountant uses Rényi DP accounting or Gaussian DP accounting for tighter bounds than basic composition. The system enforces a hard budget limit — additional queries are refused when the budget is exhausted — or the budget is reset on a defined cycle with documented justification. The privacy budget and its current consumption are reported as operational metrics alongside the system’s functional metrics.
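The enforcement pattern can be sketched in a few lines. This toy accountant uses basic composition (total ε is the sum of per-query ε); production accountants use Rényi or Gaussian DP accounting for tighter bounds, as noted above, but the key property is the same: queries are refused once the budget is spent.

```python
class PrivacyAccountant:
    """Minimal global privacy-budget tracker using basic composition.
    Illustrative sketch: the point is hard enforcement, not tightness."""

    def __init__(self, total_epsilon):
        self.total = total_epsilon
        self.spent = 0.0

    def charge(self, epsilon):
        """Consume budget for one DP operation, or refuse it."""
        if self.spent + epsilon > self.total:
            raise RuntimeError(
                f"privacy budget exhausted: spent {self.spent:.2f} "
                f"of {self.total:.2f}"
            )
        self.spent += epsilon
        return self.remaining()

    def remaining(self):
        return self.total - self.spent

acct = PrivacyAccountant(total_epsilon=3.0)
for _ in range(6):
    acct.charge(0.5)       # six ε=0.5 queries consume the whole budget
print(acct.remaining())    # 0.0; a seventh charge would raise
```

Reporting `spent` and `remaining` as operational metrics alongside functional metrics is what turns the per-query guarantee into a system-level property.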
03
Homomorphic encryption is applied correctly but the surrounding system leaks plaintext
The formal guarantee of homomorphic encryption applies to the computation: the server performing the computation has access to ciphertext, not plaintext. The guarantee does not extend to the data pipeline surrounding the computation. If the client application logs the plaintext before encryption, the plaintext is in the logs. If the client sends the plaintext to an error reporting service when the encryption fails, the plaintext leaves the protected environment. If the decrypted result is cached in an unencrypted store for performance, the results are in the cache. If the key management system that holds the decryption keys is accessible to the same parties as the ciphertext, the encryption provides no protection. The HE core can be cryptographically correct while the surrounding system fails to protect the plaintext.
The most common plaintext leakage point
A financial services company deploys FHE for cloud ML inference, ensuring that the ML model provider never sees plaintext transaction data. The client application sends transaction features as ciphertext to the cloud. The application’s error logging, however, logs all input features in plaintext for debugging. The logs are stored in the same cloud environment as the inference service. The ML model provider, whose compute environment the client is using, has access to the log storage through the shared cloud tenancy. The FHE prevents the ML provider from seeing plaintext during inference. The logging exposes it through a side channel that the FHE design did not consider.
Architecture approach that prevents this
Full data flow analysis before deployment: every path through which plaintext can exit the protected environment is identified and either eliminated or explicitly accepted as within the threat model. The threat model for the FHE deployment specifies which parties are trusted and what trust level each has. Every component in the data pipeline is assessed against the threat model — not just the cryptographic components. Log sanitisation and encrypted log storage are infrastructure requirements, not optional optimisations.
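One concrete instance of the log-sanitisation requirement is a logging filter at the client that redacts plaintext payloads before any record reaches shared storage. The sketch below is hypothetical: the field names (`account`, `amount`, `features`) and the regex are illustrative assumptions, not a complete sanitisation policy.

```python
import logging
import re

# Illustrative pattern: redact anything that looks like a plaintext
# feature payload. Real policies should be allow-list based.
SENSITIVE = re.compile(r'\b(amount|account|features)=\S+')

class RedactingFilter(logging.Filter):
    """Rewrites log records so sensitive values never reach log storage."""
    def filter(self, record):
        record.msg = SENSITIVE.sub(r'\1=[REDACTED]', str(record.msg))
        return True

logger = logging.getLogger("inference-client")
handler = logging.StreamHandler()
handler.addFilter(RedactingFilter())
logger.addHandler(handler)

# The plaintext below would otherwise land, verbatim, next to the
# FHE inference service in the failure scenario described above.
logger.error("encryption failed for account=GB29NWBK601613 amount=4200.50")
# logged as: encryption failed for account=[REDACTED] amount=[REDACTED]
```

The broader point stands regardless of mechanism: the sanitiser is part of the privacy architecture and must be assessed against the same threat model as the cryptography.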
04
MPC protocol is secure against the specified adversary model but deployed in a context where a stronger adversary applies
MPC protocols are proved secure against a specific adversary model: semi-honest (parties follow the protocol but try to learn information from their view), or malicious (parties may deviate from the protocol arbitrarily). Semi-honest protocols are significantly more efficient than malicious protocols and are often deployed when the deployment context actually involves malicious adversaries. An organisation that deploys a semi-honest MPC protocol between two parties who have incentive to cheat — competitors computing a joint function — has deployed a protocol whose security proof does not apply to its deployment context. A malicious party can learn information from a semi-honest protocol that they could not learn from a correctly-implemented malicious protocol.
When this matters
Two competing financial institutions deploy a semi-honest MPC protocol to compute portfolio overlap for systemic risk assessment. The protocol is secure only if both parties follow it. Institution A, which learns the result one protocol round before Institution B, aborts the final round and withholds its last message — a deviation the semi-honest security proof does not cover, leaving A with the output and B with nothing. A malicious-secure protocol with fairness or abort detection would have prevented this. The semi-honest protocol functioned correctly within its security proof; the deployment context assumed a weaker adversary than was actually present.
Architecture approach that prevents this
Threat model specification before protocol selection: the adversary model for the MPC deployment must be established from the actual relationship between the parties, not assumed to be semi-honest because semi-honest protocols are easier to deploy. If the parties have any incentive to deviate from the protocol, the malicious adversary model applies and a malicious-secure protocol is required. The performance cost of the malicious protocol is the cost of operating in the actual threat environment. If malicious-secure performance is unacceptable, the deployment should be redesigned to achieve mutual trust before using a semi-honest protocol.
05
Federated learning convergence fails on non-IID data without diagnosis or remediation
Standard federated averaging (FedAvg) assumes that the local datasets at each participant are independently and identically distributed samples from the global data distribution. In almost every real deployment, they are not. Hospital A specialises in cardiology and its data skews heavily towards cardiac diagnoses. Hospital B is a general hospital. Hospital C serves a specific demographic with disease prevalence patterns different from the general population. Non-IID data causes FedAvg to converge slowly, to a worse global optimum, and sometimes not to converge meaningfully at all. The federated model can be significantly worse than a model trained at any single participant, providing neither the privacy benefit of federation nor the capability benefit. Non-IID data is not an edge case — it is the norm in every real-world federated deployment.
What non-IID failure looks like operationally
A diagnostic AI programme deploys federated learning across 8 hospital sites. After 200 rounds of federated training, the global model achieves 71% accuracy on the test set. The local models trained at individual sites achieve 84–91% accuracy on their local test data. The federated model is materially worse than any individual site model. The programme has implemented the privacy protection of federation at the cost of making the model worse for every participant than if they had not participated. The convergence failure was not diagnosed during the design phase and was not discovered until the production evaluation against local models.
Architecture approach that prevents this
Data heterogeneity analysis before deploying federated learning: measuring the divergence between local data distributions across participants and quantifying the expected convergence impact. For deployments with high heterogeneity, FedProx, SCAFFOLD, or personalised federated learning approaches that tolerate non-IID data are specified. Convergence benchmarking against the centralised training baseline is conducted before production deployment to verify the federated model meets the minimum acceptable performance threshold.
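A cheap first diagnostic, sketched here under the simplifying assumption that label skew is the dominant form of heterogeneity, is to measure each site's divergence from the pooled label distribution before committing to FedAvg. The hospital data below is invented.

```python
from collections import Counter

def label_distribution(labels):
    counts = Counter(labels)
    total = len(labels)
    return {k: v / total for k, v in counts.items()}

def tv_distance(p, q):
    """Total variation distance between two label distributions:
    0 means identical, 1 means disjoint support."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys)

def heterogeneity_report(site_labels):
    """Divergence of each site's label distribution from the pooled one."""
    pooled = label_distribution([l for site in site_labels for l in site])
    return [tv_distance(label_distribution(site), pooled)
            for site in site_labels]

# Hypothetical sites: cardiology-heavy, general-skewed, balanced.
sites = [["cardiac"] * 90 + ["other"] * 10,
         ["cardiac"] * 30 + ["other"] * 70,
         ["cardiac"] * 50 + ["other"] * 50]
print([round(d, 3) for d in heterogeneity_report(sites)])
```

High per-site divergence is the signal to benchmark FedProx, SCAFFOLD, or personalised approaches against the centralised baseline before production deployment.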
06
The privacy guarantee is formally correct but the output enables re-identification through auxiliary information
Privacy-enhancing technologies protect against specific inference attacks by an adversary with no auxiliary information. Real adversaries have auxiliary information. A differentially private aggregate that does not individually identify anyone can, when combined with publicly available data about the individuals in the dataset, enable re-identification. A federated model that does not reveal training data can, when queried with known examples, confirm their membership. An MPC result that reveals only the intended output can, when combined with the querying party’s prior knowledge of the other parties’ data, reveal more than intended. The formal privacy guarantee is a bound on the information revealed by the protocol in isolation. It is not a bound on the information inferable from the protocol output combined with auxiliary information.
The Netflix re-identification precedent
Netflix released an anonymised movie rating dataset for its recommendation prize, with direct identifiers removed and some records perturbed — de-identification considered adequate at the time. Narayanan and Shmatikov demonstrated that combining the released dataset with public IMDb ratings allowed re-identification of subscribers from only a handful of ratings and approximate dates. The anonymisation was reasonable against an adversary with no auxiliary information. The actual adversary had auxiliary information, invalidating the privacy guarantee. Every PET deployment must model the adversary’s auxiliary information, not just the adversary’s access to the protected computation.
Architecture approach that prevents this
Auxiliary information threat modelling as a mandatory component of every PET design: identification of what auxiliary information a realistic adversary would have, and assessment of what can be inferred from the protocol output combined with that auxiliary information. For public data releases, re-identification risk assessment against the specific auxiliary datasets available to the expected adversary. Privacy guarantees are stated with explicit scope: “assuming an adversary with no auxiliary information beyond X, this protocol provides (ε, δ)-DP”.
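The linkage mechanics behind such re-identification are simple enough to sketch: a toy join between an "anonymised" release and a public auxiliary dataset over shared quasi-identifiers. All records below are invented.

```python
# An "anonymised" release keeps quasi-identifiers (title, rating date);
# a public auxiliary dataset shares some of them.
anonymised = [
    {"id": "u1", "ratings": (("Film A", "2024-01-03"),
                             ("Film B", "2024-02-11"))},
    {"id": "u2", "ratings": (("Film A", "2024-01-05"),
                             ("Film C", "2024-03-02"))},
]
public_profiles = {
    # the same rating, visible on a public site under a real identity
    ("Film A", "2024-01-03"): "alice@example.com",
}

def reidentify(anonymised, public_profiles):
    """Match released records to public identities via any shared
    quasi-identifier; one overlap is enough to break the pseudonym."""
    hits = {}
    for record in anonymised:
        for rating in record["ratings"]:
            if rating in public_profiles:
                hits[record["id"]] = public_profiles[rating]
    return hits

print(reidentify(anonymised, public_profiles))  # {'u1': 'alice@example.com'}
```

The assessment step described above is essentially this join run at scale against every auxiliary dataset the expected adversary plausibly holds.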
07
Trusted execution environments are treated as unconditionally secure
Trusted Execution Environments (TEEs) — Intel SGX, AMD SEV, ARM TrustZone — provide hardware-based isolation for computation, preventing the host OS, hypervisor, and cloud provider from accessing the data being processed inside the enclave. They are a powerful component of confidential computing. They are not unconditionally secure. SGX has been compromised by multiple side-channel attacks (Spectre, Meltdown, SGAxe, CrossTalk, and others) that extract secrets from enclaves without requiring OS-level access. TEE security depends on maintaining up-to-date microcode, correctly implementing the attestation protocol, not relying on the TEE for more than it provides, and carefully managing the enclave’s attack surface. Deployments that treat TEE isolation as equivalent to perfect cryptographic isolation have a security model that does not hold under the demonstrated attack surface.
The SGX side-channel attack surface
A confidential computing deployment uses Intel SGX to process sensitive healthcare records on untrusted cloud infrastructure. The deployment assumes that data inside the SGX enclave is inaccessible to the cloud provider. SGAxe (published 2020) demonstrated that SGX’s attestation keys can be extracted using microarchitectural attacks, allowing an adversary with physical access to the hardware or with root access to the host to forge attestation and extract enclave secrets. The security claim was correct for an adversary without microarchitectural access. It was incorrect for a cloud provider with physical access to the hardware.
Architecture approach that prevents this
TEE security is treated as one layer in a defence-in-depth architecture, not as a complete solution. The threat model explicitly addresses side-channel attacks and specifies which attack classes the TEE is assumed to resist and which it is not. Microcode and firmware maintenance is treated as a security-critical operational requirement with the same urgency as OS patching. For deployments where TEE side-channel attacks are within the threat model, the TEE is combined with cryptographic PETs rather than substituted for them.
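The attestation discipline described above can be made concrete. The Python sketch below gates secret provisioning on a pinned enclave measurement and a minimum platform TCB level, so a stale-microcode platform fails closed. The report fields, allowlist values, and TCB threshold are illustrative simplifications, not the actual Intel DCAP structures.

```python
from dataclasses import dataclass

# Hypothetical, simplified attestation report. Field names are
# illustrative, not the real Intel SGX DCAP quote structures.
@dataclass(frozen=True)
class AttestationReport:
    mrenclave: bytes       # hash of the enclave's initial code and data
    tcb_level: int         # platform TCB / microcode version
    signature_valid: bool  # assume the quote signature was already verified

# Pinned measurements of approved enclave builds (illustrative values).
ALLOWED_MEASUREMENTS = {bytes.fromhex("ab" * 32)}
MIN_TCB_LEVEL = 7  # reject platforms running stale microcode

def release_secret_to_enclave(report: AttestationReport) -> bool:
    """Gate secret provisioning on attestation, not mere enclave presence."""
    if not report.signature_valid:
        return False  # quote not signed by a trusted attestation key
    if report.mrenclave not in ALLOWED_MEASUREMENTS:
        return False  # unknown or tampered enclave build
    if report.tcb_level < MIN_TCB_LEVEL:
        return False  # vulnerable microcode: SGAxe-class extraction risk
    return True
```

The fail-closed TCB check is the operational expression of “microcode maintenance with the same urgency as OS patching”: when the platform falls behind, secrets stop flowing rather than flowing to a compromised enclave.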
08
Privacy-enhancing technologies are deployed without verifiable auditability or continuous assurance
A system may be architected with correct privacy-enhancing technologies at deployment time, yet lack any mechanism to prove that those guarantees remain intact over time. Cryptographic protocols, differential privacy mechanisms, federated learning pipelines, and secure enclaves all rely on correct implementation, correct configuration, and correct operation. Drift occurs: configurations change, dependencies update, logging expands, new integrations are added, and operational shortcuts are introduced under performance or business pressure. Without continuous verification, the system can silently degrade from a formally private system into one that leaks sensitive information. Privacy guarantees are not static properties; they are operational properties that must be continuously enforced and verified.
What silent degradation looks like in production
A large-scale analytics platform deploys differential privacy with a properly configured privacy accountant and strict query limits. Six months later, a new engineering team introduces a parallel analytics endpoint for internal use, bypassing the privacy accountant to improve query latency. Over time, this endpoint becomes widely used across the organisation. No audit mechanism detects that queries are being executed outside the DP framework. The system continues to report compliance with differential privacy guarantees, while in reality a significant portion of queries are executed without any privacy protection. The failure is not in the original design, but in the absence of continuous assurance and enforceable auditability.
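The missing enforcement point in this failure can be sketched in a few lines. The Python below (names illustrative; real systems use tighter accountants such as RDP or zCDP rather than basic sequential composition) makes the accountant the only path to query execution, so a bypassing endpoint has nothing to call:

```python
class BudgetExhausted(Exception):
    """Raised when answering a query would exceed the total privacy budget."""

class PrivacyAccountant:
    """Minimal accountant using basic sequential composition: the epsilons
    of all answered queries sum toward a fixed total budget."""

    def __init__(self, epsilon_budget: float):
        self.budget = epsilon_budget
        self.spent = 0.0

    def charge(self, epsilon: float) -> None:
        if self.spent + epsilon > self.budget:
            raise BudgetExhausted(f"{self.spent + epsilon:.2f} > {self.budget:.2f}")
        self.spent += epsilon

def run_query(accountant: PrivacyAccountant, epsilon: float, query_fn):
    """Enforcement point: a query that has not been charged is never run.
    Every analytics path must route through here; a parallel endpoint that
    skips it is exactly the silent degradation described above."""
    accountant.charge(epsilon)
    return query_fn()
```

The architectural point is that `run_query` is the choke point: making it the sole interface to the data store, and auditing that no other interface exists, is what turns the DP guarantee from a deployment-time claim into an operational property.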
Architecture approach that prevents this
Cryptographic and system-level auditability as a first-class requirement: every privacy-sensitive operation is logged in a tamper-evident audit trail (e.g. append-only logs with cryptographic integrity guarantees). Continuous verification systems validate that all data flows, queries, and computations pass through the enforced privacy controls. Remote attestation, reproducible builds, and policy enforcement layers ensure that only approved code paths can process sensitive data. Privacy guarantees are continuously monitored as operational metrics, with automated alerts and hard enforcement when deviations occur. Independent audit capability — internal or external — is built into the system design, ensuring that privacy claims can be verified at any point in time, not just asserted at deployment.
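One common realisation of the append-only, tamper-evident audit trail is a hash chain, in which each entry commits to the digest of its predecessor, so any retroactive edit or deletion breaks verification from that point onward. A minimal Python sketch (class and field names are illustrative):

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry commits to the previous entry's
    digest via a SHA-256 hash chain, making tampering detectable."""

    GENESIS = b"\x00" * 32

    def __init__(self):
        self.entries = []          # list of (record, digest) pairs
        self._head = self.GENESIS  # digest of the most recent entry

    def append(self, record: dict) -> None:
        payload = json.dumps(record, sort_keys=True).encode()
        digest = hashlib.sha256(self._head + payload).digest()
        self.entries.append((record, digest))
        self._head = digest

    def verify(self) -> bool:
        """Recompute the chain from genesis; any edited record fails."""
        head = self.GENESIS
        for record, digest in self.entries:
            payload = json.dumps(record, sort_keys=True).encode()
            if hashlib.sha256(head + payload).digest() != digest:
                return False
            head = digest
        return True
```

A production deployment would additionally anchor the head digest somewhere the log operator cannot rewrite (a transparency log, a counterparty, or a signed checkpoint), since a hash chain alone only detects tampering by parties who cannot recompute the whole chain.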

Three engagement types. Technology selection and feasibility, single-system PET architecture, and multi-party collaborative privacy programme.

All three engagements produce architecture designs and implementation specifications. Production implementation — writing the application code, configuring the PET libraries, deploying the infrastructure — is performed by your engineering team or an implementation partner from our specifications. The technology selection engagement can be conducted independently as a precursor to architecture design, or the two phases can be combined into a single engagement for organisations where the technology choice is already clear.

Engagement Type 1
PET Technology Selection & Feasibility Assessment
For organisations that know they need privacy-preserving computation but do not yet know which technology, or that have chosen a technology but have not validated whether it is technically and economically feasible for their specific use case. The technology selection is the highest-leverage decision in a PET programme — the wrong choice wastes 6–18 months and produces a system that either provides inadequate privacy or impractical performance. This engagement prevents that outcome before any implementation investment is made. Can proceed directly to Type 2 or Type 3 architecture design at the conclusion.
£22,000
Fixed · VAT excl.
6 weeks
Type 1 fee credited in full against Type 2 or Type 3 if engaged within 90 days of delivery.
Problem Analysis
Privacy problem specification: what data, from which parties, must remain private from which other parties, under what adversary capability — specified formally enough to map to a PET threat model
Computation specification: what function must be computed over the protected data — the specific operation, its inputs, its outputs, and the information the output reveals about the inputs
Performance requirements: latency, throughput, and computational budget constraints that any viable technology must satisfy
Regulatory mapping: UK GDPR Article 25 (privacy by design), Article 32 (appropriate technical measures), relevant sector-specific requirements (NHS DSPT, FCA, ICO guidance) and how PET deployment satisfies them
Auxiliary information threat model: what information a realistic adversary would have beyond the protocol output, and how it constrains the viable technology choices
Technology Evaluation
Candidate technology assessment: each of the four core PETs (and TEEs and ZKPs where relevant) evaluated against the specific privacy problem, computation, and performance requirements
Performance benchmarking: for each candidate technology, an estimated performance profile on the specified computation using representative hardware — using published benchmarks for standard operations rather than claimed performance from vendor marketing
Privacy guarantee comparison: the privacy guarantee each technology provides for the specific problem, stated formally with the assumptions and adversary model
Library and infrastructure landscape: which production-ready libraries and frameworks exist for each technology, their maturity level, their known limitations, and their support availability
Hybrid architecture assessment: whether combining technologies (FL + DP, MPC + TEE, FHE + DP) would provide a better privacy-utility-performance trade-off than any single technology
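To illustrate the primitive behind MPC and FL secure aggregation, the toy Python sketch below uses additive secret sharing: each input is split into random shares that sum to it modulo a prime, so any strict subset of shares reveals nothing and the parties can reconstruct only the total. The modulus, party count, and scenario are illustrative; a real deployment would use an audited MPC framework (MP-SPDZ, MOTION) rather than this sketch.

```python
import secrets

P = 2**61 - 1  # prime modulus (toy parameter)

def share(value: int, n_parties: int) -> list[int]:
    """Split value into n additive shares mod P; any n-1 shares are
    uniformly random and reveal nothing about the value."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def aggregate(partial_sums: list[int]) -> int:
    """Combine the per-party share-sums; only the total is reconstructed."""
    return sum(partial_sums) % P

# Illustrative scenario: three hospitals jointly compute a total case
# count without any party revealing its individual count.
inputs = [120, 45, 300]
per_party = [share(v, 3) for v in inputs]
# Party i receives the i-th share of every input and publishes only the
# sum of what it received.
partial_sums = [sum(s[i] for s in per_party) % P for i in range(3)]
total = aggregate(partial_sums)
```

The same structure underlies secure aggregation in federated learning, where the “values” are model gradient vectors and the server learns only their sum.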
Feasibility Output
Technology recommendation: primary technology recommendation with documented justification, and the alternatives considered with why they were not recommended for this use case
Privacy-utility-performance trade-off analysis: for the recommended technology, the specific trade-off that must be accepted — the accuracy loss from DP, the latency overhead of FHE, the convergence conditions for FL
Feasibility verdict: is the recommended technology feasible for this use case with current hardware and library maturity? If yes, at what implementation cost? If not, what must change for it to become feasible (hardware advance, algorithm improvement, use case relaxation)?
Implementation cost estimate: the engineering effort required to implement the recommended technology for the specified use case, to the level of precision available from the feasibility analysis
Go/no-go recommendation: a clear statement of whether proceeding to architecture design is justified by the feasibility analysis — including the case where the technology is not currently feasible for the use case and investment should be deferred
When the feasibility verdict is “not yet feasible”
Some PET use cases are not practically feasible on current hardware. Fully homomorphic encryption for real-time inference on complex neural networks is not currently practical at production latency requirements. Highly communication-efficient MPC for very large numbers of parties has not yet been solved at scale. When the feasibility analysis concludes that a use case cannot be solved with current PET capabilities, we will say so, explain what constraints prevent feasibility, and identify whether a modified use case (reduced model complexity, approximate computation, relaxed latency requirement) would be feasible. We do not recommend proceeding to architecture design when the technology is not ready for the use case.
What Your Team Must Provide
Privacy problem description: which data, which parties, what must be kept private from whom — specific enough to distinguish the relevant PET threat models
Computation specification: what function is being computed — a description precise enough to estimate the circuit depth or communication complexity for MPC/FHE feasibility
Performance constraints: hard latency and throughput requirements, hardware budget, network topology between parties if applicable
Regulatory context: which regulatory frameworks apply and what they require for privacy protection of the data in question
What Is Not in This Engagement
Architecture design: Type 1 produces the technology recommendation and feasibility assessment; the full architecture is the subject of Type 2 or Type 3
Proof of concept implementation: the feasibility assessment uses published benchmarks and circuit complexity analysis, not a prototype implementation
Regulatory compliance documentation: the Type 1 output includes regulatory mapping; full compliance documentation is produced in Type 2 or Type 3 as appropriate
Engagement Type 2
Single-System PET Architecture & Implementation Specification
For organisations with a specific production system that must be designed or redesigned to use privacy-enhancing technology for a single defined use case. Builds from the Type 1 technology selection (or from an independently-established technology decision) to produce a complete architecture and implementation specification for the PET deployment: the protocol design, the data flow architecture, the cryptographic parameter selection, the performance optimisation approach, the privacy budget management, the security analysis, and the implementation specification for the engineering team.
£65,000
Fixed · less £22k Type 1 credit · VAT excl.
12 weeks
Excludes production implementation, which adds 3–12 months depending on the technology and system complexity.
Architecture Design
Full data flow architecture: every path from raw data through the PET protocol to the output, with the trust boundary at each stage and the data state (plaintext, encrypted, shared, noisy) at each transition
Protocol specification: for MPC, the specific protocol, its round complexity, and its communication structure; for FHE, the scheme, parameter set, and circuit structure; for FL, the aggregation protocol and secure aggregation specification; for DP, the mechanism, sensitivity analysis, and privacy accountant design
Cryptographic parameter selection: key sizes, security parameters, noise levels — with formal justification for each choice and the security level it provides
Auxiliary information threat modelling: identification of all auxiliary information a realistic adversary would have and assessment of what they can infer from the protocol output combined with that information
Privacy guarantee formalisation: the formal privacy guarantee the designed system provides, stated with its assumptions and the adversary model it protects against
Plaintext leakage analysis: identification of every path through which plaintext could escape the protected environment and the architectural control for each
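As a concrete instance of a DP mechanism specification with its sensitivity analysis, the Python sketch below implements the Laplace mechanism, sampling the noise as the difference of two exponential draws (which is Laplace-distributed). The values shown are purely illustrative; a production deployment should use an audited implementation such as the Google DP library rather than hand-rolled noise sampling.

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float, rng: random.Random) -> float:
    """epsilon-DP Laplace mechanism: adds noise of scale sensitivity/epsilon.
    Sensitivity is the maximum change in the query output from adding or
    removing one record (1 for a counting query)."""
    scale = sensitivity / epsilon
    # Difference of two iid Exp(1) draws is Laplace(0, 1); scale it.
    e1 = -math.log(1.0 - rng.random())
    e2 = -math.log(1.0 - rng.random())
    return true_value + scale * (e1 - e2)

# Illustrative release: a count of 1,203 matching records at epsilon = 0.5.
rng = random.Random(7)
noisy_count = laplace_mechanism(1203, sensitivity=1.0, epsilon=0.5, rng=rng)
```

The sensitivity argument is where the formal analysis lives: a mechanism specification that does not derive the sensitivity of the actual query provides no guarantee, however carefully the noise is sampled.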
Performance & Optimisation
Performance profiling: benchmarking the specified protocol on representative hardware against the production performance requirements, with explicit pass/fail against the requirements
Optimisation specification: the specific algorithmic and implementation optimisations required to meet performance requirements — batching strategies, circuit optimisation for FHE, gradient compression for FL, protocol selection for MPC
Hardware acceleration specification: where GPU or dedicated acceleration (Intel HEXL for NTT, NVIDIA for FHE bootstrapping, hardware MPC acceleration) can reduce latency to acceptable levels
Scaling architecture: how the system scales as data volume, participant count, or query rate increases — the bottlenecks at each scale point and the architectural solutions
Privacy-utility trade-off documentation: explicit quantification of the accuracy/utility cost of the privacy mechanism, with the rationale for the selected trade-off point
Implementation & Compliance
Library selection: specific library recommendation (TensorFlow Federated, PySyft, OpenFHE, SEAL, MOTION, MP-SPDZ, Google DP library, etc.) with version and configuration requirements
Implementation specification: component-by-component engineering specification for the privacy-critical components, written at the level of detail required for implementation without architectural reinterpretation
Test specification: the cryptographic correctness tests, the privacy guarantee verification tests, and the performance acceptance tests for the implemented system
UK GDPR Article 25 compliance documentation: the privacy-by-design assessment demonstrating how the architecture satisfies the data minimisation and integrity/confidentiality requirements
Privacy budget management specification: for DP deployments, the privacy accountant implementation, budget tracking, and enforcement mechanism
Not included: production implementation; ongoing privacy budget monitoring.
The performance profiling finding that most commonly changes the architecture
Homomorphic encryption schemes optimised for one operation type (CKKS for approximate arithmetic) perform inadequately for others (exact Boolean operations). MPC protocols optimised for small circuits are impractical for the circuit sizes that real ML inference requires. The performance profiling step, conducted against the actual production hardware budget and the actual circuit complexity of the specified computation, frequently reveals that the initially selected approach cannot meet the latency requirement and that a different algorithm, a different parameter set, or a hybrid approach is required. This finding is much better made in the architecture phase at £65,000 than in the implementation phase at £300,000+.
What Your Team Must Provide
Confirmed technology selection from Type 1 or an equivalent prior assessment — Type 2 begins from a technology decision, not from technology selection
Hardware budget and infrastructure constraints: the compute, memory, and network resources available for the PET deployment
Engineering team profile: what PET library experience the implementation team has — the implementation specification is calibrated to the team’s familiarity with the relevant libraries and cryptographic concepts
Data flow documentation: the current system architecture showing where the sensitive data flows from source to use, as the basis for the PET insertion point analysis
What Is Not in This Engagement
Production implementation: your engineering team implements from the specification, or an implementation partner is engaged to work from it
Multi-party programme management: where the PET involves multiple independent organisations (the typical MPC deployment), each organisation’s internal implementation is their responsibility; the Type 2 specification addresses the protocol and interface that all parties implement
Full multi-party collaborative programme: where the primary challenge is coordinating multiple independent organisations through a PET programme, Type 3 is appropriate
Engagement Type 3
Multi-Party Privacy Programme
For deployments where the privacy-enhancing technology involves multiple independent organisations — competitor banks jointly computing financial risk, hospital networks collaborating on diagnostic models, government departments sharing data for policy analytics, pharmaceutical companies pooling clinical trial data. Multi-party programmes have all the technical complexity of a single-organisation deployment plus the organisational complexity of coordinating independent parties through a joint technical programme. Legal framework, governance, liability allocation, and trust establishment between parties are programme requirements that are as consequential as the cryptographic design. All Type 3 engagements are individually scoped.
From £120,000
Individually scoped · VAT excl.
From 20 weeks
Legal framework and inter-party governance are typically the longest phases. Technical design cannot be finalised until the legal framework is agreed.
What Type 3 Adds
Inter-party trust model: what each party must trust about the other parties for the chosen PET to provide the intended privacy guarantee, and whether that trust level is achievable given the parties’ relationships
Legal framework design: the contractual structure that governs the joint computation — data sharing agreements, liability allocation, dispute resolution, exit provisions — designed in coordination with each party’s legal counsel
Governance structure: the operational governance for the multi-party programme — who can modify the protocol, who audits compliance, how disputes about protocol behaviour are resolved, who approves new participants
Data controller/processor analysis: the UK GDPR legal basis for each party’s data processing in the context of the joint computation, and whether any party becomes a joint controller as a result
Audit architecture: how each party can verify that the other parties are implementing the protocol as specified, without the verification mechanism itself revealing protected information
Why Multi-Party Programmes Are Different
Technology selection cannot be made unilaterally: the protocol must be acceptable to all parties, which typically requires negotiation about which party’s security requirements constrain the choice
Data heterogeneity is almost always an issue: independent organisations’ data is rarely identically distributed, and the implications for federated convergence or MPC circuit complexity must be assessed across all parties’ datasets
Network topology between parties is a constraint: MPC communication overhead depends on the network latency and bandwidth between parties, which may differ significantly from laboratory benchmarks
Each party has its own legal counsel who will have views on the legal framework, and those views will not all be consistent from the outset — legal framework negotiation is often the critical path
Participant withdrawal must be planned for: a party may leave the programme after implementation is committed, and the protocol must handle dropout without compromising the privacy guarantees for the remaining participants
Type 3 Requirements
Named programme sponsor at each participating organisation with authority to commit to the programme timeline and legal framework
Legal counsel at each organisation available from programme inception — the legal framework is a programme dependency, not a programme output, and cannot be deferred to the end
Preliminary data characterisation from each party: the data held, its format, its volume, and its distributional properties relative to the intended joint computation — without this, the feasibility of the chosen PET cannot be assessed across all parties
Governance framework agreement before technical design: the parties must agree on the programme governance structure before the technical specification can be finalised, because governance decisions affect the technical architecture

Client Obligations
Performance requirements must be stated as hard constraints, not aspirations
PET performance profiles span many orders of magnitude depending on the technology and the computation complexity. The difference between a 50ms latency requirement and a 5,000ms latency requirement determines which technologies are feasible. If the performance requirement is stated as aspirational rather than binding — “ideally under 100ms but we could accept more” — the architecture will be designed to meet the aspirational requirement, which may not be achievable, rather than the binding requirement, which is. State the hard constraint: the maximum latency or minimum throughput below which the system cannot serve its purpose, regardless of what performance improvements would be desirable.
If performance requirements change after the architecture is designed
Relaxed performance requirements may make a simpler or more efficient technology viable and reduce implementation cost. Tightened performance requirements may require architectural revision. Changes to the binding performance requirement after Phase 2 architecture design begins are treated as scope changes and assessed for impact before proceeding.
The privacy problem must be precisely defined — vague privacy requirements produce vague privacy architectures
The privacy requirement that motivates a PET deployment is frequently stated in qualitative terms: “we want to be sure the data stays private,” “we don’t want the cloud provider to see the data,” “we need to comply with GDPR.” These are not precise enough to select a PET. The precise statement requires: which data (identified to the specific fields or records), private from which parties (the cloud provider? a specific adversary? other participants in a federated computation?), under what adversary capability (semi-honest? malicious? with auxiliary information?), and with what formal or practical privacy guarantee (formal DP bound? practical protection without a formal guarantee?). The Type 1 engagement produces this precise specification as its first output. Engaging without this precision produces a technology recommendation that may not address the actual privacy concern.
If the privacy requirement cannot be precisely stated
The Type 1 engagement begins with a structured privacy problem elicitation workshop rather than assuming the problem is already defined. The workshop produces the precise specification. The workshop is included in the Type 1 fee. The time it takes depends on the organisation’s familiarity with the privacy threat model; complex or novel privacy problems take longer to specify precisely.
RJV Obligations
Privacy guarantees stated formally with explicit assumptions — never as qualitative claims
Privacy guarantees for PET deployments are stated formally: “this protocol provides (ε, δ)-differential privacy with ε=X, δ=Y, under the assumption that the curator is trusted and the adversary has no auxiliary information beyond the output.” They are never stated qualitatively: “this system provides strong privacy.” The difference matters because qualitative claims cannot be audited, cannot be composed with other privacy guarantees, and do not translate into specific risk levels. When we state a privacy guarantee, we state it formally, we state its assumptions, and we state what it does not protect against. If the formal guarantee is weaker than the client’s requirement, we say so before the architecture is finalised, not after it is implemented.
If the achievable formal guarantee is weaker than the stated requirement
We disclose this in the Type 1 feasibility assessment before any architecture design begins. The options are: relax the requirement to what is achievable, select a different technology that provides a stronger guarantee at higher cost, or accept that the use case cannot be addressed with current PET capabilities. We do not proceed to architecture design with a guarantee deficit that has not been explicitly accepted.
Technology recommendations are vendor-neutral and library-neutral
We have no commercial relationships with PET library vendors, cloud confidential computing providers, or PET infrastructure companies. The library and framework recommendations in our specifications are based on technical fitness: maturity, performance on the specified computation, security audit status, maintenance activity, and community support. Where multiple libraries are comparable, we document the comparison and note the factors that would cause us to prefer one in specific circumstances. We do not recommend a specific library because of vendor relationships or because we have experience with that library at the expense of recommending a better-suited alternative.
If our recommended library is not available in the client’s technology stack or requires licensing that the client cannot accept
We identify the best available alternative within the constraints and document the trade-offs relative to the original recommendation. Where the best available alternative within constraints is materially inferior to the unconstrained recommendation, we document this explicitly so the constraint can be reviewed.

Start with a 90-minute session. Bring your privacy problem — the specific data, the specific parties, the specific computation — and we will tell you which technology can solve it and whether it is currently feasible.

We apply the technology selection framework in the session: mapping your privacy problem to the four PET threat models, assessing the computation complexity against current technology performance, and identifying the most promising technology candidates for your specific use case. By the end of the session you know whether your privacy problem is solvable with current PET capabilities, which technology or combination of technologies is most likely to solve it, and what the primary technical and organisational obstacles are.

If the answer is that your use case is not feasible with current PET capabilities — the latency requirement cannot be met, the computation is too complex for current FHE or MPC implementations, the data heterogeneity is too severe for federated convergence — you will know that before spending 12 months and £500,000 discovering it at the implementation stage.

Format
Video call or in-person in London. 90 minutes.
Cost
Free. No commitment.
Lead time
Within 5 business days of contact.
Bring
Your privacy problem: what data, from which parties, must stay private from whom. The computation you need to perform over the data. Your performance constraints. The regulatory framework that motivates the privacy requirement. Any prior work you have done — proofs of concept, vendor conversations, research papers you have considered. And honestly: whether you already have a technology in mind or are genuinely open to the assessment outcome.
Attendees
Data scientist or ML engineer who understands the computation and the data, and a privacy or security lead who understands the regulatory and adversary context. From RJV: a specialist in applied cryptography and privacy-enhancing technologies.
After
Written technology assessment summary within 3 business days. Type 1 scope and fee proposal within 5 business days if you want to proceed to a formal feasibility assessment.