Post-quantum migration is not “install the new algorithm and remove the old one.” It is a comprehensive cryptographic transformation of every system in the estate that uses public-key cryptography. The obstacles below are not edge cases — they are the norm. Every significant PQC migration programme encounters most of them. Understanding them before beginning the migration determines whether the programme takes 18 months or 4 years to reach the same outcome.
01
You do not know where all your cryptography is
The first requirement of any migration is a complete inventory of what is being migrated. Most organisations have no comprehensive cryptographic asset inventory. Cryptography exists in places that are not obvious — embedded in commercial off-the-shelf software, in TLS handshakes that are never reviewed, in custom-written protocol implementations from a decade ago, in IoT devices that were deployed and forgotten, in middleware that nobody owns, in legacy systems that predate the team that operates them. A PQC migration without a cryptographic inventory is a migration of the systems the team knows about. The systems the team does not know about remain vulnerable.
What happens without an inventory
An organisation completes a PQC migration of its primary application stack. A regulator performs an independent assessment. It finds a legacy system used for inter-departmental communications that was not in the migration scope because it was not in anyone’s inventory. The legacy system uses RSA-1024 — already classically weak, and the first target of a CRQC. The migration programme is declared incomplete. A second programme is required.
How the engagement addresses this
Phase 1 is a comprehensive cryptographic asset inventory using active scanning, passive traffic analysis, code review, configuration analysis, and vendor documentation review. The inventory is the foundation of every subsequent decision. We will not proceed to strategy design until the inventory is sufficiently complete — meaning we have covered all in-scope systems with documented confidence levels for each.
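The "documented confidence levels" requirement can be sketched as a small data model. This is a minimal illustration only; the record fields, discovery-method labels, and confidence values are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class CryptoAsset:
    system: str
    discovery_method: str   # e.g. "active-scan", "passive-tls", "code-review"
    algorithm: str          # e.g. "RSA-2048", "ECDSA-P256"
    confidence: str         # "high" | "medium" | "low"

def coverage_report(assets, in_scope_systems):
    """Split in-scope systems into those with at least one high-confidence
    finding and those still treated as inventory gaps."""
    high = {a.system for a in assets if a.confidence == "high"}
    covered = in_scope_systems & high
    return covered, in_scope_systems - covered

assets = [
    CryptoAsset("payments-api", "passive-tls", "ECDSA-P256", "high"),
    CryptoAsset("legacy-msg-bus", "code-review", "RSA-1024", "low"),
]
covered, gaps = coverage_report(assets, {"payments-api", "legacy-msg-bus"})
# The legacy system has only a low-confidence finding, so it stays a gap
# and blocks progression to strategy design.
```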
02
PQC key and signature sizes are significantly larger than classical equivalents
The NIST-standardised PQC algorithms are mathematically sound and considered secure against quantum attack. They are also significantly larger than the RSA and ECC algorithms they replace. ML-KEM-768 (Kyber) has a public key of 1,184 bytes versus 65 bytes for an uncompressed P-256 ECDH public key. ML-DSA-65 (Dilithium) has a signature size of 3,309 bytes versus 64 bytes for Ed25519. These size increases have real performance and compatibility implications: TLS handshake sizes increase, packet fragmentation occurs on MTU-limited networks, certificate chain sizes increase, HSM storage requirements increase, and protocol implementations that assumed fixed key sizes fail silently or loudly depending on how defensively they were written.
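The arithmetic behind these implications is worth making concrete. A minimal sketch using the published parameter sizes named above; the MTU payload figure is an illustrative typical value, not a protocol constant.

```python
# Published sizes in bytes for the algorithms discussed above.
X25519_PUBKEY = 32
MLKEM768_PUBKEY = 1184          # ML-KEM-768 encapsulation (public) key
ED25519_SIG = 64
MLDSA65_SIG = 3309              # ML-DSA-65 signature

# A hybrid X25519+ML-KEM-768 client key share carries both public values.
hybrid_share = X25519_PUBKEY + MLKEM768_PUBKEY   # 1,216 bytes vs 32 classical

# Illustrative TCP payload budget on a standard 1500-byte Ethernet MTU.
MTU_PAYLOAD = 1460

# The hybrid share alone consumes most of one packet, so a ClientHello with
# ordinary extensions can fragment where the classical one never did.
print(hybrid_share, MTU_PAYLOAD - hybrid_share)   # 1216 244
print(MLDSA65_SIG // ED25519_SIG)                 # roughly 51x signature growth
```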
Where this causes failures
A financial trading system migrates its message authentication to ML-DSA. The trading protocol was designed with fixed-size authentication fields based on Ed25519 signature sizes. The larger ML-DSA signatures exceed the field length. The parsing code truncates silently. Authentication fails intermittently on a subset of message types. The failure pattern takes 3 weeks of investigation to attribute to the signature size change because no one documented the field size constraint.
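The truncation failure mode above can be reproduced in a few lines. The wire format and field names here are invented for illustration, but the silent truncation is exactly how fixed-width packing behaves: Python's struct pads or truncates `s` fields to fit.

```python
import struct

# Invented wire format: 16-byte payload plus a fixed 64-byte signature
# field, sized for Ed25519 when the protocol was designed.
FIXED_FMT = "!16s64s"

payload = b"ORDER-0001-BUY.."
mldsa_sig = b"\x5a" * 3309          # stand-in for an ML-DSA-65 signature

frame = struct.pack(FIXED_FMT, payload, mldsa_sig)   # truncates silently
_, recovered = struct.unpack(FIXED_FMT, frame)
assert recovered == mldsa_sig[:64]  # only the first 64 bytes survive,
assert recovered != mldsa_sig       # so verification will fail downstream

# Defensive alternative: a length-prefixed field tolerates any algorithm.
frame2 = struct.pack("!16sI", payload, len(mldsa_sig)) + mldsa_sig
(sig_len,) = struct.unpack_from("!I", frame2, 16)
assert frame2[20:20 + sig_len] == mldsa_sig
```

The length-prefixed variant is the "field size extension" remediation in practice: the parser reads the declared length instead of assuming one.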
How the engagement addresses this
Performance and compatibility impact assessment as part of the migration strategy: for each system in the inventory, the size impact of the PQC migration is calculated, tested in a representative non-production environment, and documented before the migration specification is written. Protocol implementations with fixed-size fields are identified and the remediation for each (field size extension, protocol version negotiation, or alternative algorithm selection) is specified before implementation begins.
03
Hardware security modules do not yet fully support PQC algorithms
Hardware Security Modules (HSMs) — the tamper-resistant devices used for high-assurance key management, certificate authority operations, and code signing — are updated through firmware upgrades and hardware refresh cycles that lag the publication of cryptographic standards by 12–36 months. Many HSMs currently in production do not support the NIST PQC algorithms in hardware. Software-only PQC implementations on general-purpose hardware do not provide the side-channel resistance and key protection properties that the HSM was deployed to provide. Migrating to PQC in software while retaining the HSM for key protection creates a hybrid that may be less secure than either approach alone.
What organisations discover at this point
An organisation discovers its HSM vendor’s PQC-capable firmware is in beta for its current HSM model and will not be production-ready for 14 months. The alternative — a new HSM model with native PQC support — requires a full HSM replacement programme including key migration, HA reconfiguration, and integration testing. What was planned as a 6-month cryptographic migration becomes a 2-year infrastructure programme.
How the engagement addresses this
HSM inventory and vendor roadmap assessment as a specific component of Phase 1. For each HSM in scope: current firmware version, PQC support status, vendor roadmap for PQC firmware availability, and the lead time for hardware refresh if firmware upgrade is not available. The migration strategy accounts for HSM readiness constraints and includes interim measures for high-assurance operations during the transition period.
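The remediation decision for each HSM can be expressed as a simple triage rule. A sketch only: the field names, thresholds, and remediation labels are illustrative assumptions, not vendor guidance.

```python
def hsm_remediation(pqc_firmware_months, hw_refresh_months, deadline_months):
    """Pick the remediation path that fits inside the programme deadline.
    pqc_firmware_months may be None if the vendor has no firmware roadmap."""
    if pqc_firmware_months is not None and pqc_firmware_months <= deadline_months:
        return "firmware-upgrade"
    if hw_refresh_months <= deadline_months:
        return "hardware-refresh"
    return "interim-controls"   # e.g. quantum-safe tunnel in front of the HSM

# Vendor firmware lands in 14 months against a 12-month deadline, and a
# hardware refresh takes 24: only interim measures fit the window.
assert hsm_remediation(14, 24, 12) == "interim-controls"
assert hsm_remediation(6, 24, 12) == "firmware-upgrade"
```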
04
Third-party dependencies control your cryptographic migration pace
Most organisations’ cryptographic posture is substantially determined by the libraries and platforms they depend on — OpenSSL, BouncyCastle, Java’s JCE, Windows CNG, Apple’s Security framework, cloud provider TLS implementations. Each of these has its own PQC support timeline, its own algorithm selection, and its own performance characteristics. An organisation that wants to migrate its web API to ML-KEM key exchange is dependent on its TLS library’s support, its load balancer’s support, its CDN’s support, its client libraries’ support, and every intermediate network device that terminates or inspects TLS. Any one of these that does not support the negotiated algorithm causes the connection to fall back to classical algorithms.
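The fallback dynamic can be illustrated as the set intersection that negotiation effectively computes across every TLS-terminating hop. Hop names are invented for illustration; `X25519MLKEM768` is the hybrid group name used in the TLS key-exchange drafts.

```python
# The effective key-exchange options on a path are the intersection of
# what every TLS-terminating hop on that path supports.
hops = {
    "client":        {"X25519MLKEM768", "X25519"},
    "cdn":           {"X25519"},                    # no ML-KEM support yet
    "load-balancer": {"X25519MLKEM768", "X25519"},
    "origin":        {"X25519MLKEM768", "X25519"},
}

effective = set.intersection(*hops.values())
print(effective)   # {'X25519'}: one lagging hop removes the hybrid option
```

This is why the migration strategy sequences by dependency readiness: until the CDN ships hybrid support, every connection through it is classical regardless of what the endpoints offer.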
What dependency mapping reveals
A UK financial institution attempting to migrate its customer-facing API to hybrid TLS discovers that its CDN provider does not support ML-KEM in its TLS implementation. The CDN is used for DDoS mitigation and cannot be removed from the path. The migration of the API endpoint is blocked until the CDN provider ships ML-KEM support — which is on their roadmap for Q2 of the following year. The migration plan must be revised around this constraint.
How the engagement addresses this
Dependency mapping for every system in the migration scope: for each cryptographic usage, the library, platform, or service providing the implementation is identified, its current PQC support status is documented, and its roadmap is assessed. The migration strategy sequences the migration based on dependency readiness, identifying which systems can be migrated immediately, which are blocked on specific dependencies, and what the dependency resolution options are for each block.
05
IoT and embedded devices cannot be updated and will outlive the CRQC timeline
Industrial control systems, medical devices, building management systems, network infrastructure hardware, and consumer IoT devices typically have firmware update mechanisms that are either unavailable, limited to specific components, or require physical access. Many devices currently deployed have expected operational lifespans of 10–25 years and use classical public-key cryptography implemented in hardware or in firmware that was compiled against classical cryptographic libraries. These devices will be in operation when quantum computers capable of attacking their cryptography arrive. They cannot be remediated in place. The organisation must either plan for early replacement, implement a network-layer mitigation, or accept the risk explicitly.
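The replace-or-mitigate decision can be made systematic with a triage rule like the sketch below. The 10-year CRQC horizon is a planning assumption for illustration, not a prediction, and the category labels are invented.

```python
CRQC_HORIZON_YEARS = 10   # planning assumption, revisited annually

def device_plan(firmware_updatable, years_to_eol):
    """Classify a device category by whether it can be remediated in place
    and whether its natural refresh lands before the assumed horizon."""
    if firmware_updatable:
        return "migrate-in-place"
    if years_to_eol < CRQC_HORIZON_YEARS:
        return "replace-at-eol"        # natural refresh arrives in time
    return "early-replace-or-segment"  # outlives the horizon unremediated

assert device_plan(False, 4) == "replace-at-eol"
assert device_plan(False, 15) == "early-replace-or-segment"
```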
Why this is frequently underestimated
A healthcare trust operates 340 networked medical devices. Assessment reveals that 280 of them have firmware that cannot be updated to support PQC algorithms. 180 of those devices are within 5 years of their expected end-of-life. 100 are mid-cycle replacements that the trust cannot afford to replace early on the current capital budget. The PQC programme must now include a multi-year device replacement plan and interim network segmentation as a compensating control.
How the engagement addresses this
IoT and embedded device assessment as a distinct workstream within the cryptographic inventory: for every device category, the update mechanism, the cryptographic implementation, the expected operational lifespan, and the replacement cost. The migration strategy includes IoT-specific recommendations: early replacement scheduling, network segmentation to limit the blast radius of compromised devices, quantum-safe VPN tunnels to protect IoT communications at the network layer when device-level remediation is not feasible.
06
Hybrid classical-PQC schemes must be designed carefully to avoid downgrade attacks
The recommended transitional approach is hybrid cryptography: combining a classical algorithm (ECDH) with a PQC algorithm (ML-KEM) so that the session key is secure as long as either algorithm is secure. This provides protection against both quantum attack (PQC layer) and potential PQC implementation vulnerabilities (classical layer). However, hybrid schemes introduce a negotiation phase. If the negotiation is not protected against downgrade attack — where an adversary intercepts the negotiation and forces both parties to use only the classical algorithm — the hybrid provides no quantum protection at all. Designing a hybrid scheme that resists downgrade requires careful attention to the negotiation mechanism and the failure modes of each component library’s hybrid implementation.
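The core combining construction can be sketched in a few lines. This is an illustration, not a production design: the stand-in byte strings replace real X25519 and ML-KEM outputs, and the combiner shown is a plain HKDF (RFC 5869) over the concatenated secrets.

```python
import hashlib
import hmac

def hkdf_sha256(ikm, salt=b"", info=b"", length=32):
    """HKDF-Extract then HKDF-Expand per RFC 5869, SHA-256."""
    prk = hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, t = b"", b""
    for i in range((length + 31) // 32):
        t = hmac.new(prk, t + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += t
    return okm[:length]

ecdh_secret  = b"\x01" * 32   # stand-in for the X25519 shared secret
mlkem_secret = b"\x02" * 32   # stand-in for the ML-KEM shared secret

# The session key depends on both inputs, so it stays unpredictable as
# long as either component algorithm remains unbroken.
session_key = hkdf_sha256(ecdh_secret + mlkem_secret, info=b"hybrid-demo")

# Changing either input changes the derived key: compromising one
# component alone does not reveal the session key.
other = hkdf_sha256(ecdh_secret + b"\x03" * 32, info=b"hybrid-demo")
assert session_key != other
```

Note that this combiner says nothing about the negotiation that selects it, which is exactly where the downgrade risk discussed next lives.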
Where downgrade vulnerabilities appear
A hybrid TLS implementation using X25519+ML-KEM-768 retains a fallback path to X25519-only for clients that do not support the hybrid key share. In TLS 1.3 an attacker cannot simply strip the ML-KEM key share from the ClientHello, because transcript authentication in the Finished messages would expose the tampering; the fallback path, however, sits outside that protection. An adversary who forces the initial hybrid handshake to fail (for example, by dropping it) triggers a client retry that offers only the classical key share. The retried session is established under classical ECDH with no quantum protection, and because it is a fresh, validly authenticated handshake, no alert is raised. The hybrid implementation provided no security against the HNDL attack because the fallback was not protected.
How the engagement addresses this
Hybrid scheme design review for every migration phase that involves hybrid cryptography: the negotiation mechanism, the fallback conditions, the fallback protection mechanisms, and the alert and audit trail for any fallback event. Where the library or platform does not support adequately protected fallback, the migration strategy specifies a non-hybrid approach for that system rather than a downgrade-vulnerable hybrid.
07
Performance degradation in constrained environments
PQC algorithms change the performance profile of every operation they replace. On modern server hardware ML-KEM is computationally competitive with ECDH (the cost appears in the larger messages rather than the arithmetic), but on constrained hardware both cycle and memory budgets are strained, and ML-DSA signature generation can be an order of magnitude or more slower than Ed25519. For high-throughput systems, the performance impact may require infrastructure scaling. For constrained embedded systems, the performance impact may mean that PQC operations exceed the available processing budget entirely. For real-time systems with strict latency requirements — industrial control, trading systems, SCADA — the added latency of PQC operations in the critical path may be unacceptable without architectural changes.
What performance testing reveals
A trading platform migrates its order authentication from Ed25519 to ML-DSA. Benchmark testing shows that ML-DSA signature verification is 38× slower than Ed25519 on the platform’s embedded order-matching hardware. The latency added by signature verification in the order-matching critical path exceeds the platform’s latency budget by 2.1ms. The migration cannot proceed without redesigning the authentication architecture to move signature verification off the critical path — a 6-month architectural programme not in the original migration scope.
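The arithmetic in this scenario generalises to a simple pre-migration check. Numbers below are illustrative, mirroring the slowdown factor in the scenario above.

```python
def fits_budget(classical_us, slowdown_factor, budget_us):
    """Estimate whether the PQC operation still fits the critical-path
    latency budget; returns (fits, estimated cost in microseconds)."""
    pqc_us = classical_us * slowdown_factor
    return pqc_us <= budget_us, pqc_us

# A 50-microsecond classical verify, 38x slower under the new algorithm,
# against a 1,000-microsecond critical-path budget:
ok, pqc_cost = fits_budget(classical_us=50, slowdown_factor=38, budget_us=1000)
print(ok, pqc_cost)   # False 1900: verification must leave the critical path
```

A benchmark on the target hardware supplies the real slowdown factor; the point is that the check runs before the migration specification is written, not after deployment.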
How the engagement addresses this
Performance impact assessment for every migration candidate: benchmark testing of the selected PQC algorithm on the specific hardware and software stack of the target system, under representative load. For systems with latency or throughput constraints, the performance assessment precedes the migration specification. If the performance impact is unacceptable, the specification addresses the architectural change required to accommodate it — not by ignoring the constraint but by solving it before implementation.
08
Certificate lifecycle management must be redesigned, not just re-keyed
The PKI infrastructure that underlies TLS, code signing, and document authentication was designed around classical algorithms with specific key size and lifetime assumptions. PQC certificates have different size properties, different performance characteristics in chain validation, and different lifetime considerations. Migrating to PQC certificates requires not just re-issuing certificates with new algorithms but rethinking the certificate hierarchy, the validation chain performance, the CRL and OCSP infrastructure, the CA key ceremony procedures for PQC CAs, and the certificate pinning implementations that will reject PQC certificates if they hard-code classical algorithm identifiers. Organisations that re-issue certificates without addressing the PKI architecture produce a PQC certificate hierarchy that is operationally worse than the classical hierarchy it replaced.
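The pinning failure in particular is mechanical and easy to demonstrate. A sketch: the DER byte strings below are stand-ins rather than real keys, and the pin computation mirrors the HPKP-style convention of a SHA-256 hash over the certificate's SubjectPublicKeyInfo.

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """HPKP-style pin: base64 of SHA-256 over the SubjectPublicKeyInfo."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

classical_spki = b"\x30\x59" + b"\x00" * 89    # stand-in P-256 SPKI
pqc_spki       = b"\x30\x82" + b"\x00" * 1200  # stand-in ML-DSA SPKI

pinned = {spki_pin(classical_spki)}            # pin set shipped in clients

# The migrated certificate carries a new key, so its pin never matches
# the shipped pin set and clients reject the connection.
assert spki_pin(pqc_spki) not in pinned

# Remediation: distribute the PQC pin alongside the classical pin before
# rotating, so both hierarchies validate during the transition.
pinned.add(spki_pin(pqc_spki))
assert spki_pin(pqc_spki) in pinned
```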
What PKI migration failures look like
A government agency migrates its document signing PKI to ML-DSA. Certificate chain validation time increases by 340% due to the larger ML-DSA signature sizes in the chain and the lack of OCSP stapling configuration for the new hierarchy. High-volume document processing applications that validate thousands of signatures per minute hit processing capacity limits. The PKI was migrated; the infrastructure to operate it at production scale was not redesigned to account for the performance change.
How the engagement addresses this
PKI architecture review as a standalone workstream: the certificate hierarchy, the CA key management procedures, the OCSP and CRL infrastructure, the certificate pinning inventory, and the chain validation performance under PQC algorithm characteristics. The migration specification includes the PKI architecture changes required, not only the certificate re-issuance plan.