The Problem: CPU in the Cryptographic Path
In conventional network encryption, the host CPU sits squarely in the cryptographic path. Whether the implementation is IPsec or MACsec, the processor performs key generation, key exchange, and the bulk encryption of every packet traversing the network. This means that cryptographic keys reside in system RAM, accessible to the operating system kernel, to hypervisors, and potentially to any process with elevated privileges. The attack surface is enormous: transient-execution attacks such as Spectre and Meltdown, along with classic cache-timing side channels, have repeatedly demonstrated that CPU caches, branch predictors, and speculative-execution pipelines leak sensitive data. A compromised kernel module, a rogue virtual machine, or even a carefully timed cache-probing attack can extract keys without ever touching the encrypted payload. The fundamental flaw is architectural: as long as the CPU handles cryptographic material, that material is exposed to the entire software stack running on that processor.
The CPU-Blind Approach
AllEyes takes a radically different approach: the host CPU never sees cryptographic keys and never performs encryption operations. The entire cryptographic pipeline, from key agreement through bulk encryption to integrity verification, executes exclusively within an FPGA. The host system can configure high-level policies such as which endpoints to connect and which cipher suites to authorize, but it has no mechanism to read, intercept, or influence the keys in use. From the CPU's perspective, the encryption layer is a black box. Traffic enters the FPGA in the clear on one side and exits encrypted on the other, with no intermediate state observable from the host. This architectural decision eliminates, rather than mitigates, the entire class of attacks that depend on software access to cryptographic material. There is no key to steal from RAM because the key was never in RAM.
Hardware Architecture
The AllEyes architecture enforces a strict separation between the management plane and the data plane. Network traffic enters the FPGA through dedicated high-speed transceivers, passes through the encryption engine, and exits on the network-facing ports, all without transiting through the host's PCIe bus or system memory. The management plane, which handles policy configuration and monitoring, communicates with the host through a separate, narrowband interface that carries no cryptographic material. Keys are generated and stored within the FPGA's hardware secure enclave, a physically isolated region of the chip with its own entropy source. The enclave enforces access controls at the silicon level: keys can be used for cryptographic operations but cannot be exported, read back, or transferred to external memory. Even a full compromise of the host operating system provides no path to the key material. This separation is not a software boundary enforced by privilege levels; it is a physical boundary enforced by the chip's interconnect topology.
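To make the management-plane boundary concrete, here is a minimal sketch of what a host-side policy message might look like. All names and fields (`LinkPolicy`, `peer_id`, `rekey_interval_s`, the suite labels) are hypothetical illustrations, not the actual AllEyes interface; the point is that everything the host can send over the narrowband channel is policy, never key material.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical management-plane policy message: the full extent of what
# the host may configure. No field can carry key material; keys are
# generated inside the FPGA's secure enclave and never cross this
# interface in either direction.
@dataclass
class LinkPolicy:
    peer_id: str                # remote encryptor to pair with
    allowed_suites: list[str]   # cipher suites the host authorizes
    rekey_interval_s: int       # rotation period enforced by the enclave

def encode_policy(policy: LinkPolicy) -> bytes:
    """Serialize a policy frame for the narrowband management interface."""
    return json.dumps(asdict(policy)).encode()

policy = LinkPolicy(
    peer_id="site-b-encryptor",
    allowed_suites=["AES-256-GCM"],
    rekey_interval_s=3600,
)
frame = encode_policy(policy)
```

Because the interconnect topology gives this interface no path into the enclave, even a host that forges arbitrary policy frames can at most reconfigure which links exist, never read or influence the keys protecting them.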
Line Encryption vs Packet Encryption
Traditional IPsec operates at Layer 3, encrypting individual IP packets. Each packet incurs overhead from encapsulation headers, sequence numbers, and integrity tags. Latency varies with packet size, implementation, and CPU load, typically ranging from tens of microseconds to milliseconds. AllEyes operates at Layer 2, encrypting the entire data stream at line rate, similar in concept to MACsec but implemented entirely in dedicated hardware. Every bit that enters the FPGA is encrypted and forwarded within a deterministic, fixed latency of less than one microsecond. There is no per-packet overhead, no encapsulation expansion, and no variability caused by CPU scheduling, context switches, or competing workloads. The result is wire-speed encryption with latency characteristics indistinguishable from an unencrypted link. For latency-sensitive applications such as high-frequency trading, real-time telemetry, and industrial control systems, this determinism is not a convenience but a hard requirement.
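The per-packet cost of encapsulation is easy to quantify. The sketch below uses illustrative overhead figures for ESP with AES-GCM in tunnel mode (outer IPv4 header 20 B, ESP header 8 B, IV 8 B, minimal trailer 2 B, ICV 16 B); actual values vary with mode, padding, and extension headers, whereas Layer 2 line encryption adds no per-packet bytes at all.

```python
# Illustrative per-packet encapsulation overhead for IPsec ESP
# (AES-GCM, tunnel mode): outer IP 20 + ESP hdr 8 + IV 8 + trailer 2
# + ICV 16 ≈ 54 bytes. Real deployments vary with mode and padding.
ESP_OVERHEAD = 20 + 8 + 8 + 2 + 16
LINE_OVERHEAD = 0  # L2 line encryption adds no per-packet bytes

def goodput_fraction(payload: int, overhead: int) -> float:
    """Fraction of link capacity carrying payload at a given packet size."""
    return payload / (payload + overhead)

for payload in (64, 512, 1500):
    esp = goodput_fraction(payload, ESP_OVERHEAD)
    line = goodput_fraction(payload, LINE_OVERHEAD)
    print(f"{payload:>5} B payload: ESP {esp:.1%}  line {line:.1%}")
```

The effect is most pronounced for small packets: at 64-byte payloads the fixed ~54-byte tax consumes nearly half the link, which is exactly the traffic profile of telemetry and control systems.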
Native Post-Quantum
The AllEyes key exchange implements a hybrid scheme combining ML-KEM-1024 and X25519, executed entirely within the FPGA. ML-KEM-1024, standardized by NIST as FIPS 203, provides post-quantum security at the highest defined level (NIST Level 5, equivalent to AES-256). X25519 provides classical elliptic-curve security that has been extensively analyzed and deployed for over a decade. The hybrid combination ensures that the key exchange remains secure even if one of the two algorithms is eventually broken: an attacker would need to defeat both the lattice-based scheme and the elliptic-curve scheme simultaneously. Once the shared secret is established, data-plane encryption uses AES-256-GCM, providing authenticated encryption with associated data. The entire key lifecycle, from generation through agreement to derivation and rotation, occurs within the FPGA's secure enclave, with no key material ever exposed to external memory or the host CPU.
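The hybrid construction can be sketched in a few lines. This is a simplified model, not the AllEyes implementation: the two shared secrets are stand-in random values (in the real device, both exchanges run inside the FPGA and neither secret is ever observable), the salt label is hypothetical, and the KDF shown is a minimal HKDF per RFC 5869. What the sketch demonstrates is the combining step: both secrets feed one derivation, so an attacker must break ML-KEM-1024 and X25519 simultaneously to recover the data-plane key.

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) extract-and-expand using SHA-256."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()       # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                  # expand
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder shared secrets standing in for the enclave's outputs:
ss_mlkem = os.urandom(32)    # would come from ML-KEM-1024 decapsulation
ss_x25519 = os.urandom(32)   # would come from the X25519 exchange

# Hybrid combination: concatenating both secrets into one KDF input
# means the derived key is secure as long as EITHER algorithm holds.
aes_key = hkdf_sha256(ss_mlkem + ss_x25519,
                      salt=b"alleyes-hybrid-v1",   # hypothetical label
                      info=b"aes-256-gcm data-plane key")
assert len(aes_key) == 32  # 256-bit key for AES-256-GCM
```

In the actual device this derivation, like the exchanges feeding it, executes inside the secure enclave; the sketch exists only to make the "break both or break neither" property of the hybrid scheme explicit.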
Crypto-Agility and FPGA
One of the principal advantages of an FPGA-based architecture is crypto-agility. Unlike fixed-function ASICs, which are permanently defined at the time of fabrication, FPGAs can be reprogrammed in the field through firmware updates. If a new vulnerability is discovered in a cryptographic primitive, or if standards bodies mandate a transition to a successor algorithm, the encryptor can be updated without replacing any hardware. This is not a theoretical benefit: the post-quantum transition is expected to produce multiple rounds of algorithm revisions as the cryptographic community gains operational experience with the new standards. An FPGA architecture transforms what would otherwise be a costly hardware refresh cycle into a routine firmware deployment. The same property applies to performance optimizations: as synthesis toolchains improve, existing hardware can be reflashed to achieve higher throughput or lower latency without physical modification. In a regulatory landscape where ANSSI, BSI, and NIST continue to refine post-quantum recommendations, crypto-agility is not optional but essential.
High-Performance Use Cases
The AllEyes platform scales from 800 Gbps to 6.4 Tbps of encrypted throughput, addressing the full spectrum of high-performance encryption requirements. In datacenter interconnects, where east-west traffic between compute clusters must be encrypted without introducing bottlenecks, line-rate encryption eliminates the performance tax traditionally associated with enabling security. In telecommunications infrastructure, where 5G backhaul and fronthaul links carry time-sensitive traffic under strict jitter budgets, the sub-microsecond deterministic latency ensures that encryption does not degrade quality of service. In financial services, where regulatory frameworks like DORA mandate encryption of inter-site links, the combination of post-quantum key exchange and wire-speed throughput satisfies both security and performance requirements simultaneously. In defense and government networks, where data classification policies demand encryption at every link and the threat model includes state-level adversaries with potential future quantum capabilities, the CPU-blind architecture provides a level of key isolation that software-based solutions cannot match.