We live in a completely connected society. A society connected by a variety of devices: laptops, mobile phones, wearables, self-driving or self-flying *things*. We have standards for a common language that allows these devices to communicate with each other. This is critical for wide-scale deployment – especially in cryptography where the smallest detail has great importance.

One of the most important standards-setting organizations is the National Institute of Standards and Technology (NIST), which is hugely influential in determining which standardized cryptographic systems see worldwide adoption. At the end of 2016, NIST announced it would hold a multi-year open project with the goal of standardizing new post-quantum (PQ) cryptographic algorithms secure against both quantum and classical computers.

Many of our devices have very different requirements and capabilities, so it may not be possible to select a “one-size-fits-all” algorithm during the process. NIST mathematician Dustin Moody indicated that the institute will likely select more than one algorithm:

“There are several systems in use that could be broken by a quantum computer – public-key encryption and digital signatures, to take two examples – and we will need different solutions for each of those systems.”

Initially, NIST selected 82 candidates for further consideration from all submitted algorithms. At the beginning of 2019, this process entered its second stage. Today, there are 26 algorithms still in contention.

### Post-quantum cryptography: what is it really and why do I need it?

In 1994, Peter Shor made a significant discovery in quantum computation. He found an algorithm for integer factorization and computing discrete logarithms, both believed to be hard to solve in classical settings. Since then it has become clear that the ‘hard problems’ on which cryptosystems like RSA and elliptic curve cryptography (ECC) rely – integer factoring and computing discrete logarithms, respectively – are efficiently solvable with quantum computing.

A quantum computer can help to solve some of the problems that are intractable on a classical computer. In theory, it could efficiently solve some fundamental problems in mathematics. This amazing computing power would be highly beneficial, which is why companies are actually trying to build quantum computers. At first, Shor’s algorithm was merely a theoretical result – quantum computers powerful enough to execute it did not exist – but this is quickly changing. In March 2018, Google announced a 72-qubit universal quantum computer. While this is not enough to break, say, RSA-2048 (far more is needed), many fundamental engineering problems have already been solved.

In anticipation of wide-spread quantum computing, we must start the transition from classical public-key cryptography primitives to post-quantum (PQ) alternatives. It may be that consumers will never get to hold a quantum computer, but a few powerful attackers who do obtain one can still pose a serious threat. Moreover, under the assumption that current TLS handshakes and ciphertexts are being captured and stored, a future attacker could crack these stored session keys and use them to decrypt the corresponding ciphertexts. Even strong security guarantees, like forward secrecy, do not help much in that scenario.

In 2006, the academic research community launched a conference series dedicated to finding alternatives to RSA and ECC. This so-called *post-quantum cryptography* should run efficiently on a classical computer, but it should also be secure against attacks performed by a quantum computer. As a research field, it has grown substantially in popularity.

Several companies, including Google, Microsoft, Digicert and Thales, are already testing the impact of deploying PQ cryptography. Cloudflare is involved in some of this testing, but we want to be a company that leads in this direction. The first thing we need to do is understand the real costs of deploying PQ cryptography, and those costs are not at all obvious.

### What options do we have?

Many submissions to the NIST project are still under study. Some are very new and little understood; others are more mature and already standardized as RFCs. Some have been broken or withdrawn from the process; others are more conservative or illustrate how far classical cryptography would need to be pushed so that a quantum computer could not crack it within a reasonable cost. Some are very slow and big; others are not. But most cryptographic schemes can be categorized into these families: lattice-based, multivariate, hash-based (signatures only), code-based and isogeny-based.

Nevertheless, for some algorithms there is a concern that they may be too inconvenient to use on today’s Internet. We must also be able to integrate new cryptographic schemes with existing protocols, such as SSH or TLS. To do that, designers of PQ cryptosystems must consider these characteristics:

- Latency caused by encryption and decryption on both ends of the communication channel, assuming a variety of devices from big and fast servers to slow and memory-constrained IoT (Internet of Things) devices
- Small public keys and signatures to minimize bandwidth
- A clear design that facilitates cryptanalysis and the identification of weaknesses that could be exploited
- Use of existing hardware for fast implementation

The work on post-quantum public-key cryptosystems must be done in full view of organizations, governments, cryptographers, and the public. Emerging ideas must be properly vetted by this community to ensure widespread support.

### Helping Build a Better Internet

To better understand the post-quantum world, Cloudflare began experimenting with these algorithms and used them to provide confidentiality in TLS connections.

With Google, we are proposing a wide-scale experiment that combines client- and server-side data collection to evaluate the performance of key-exchange algorithms on actual users’ devices. We hope that this experiment helps choose an algorithm with the best characteristics for the future of the Internet. With Cloudflare’s highly distributed network of access points and Google’s Chrome browser, both companies are in a very good position to perform this experiment.

Our goal is to understand how these algorithms act when used by real clients over real networks, particularly candidate algorithms with significant differences in public-key or ciphertext sizes. Our focus is on how different key sizes affect handshake time in the context of Transport Layer Security (TLS) as used on the web over HTTPS.

Our primary candidates are an NTRU-based construction called HRSS-SXY (by **H**ülsing – **R**ijneveld – **S**chanck – **S**chwabe, and Tsunekazu **S**aito – Keita **X**agawa – Takashi **Y**amakawa) and the isogeny-based Supersingular Isogeny Key Encapsulation (SIKE). Both algorithms are described below in the section “Dive into post-quantum cryptography”. The table below shows a few characteristics of both algorithms; performance timings were obtained by running the BoringSSL speed test on an Intel Skylake CPU.

KEM | Public Key size (bytes) | Ciphertext (bytes) | Secret size (bytes) | KeyGen (op/sec) | Encaps (op/sec) | Decaps (op/sec) | NIST level
---|---|---|---|---|---|---|---
HRSS-SXY | 1138 | 1138 | 32 | 3952.3 | 76034.7 | 21905.8 | 1
SIKE/p434 | 330 | 346 | 16 | 367.1 | 228.0 | 209.3 | 1

Currently, the most commonly used key-exchange algorithm (according to Cloudflare’s data) is the classical X25519. Its public keys are 32 bytes, and on the same Skylake CPU BoringSSL can generate 49301.2 key pairs and perform 19628.6 key agreements every second.

Note that HRSS-SXY shows a significant speed advantage, while SIKE has a size advantage. In our experiment, we will deploy these two algorithms on both the server side using Cloudflare’s infrastructure, and the client side using Chrome Canary; both sides will collect telemetry information about TLS handshakes using these two PQ algorithms to see how they perform in practice.

### What do we expect to find?

In 2018, Adam Langley conducted an experiment with the goal of evaluating the likely latency impact of a post-quantum key exchange in TLS. Chrome was augmented with the ability to include a dummy, arbitrarily-sized extension in the TLS ClientHello (a fixed number of bytes of random noise). After taking into account the performance and key sizes offered by different types of key-exchange schemes, he concluded that constructs based on structured lattices may be most suitable for future use in TLS.

However, Langley also observed a peculiar phenomenon: client connections measured at the 95th percentile had much higher latency than the median. This suggests that in those cases, isogeny-based systems may be a better choice. In the section “Dive into post-quantum cryptography”, we describe the difference between the isogeny-based SIKE and lattice-based NTRU cryptosystems.

In our experiment, we want to more thoroughly evaluate and ascribe root causes to these unexpected latency increases. We would particularly like to learn more about the characteristics of those networks: What causes the increased latency? How does the performance cost of isogeny-based algorithms impact the TLS handshake? We want to answer key questions, like:

- What is a good ratio for speed-to-key size (or how much faster could SIKE get to achieve the client-perceived performance of HRSS)?
- How do network middleboxes behave when clients use new PQ algorithms, and which networks have problematic middleboxes?
- How do the different properties of client networks affect TLS performance with different PQ key exchanges? Can we identify specific autonomous systems, device configurations, or network configurations that favor one algorithm over another? How is performance affected in the long tail?

### Experiment Design

Our experiment will involve both server- and client-side performance statistics collection from real users around the world (all the data is anonymized). Cloudflare is operating the server-side TLS connections. We will enable the CECPQ2 (HRSS + X25519) and CECPQ2b (SIKE + X25519) key-agreement algorithms on all TLS-terminating edge servers.

In this experiment, the ClientHello will contain a CECPQ2 or CECPQ2b public key (but never both). Additionally, Chrome will always include X25519 for servers that do not support post-quantum key exchange. The post-quantum key exchange will only be negotiated in TLS version 1.3 when both sides support it.
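The hybrid construction can be sketched as follows: both sides agree on a classical (X25519) secret and a post-quantum secret, and the two are combined before key derivation, so an attacker must break both components. The helper below is a minimal sketch of such a combiner; the function name and the single SHA-256 call standing in for the TLS 1.3 key schedule are illustrative assumptions, not BoringSSL’s actual API:

```python
import hashlib

def combine_hybrid_secret(classical_ss: bytes, pq_ss: bytes) -> bytes:
    # Sketch of a CECPQ2-style combiner: concatenate the classical (X25519)
    # and post-quantum (HRSS or SIKE) shared secrets before key derivation.
    # The real TLS 1.3 key schedule (HKDF-based) is approximated by one hash.
    return hashlib.sha256(classical_ss + pq_ss).digest()

# Both endpoints hold the same pair of shared secrets, so they derive the
# same session key; an attacker must recover *both* halves to do the same.
client_key = combine_hybrid_secret(b"\x01" * 32, b"\x02" * 32)
server_key = combine_hybrid_secret(b"\x01" * 32, b"\x02" * 32)
```

Changing either half of the input changes the derived key, which is why the hybrid remains safe as long as at least one component stays unbroken.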

Since Cloudflare only measures the server side of the connection, it is impossible to determine the time it takes for a ClientHello sent from Chrome to reach Cloudflare’s edge servers; however, we can measure the time it takes for the TLS ServerHello message containing the post-quantum key exchange to reach the client and for the client to respond.

On the client side, Chrome Canary will operate the TLS connection. Google will enable either CECPQ2 or CECPQ2b in Chrome for the following mix of architecture and OSes:

- x86-64: Windows, Linux, macOS, ChromeOS
- aarch64: Android

Our high-level expectation is to get similar results as Langley’s original experiment in 2018 — slightly increased latency for the 50th percentile and higher latency for the 95th. Unfortunately, data collected purely from real users’ connections may not suffice for diagnosing the root causes of why some clients experience excessive slowdown. To this end, we will perform follow-up experiments based on per-client information we collect server-side.

Our primary hypothesis is that excessive slowdowns, like those Langley observed, are largely due to in-network events, such as middleboxes or bloated/lossy links. As a first-pass analysis, we will investigate whether the slowed-down clients share common network features, like common ASes, common transit networks, common link types, and so on. To determine this, we will run a traceroute from vantage points close to our servers back toward the clients (not overloading any particular links or hosts) and study whether some client locations are subject to slowdowns for all destinations or just for some.

### Dive into post-quantum cryptography

Be warned: the details of PQ cryptography may be quite complicated. In some cases it builds on classical cryptography, and in other cases it uses completely different math. It would be hard to cover the details in a single blog post, so we aim to give you an intuition for post-quantum cryptography rather than deep, academic-level descriptions, skipping a lot of details for the sake of brevity. Nevertheless, settle in for a bit of an epic journey, because we have a lot to cover.

### Key encapsulation mechanism

NIST requires that all key-agreement algorithms have a form of key-encapsulation mechanism (KEM). A KEM is a simplified form of public-key encryption (PKE). Like PKE, it allows agreement on a secret, but in a slightly different way: the session key is an *output* of the encryption algorithm, in contrast to public-key encryption schemes, where the session key is an *input* to the algorithm. In a KEM, Alice generates a random key and uses Bob’s pre-generated public key to encrypt (encapsulate) it. This results in a ciphertext sent to Bob. Bob uses his private key to decrypt (decapsulate) the ciphertext and retrieve the random key. The idea was initially introduced by Cramer and Shoup. Experience shows that such constructs are easier to design, analyze, and implement, as the scheme is limited to communicating a fixed-size session key. Leonardo da Vinci said, “Simplicity is the ultimate sophistication,” which is very true in cryptography.

A key-exchange (KEX) protocol, like Diffie-Hellman, is a different construct: it allows two parties to agree on a shared secret that can be used as a symmetric encryption key. For example, Alice generates a key pair and sends her public key to Bob. Bob does the same and uses his own key pair with Alice’s public key to generate the shared secret. He then sends his public key to Alice, who can now generate the same shared secret. What’s worth noticing is that both Alice and Bob perform exactly the same operations.

A KEM construction can be converted to a KEX. Alice performs key generation and sends the public key to Bob. Bob uses it to encapsulate a symmetric session key and sends the resulting ciphertext back to Alice. Alice decapsulates the ciphertext received from Bob and gets the symmetric key. This is actually what we do in our experiment, to keep the integration with the TLS protocol simple.
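The encapsulate/decapsulate flow can be sketched with a toy KEM built from classic Diffie-Hellman. This is a stand-in of our own, not a PQ scheme, and the 64-bit parameters are far too small to be secure; it only shows the shape of the API, where the session key is an output of encapsulation:

```python
import hashlib

# Toy DH-based KEM (illustrative only, completely insecure parameters).
P = 0xFFFFFFFFFFFFFFC5  # 2^64 - 59, a prime
G = 5

def keygen(sk):
    # Bob's key pair: private exponent sk, public value G^sk mod P.
    return pow(G, sk, P)

def encapsulate(pk, eph):
    # Alice: the session key is an *output* of encapsulation, not an input.
    ct = pow(G, eph, P)  # ciphertext sent to Bob
    ss = hashlib.sha256(pow(pk, eph, P).to_bytes(8, "big")).digest()
    return ct, ss

def decapsulate(sk, ct):
    # Bob recovers the same session key from the ciphertext.
    return hashlib.sha256(pow(ct, sk, P).to_bytes(8, "big")).digest()
```

Alice calls `encapsulate` with Bob’s public key; Bob calls `decapsulate` on the ciphertext, and both hold the same session key, which is exactly the KEM-as-KEX flow described above.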

### NTRU Lattice-based Encryption

We will enable on our servers the CECPQ2 implemented by Adam Langley from Google; he described this implementation in detail here. This key exchange uses the HRSS algorithm, which is based on the NTRU (**N**-th degree **TRU**ncated polynomial ring) algorithm. Forgoing too much detail, I am going to explain how NTRU works with simplified examples and, finally, compare it to HRSS.

NTRU is a cryptosystem based on a polynomial ring. This means that we do not operate on numbers modulo a prime (as in RSA), but on polynomials of degree ( N ), where the *degree* of a polynomial is the highest exponent of its variable. For example, ( x^7 + 6x^3 + 11x^2 ) has degree 7.

One can add polynomials in the ring in the usual way, by simply adding their coefficients modulo some integer. In NTRU, this integer is called ( q ). Polynomials can also be multiplied, but remember: you are operating in the ring, so the result of a multiplication is always a polynomial of degree less than ( N ). In effect, the exponents of the resulting polynomial are added modulo ( N ).

In other words, polynomial ring arithmetic is very similar to modular arithmetic, but instead of working with a set of numbers less than *N*, you are working with a set of polynomials with a degree less than *N*.
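Ring multiplication is easy to see in code. The helper below is a toy sketch of our own (not an NTRU implementation): coefficients are reduced modulo ( q ) and exponents wrap around modulo ( N ):

```python
def ring_mul(a, b, N, q):
    """Multiply two polynomials in the ring Z_q[x]/(x^N - 1).

    Polynomials are lists of coefficients: a[i] is the coefficient of x^i.
    Exponents wrap around modulo N; coefficients are reduced modulo q.
    """
    out = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[(i + j) % N] = (out[(i + j) % N] + ai * bj) % q
    return out
```

For example, with ( N = 3 ) and ( q = 7 ), multiplying ( x + 1 ) by ( x + 2 ) gives ( x^2 + 3x + 2 ), while ( x^2 \cdot x^2 = x^4 ) wraps around to ( x ).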

To instantiate the NTRU cryptosystem, three domain parameters must be chosen:

- ( N ) – the degree of the polynomial ring; in NTRU the principal objects are polynomials of degree ( N-1 ).
- ( p ) – a small modulus used during key generation and decryption for reducing message coefficients.
- ( q ) – a large modulus used during algorithm execution for reducing coefficients of the polynomials.

First, we generate a pair of public and private keys. To do that, two polynomials ( f ) and ( g ) are chosen from the ring in such a way that their randomly generated coefficients are much smaller than ( q ). Then key generation computes two inverses of ( f ): $$ f_p = f^{-1} \bmod{p} \\ f_q = f^{-1} \bmod{q} $$

The last step is to compute $$ pk = p\cdot f_q\cdot g \bmod q $$ which we will use as the public key *pk*. The private key consists of ( f ) and ( f_p ). The polynomial ( f_q ) is not part of any key; however, it must remain secret.

It might be the case that after choosing ( f ), the inverses modulo ( p ) and ( q ) do not exist. In this case, the algorithm has to start from the beginning and generate another ( f ). That’s unfortunate, because calculating the inverse of a polynomial is a costly operation. HRSS improves on this issue, since it ensures that those inverses always exist, making key generation faster than initially proposed in NTRU.

The encryption of a message ( m ) proceeds as follows. First, the message ( m ) is converted to a ring element ( pt ) (there exists an algorithm for performing this conversion in both directions). During encryption, NTRU randomly chooses one polynomial ( b ) called the *blinder*. The goal of the blinder is to generate different ciphertexts per encryption. Thus, the ciphertext ( ct ) is obtained as $$ ct = (b\cdot pk + pt) \bmod q $$ Decryption looks a bit more complicated, but it can also be easily understood. Decryption uses both secret values ( f ) and ( f_p ) to recover the plaintext as $$ v = f \cdot ct \bmod q \\ pt = v \cdot f_p \bmod p $$

This diagram demonstrates why and how decryption works.
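In symbols, the reason decryption works follows from the definitions of ( pk ) and ( ct ) above: $$ f \cdot ct = f \cdot (b \cdot pk + pt) = p \cdot b \cdot g \cdot (f \cdot f_q) + f \cdot pt \equiv p \cdot b \cdot g + f \cdot pt \pmod q $$ Because all the coefficients involved are small, this expression holds without wraparound modulo ( q ). Reducing ( v ) modulo ( p ) then removes the ( p \cdot b \cdot g ) term, leaving ( f \cdot pt ), and multiplying by ( f_p = f^{-1} \bmod p ) recovers ( pt ).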

After obtaining (pt), the message (m) is recovered by inverting the conversion function.

The underlying hardness assumption is that, given two polynomials ( f ) and ( g ) whose coefficients are short compared to the modulus ( q ), it is difficult to distinguish ( pk = \frac{g}{f} ) from a random element in the ring. This means it is hard to find ( f ) and ( g ) given only the public key *pk*.

### Lattices

The NTRU cryptosystem is a grandfather of lattice-based encryption schemes. The idea of using hard lattice problems for cryptographic purposes was due to Ajtai. His work evolved into a whole area of research with the goal of creating more practical lattice-based cryptosystems.

### What is a lattice and why can it be used for post-quantum crypto?

The picture below visualizes a lattice as points in a two-dimensional space. A lattice is defined by the origin ( O ) and base vectors ( \{ b_1, b_2 \} ). Every point on the lattice is represented as a linear combination of the base vectors, for example ( V = -2b_1 + b_2 ).

There are two classical NP-hard problems in lattice-based cryptography:

- **Shortest Vector Problem (SVP)**: Given a lattice, find the shortest non-zero vector in it. In the graph, the vector ( s ) is the shortest one. The SVP problem is NP-hard only under some assumptions.
- **Closest Vector Problem (CVP)**: Given a lattice and a vector ( V ) (not necessarily in the lattice), find the lattice vector closest to ( V ). For example, the closest vector to ( t ) is ( z ).

In the graph above, it is easy for us to solve SVP and CVP by simple inspection. However, the lattices used in cryptography have much higher dimensions, say above 1000, as well as highly non-orthogonal basis vectors. On such instances, the problems become extremely hard to solve, and they are believed to be hard even for future quantum computers.

### NTRU vs HRSS

HRSS, which we use in our experiment, is based on NTRU but is a slightly better instantiation. The main improvements are:

- Faster key generation algorithm.
- NTRU encryption can produce ciphertexts that are impossible to decrypt (this is true for many lattice-based schemes), but HRSS fixes this problem.
- HRSS is a key encapsulation mechanism.

### CECPQ2b – Isogeny-based Post-Quantum TLS

Following CECPQ2, we have integrated into BoringSSL another hybrid key-exchange mechanism relying on SIKE. It is called CECPQ2b, and we will use it in our experimentation with TLS 1.3. SIKE is a key encapsulation method based on Supersingular Isogeny Diffie-Hellman (SIDH). Read more about SIDH in our previous post. The math behind SIDH is related to elliptic curves; a comparison between SIDH and the classical Elliptic Curve Diffie-Hellman (ECDH) is given below.

An elliptic curve is a set of points that satisfy a specific mathematical equation. The equation of an elliptic curve may take multiple forms; the standard form is called the *Weierstrass* equation $$ y^2 = x^3 + ax + b $$ and its shape can look like the red curve below.

An interesting fact about elliptic curves is that they have a group structure. That is, the set of points on the curve has an associated binary operation called *point addition*. The set of points on the elliptic curve is closed under addition: adding two points results in another point that is also on the elliptic curve.

If we can add two different points on a curve, then we can also add a point to itself. And if we do it multiple times, the resulting operation is known as *scalar multiplication*, denoted ( Q = k\cdot P = P+P+\dots+P ) for an integer ( k ).

Scalar multiplication is *commutative*: two scalar multiplications can be evaluated in any order, ( \color{darkred}{k_a}\cdot\color{darkgreen}{k_b} = \color{darkgreen}{k_b}\cdot\color{darkred}{k_a} ); this is an important property that makes ECDH possible.

It turns out that if an elliptic curve is chosen “correctly”, scalar multiplication is easy to compute but extremely hard to reverse. That is, given two points ( Q ) and ( P ) such that ( Q = k\cdot P ), finding the integer ( k ) is a difficult task known as the Elliptic Curve Discrete Logarithm Problem (ECDLP). This problem is suitable for cryptographic purposes.

Alice and Bob agree on a secret key as follows. Alice generates a private key ( k_a ). Then, she uses some publicly known point ( P ) and calculates her public key as ( Q_a = k_a\cdot P ). Bob proceeds in a similar fashion and gets ( k_b ) and ( Q_b = k_b\cdot P ). To agree on a shared secret, each party multiplies their private key with the public key of the other party. The result is the shared secret. Key agreement as described above works thanks to the fact that scalars commute:

$$ \color{darkgreen}{k_a} \cdot Q_b = \color{darkgreen}{k_a} \cdot \color{darkred}{k_b} \cdot P \iff \color{darkred}{k_b} \cdot \color{darkgreen}{k_a} \cdot P = \color{darkred}{k_b} \cdot Q_a $$
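This commutativity can be checked directly with a toy curve over a small prime field. The sketch below uses insecure toy parameters of our own and implements the textbook group law, verifying that ( k_a \cdot (k_b \cdot P) = k_b \cdot (k_a \cdot P) ):

```python
def ec_add(P1, P2, a, p):
    """Point addition on y^2 = x^3 + ax + b over F_p (None = point at infinity)."""
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    x1, y1 = P1
    x2, y2 = P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # P + (-P) = point at infinity
    if P1 == P2:     # tangent line (doubling)
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:            # chord through two distinct points
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Double-and-add scalar multiplication: returns k * P."""
    Q = None
    while k:
        if k & 1:
            Q = ec_add(Q, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return Q
```

For example, on ( y^2 = x^3 + 2x + 3 ) over ( F_{97} ) with ( P = (3, 6) ), multiplying by 13 and then 28 gives the same point as multiplying by 28 and then 13.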

There is a vast theory behind elliptic curves. An introduction to elliptic curve cryptography was posted before, and more details can be found in this book. Now, let’s describe SIDH and compare it with ECDH.

### Isogenies on Elliptic Curves

Before explaining the details of the SIDH key exchange, I’ll explain the three most important concepts, namely: *j-invariant*, *isogeny* and its *kernel*.

Each curve has a number that can be associated with it. Let’s call this number the *j-invariant*. This number is not unique per curve – many curves have the same value of the j-invariant – but it can be viewed as a way to group multiple elliptic curves into disjoint sets. We say that two curves are *isomorphic* if they are in the same set, called an *isomorphism class*. The j-invariant is a simple criterion for determining whether two curves are isomorphic. The j-invariant of a curve ( E ) in Weierstrass form ( y^2 = x^3 + ax + b ) is given as $$ j(E) = 1728\frac{4a^3}{4a^3 + 27b^2} $$
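The formula is easy to check numerically. The helper below is a toy sketch of our own, working over a small prime field; it computes the j-invariant and confirms that the isomorphic rescaling ( (a, b) \mapsto (a u^4, b u^6) ) leaves it unchanged:

```python
def j_invariant(a, b, p):
    """j-invariant of the curve y^2 = x^3 + ax + b over F_p."""
    num = 1728 * 4 * pow(a, 3, p) % p
    den = (4 * pow(a, 3, p) + 27 * pow(b, 2, p)) % p
    # den must be non-zero: it is (up to a constant) the curve's discriminant.
    return num * pow(den, -1, p) % p
```

Rescaling multiplies numerator and denominator by the same factor ( u^{12} ), so the j-invariant, and hence the isomorphism class, is preserved.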

When it comes to an *isogeny*, think about it as a map between two curves. Each point on some curve ( E ) is mapped by the isogeny to a point on the isogenous curve ( E' ). We denote the mapping from curve ( E ) to ( E' ) by an isogeny ( \phi ) as:

$$ \phi: E \rightarrow E' $$

Whether those two curves are isomorphic or not depends on the map. An isogeny can be visualised as:

There may exist many such mappings; each curve used in SIDH has a small number of isogenies to other curves. A natural question is how we compute such an isogeny. This is where the *kernel* of an isogeny comes in: the kernel uniquely determines an isogeny (up to isomorphism). Formulas for calculating an isogeny from its kernel were initially given by J. Vélu, and the idea of calculating them efficiently was later extended.

To finish, I will summarize what was said above with a picture.

There are two **isomorphism classes** in the picture above. The curves ( E_1 ) and ( E_2 ) are **isomorphic** and have j-invariant = 6; there is also an isogeny ( \phi_1 ) between them. As the curves ( E_3 ) and ( E_4 ) have j-invariant = 13, they are in a different isomorphism class. There exists an **isogeny** ( \phi_2 ) between the curves ( E_3 ) and ( E_2 ), so they are **isogenous**. The curves ( E_1 ) and ( E_4 ) are neither isomorphic nor isogenous.

For brevity, I’m skipping many important details, like the details of the *finite field*, the fact that isogenies must be *separable*, and that the kernel is *finite*. But curious readers can find a number of academic research papers available on the Internet.

### Big picture: similarities with ECDH

Let’s generalize the ECDH algorithm described above, so that we can swap some elements and try to use Supersingular Isogeny Diffie-Hellman.

Note that what actually happens during an ECDH key exchange is:

- We have a set of points on an elliptic curve, the set *S*
- We have a group of integers used for point multiplication, *G*
- We use an element from *G* to act on an element from *S* to get another element from *S*:

$$ G \times S \rightarrow S $$

Now the question is: what is our *G* and *S* in an SIDH setting? For SIDH to work, we need a big set of elements and something secret that will act on the elements from that set. This “group action” must also be resistant to attacks performed by quantum computers.

In the SIDH setting, those two sets are defined as:

- The set *S* is a set (graph) of j-invariants, such that all the curves are supersingular: ( S = \{ j(E_1), j(E_2), j(E_3), \dots, j(E_n) \} )
- The set *G* is a set of isogenies acting on elliptic curves and transforming, for example, the elliptic curve ( E_1 ) into ( E_n ):

### Random walk on supersingular graph

When we talk about *Isogeny Based Cryptography*, as a topic distinct from *Elliptic Curve Cryptography*, we usually mean algorithms and protocols that rely fundamentally on the structure of isogeny graphs. An example of such a (small) graph is pictured below.

Each vertex of the graph represents a different j-invariant of a set of supersingular curves. The edges between vertices represent isogenies converting one elliptic curve to another. As you can notice, the graph is strongly connected, meaning every vertex can be reached from every other vertex. In the context of isogeny-based crypto, we call such a graph a *supersingular isogeny graph*. I’ll skip some technical details about the construction of this graph (look for those here or here), but instead describe ideas about how it can be used.

As the graph is strongly connected, it is possible to *walk* a whole graph by starting from any vertex, randomly choosing an edge, following it to the next vertex and then start the process again on a new vertex. Such a way of visiting edges of this graph is called a *random walk.*
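The idea of a random walk can be sketched in a few lines. The graph below is a toy stand-in of our own (real supersingular isogeny graphs are astronomically larger): vertices play the role of j-invariants and edges the role of isogenies:

```python
import random

# A toy 3-regular graph standing in for a supersingular isogeny graph:
# each vertex represents a j-invariant, each edge an isogeny.
GRAPH = {
    0: [1, 2, 3], 1: [0, 2, 4], 2: [0, 1, 5],
    3: [0, 4, 5], 4: [1, 3, 5], 5: [2, 3, 4],
}

def random_walk(start, steps, rng):
    """Follow `steps` randomly chosen edges; the path taken is the 'secret'."""
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(GRAPH[path[-1]]))
    return path
```

Each step only chooses among a handful of edges, yet after many steps the number of possible paths, and hence possible secrets, grows exponentially.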

The random walk is a key concept that makes isogeny-based crypto feasible. When you look closely at the graph, you can notice that each vertex has a small number of edges incident to it; this is why we can compute isogenies efficiently. But for any vertex there is only a limited number of isogenies to choose from, which doesn’t look like a good basis for a cryptographic scheme. So where exactly does the security of the scheme come from? To obtain it, it is necessary to visit a couple hundred vertices. In practice, this means the secret isogeny (of *large degree*) is constructed as a composition of multiple isogenies (of *small, prime degree*). In other words, the secret isogeny is $$ \phi = \phi_n \circ \phi_{n-1} \circ \dots \circ \phi_1 $$

This property, together with other properties of the isogeny graph, is what makes some of us believe that the scheme has a good chance of being secure. More specifically, there is no efficient way of finding a path that connects ( E_0 ) with ( E_n ), even with a quantum computer at hand. The security level of the system depends on the value *n* – the number of steps taken during the walk.

The random walk is a core process used both when generating public keys and when computing shared secrets. It starts with a party generating a random value *m* (see more below), a starting curve ( E_0 ), and points ( P ) and ( Q ) on this curve. Those values are used to compute the kernel of an isogeny, the point ( R_1 ), in the following way:

$$ R_1 = P + m \cdot Q $$

Thanks to formulas given by Vélu, we can now use the point ( R_1 ) to compute the isogeny the party will use to move from one vertex to another. After the isogeny ( \phi_{R_1} ) is calculated, it is applied to ( E_0 ), which results in a new curve ( E_1 ):

$$ \phi_{R_1}: E_0 \rightarrow E_1 $$

The isogeny is also applied to the points ( P ) and ( Q ). Once on ( E_1 ), the process is repeated. This process is applied *n* times, and at the end the party ends up on some curve ( E_n ), which defines an isomorphism class and hence a j-invariant.

### Supersingular Isogeny Diffie-Hellman

The core idea in SIDH is to compose two random walks on an isogeny graph of elliptic curves in such a way that both walks end at the same node.

In order to do it, the scheme fixes public parameters – a starting curve ( E_0 ) and two pairs of base points on this curve, ( (P_A, Q_A) ) and ( (P_B, Q_B) ). Alice generates her random secret key *m* and calculates a secret isogeny ( \phi_a ) by performing a *random walk* as described above. The walk finishes with three values: the elliptic curve ( E_a ) she has ended up on, and the pair of points ( \phi_a(P_B) ) and ( \phi_a(Q_B) ) obtained by pushing ( P_B ) and ( Q_B ) through Alice’s secret isogeny. Bob proceeds analogously, which results in the triple ( \{E_b, \phi_b(P_A), \phi_b(Q_A)\} ). The triple forms a public key which is exchanged between parties.

The picture below visualizes the operation. The black dots represent curves grouped into *isomorphism classes*, represented by light-blue circles. Alice takes the orange path, ending up on a curve ( E_a ) in a different isomorphism class than Bob, who takes the dark-blue path ending on ( E_b ). SIDH is parametrized in such a way that Alice and Bob always end up in different isomorphism classes.

Upon receipt of the triple ( \{ E_a, \phi_a(P_B), \phi_a(Q_B) \} ) from Alice, Bob will use his secret value *m* to calculate a new kernel – but instead of using the points ( P_B ) and ( Q_B ) to calculate the isogeny kernel, he will now use the images ( \phi_a(P_B) ) and ( \phi_a(Q_B) ) received from Alice:

$$ R'_1 = \phi_a(P_B) + m \cdot \phi_a(Q_B) $$

Afterwards, he uses ( R'_1 ) to start the walk again, resulting in the isogeny ( \phi'_b: E_a \rightarrow E_{ab} ). Alice proceeds analogously, resulting in the isogeny ( \phi'_a: E_b \rightarrow E_{ba} ). With isogenies calculated this way, both Alice and Bob converge in the same isomorphism class. The math may seem complicated; hopefully the picture below makes it easier to understand.

Bob computes a new isogeny and starts his random walk from ( E_a ) received from Alice. He ends up on some curve ( E_{ba} ). Similarly, Alice calculates a new isogeny, applies it to ( E_b ) received from Bob, and her random walk ends on some curve ( E_{ab} ). The curves ( E_{ab} ) and ( E_{ba} ) are not likely to be the same, but the construction guarantees that they are isomorphic. As mentioned earlier, isomorphic curves have the same j-invariant, hence the shared secret is the value of the j-invariant ( j(E_{ab}) ).

Coming back to differences between SIDH and ECDH – we can split them into four categories: the elements of the group we are operating on, the cornerstone computation required to agree on a shared secret, the elements representing secret values, and the difficult problem on which the security relies.

| | ECDH | SIDH |
| --- | --- | --- |
| Secret key | an integer scalar | a secret isogeny (itself generated from an integer scalar) |
| Core computation | multiplying a point on a curve by a scalar | a random walk in an isogeny graph |
| Public key | a point on a curve | a curve, plus the images of some points under the isogeny |
| Shared secret | a point on a curve | a j-invariant |
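To make the ECDH side of the comparison concrete, here is a toy textbook example in Python: both parties multiply the other’s public point by their own secret scalar and land on the same point. The tiny curve \( y^2 = x^3 + 2x + 2 \) over \( \mathbb{F}_{17} \) with generator \( (5, 1) \) is purely illustrative and offers no security whatsoever.

```python
# Toy ECDH over the textbook curve y^2 = x^3 + 2x + 2 mod 17,
# generator (5, 1) of prime order 19. Illustrative only.
P_MOD, A = 17, 2
G = (5, 1)

def add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD)         # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

# Alice and Bob pick secret scalars, exchange public points, and converge
# on the same shared point: a * (b * G) == b * (a * G).
a_sec, b_sec = 3, 7
assert mul(a_sec, mul(b_sec, G)) == mul(b_sec, mul(a_sec, G))
```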

### SIKE: Supersingular Isogeny Key Encapsulation

SIDH could potentially be used as a drop-in replacement for the ECDH protocol. We have actually implemented a proof of concept, added it to our implementation of TLS 1.3 in the tls-tris library, and described the implementation details (together with Mozilla) in this draft. Nevertheless, there is a problem with SIDH: the keys can be used only once. In 2016, researchers came up with an active attack on SIDH that works only when public keys are reused. In the context of TLS this is not a big problem, because a fresh key pair is generated for each session (ephemeral keys), but that may not be true for other applications.

SIKE is an isogeny-based key encapsulation mechanism (KEM) which solves this problem. Bob can generate SIKE keys and upload the public part somewhere on the Internet, and then anybody can use it whenever they want to communicate with Bob securely. SIKE reuses SIDH internally – both sides of the connection always perform SIDH key generation and SIDH key agreement, and apply some other cryptographic primitives in order to convert SIDH into a KEM. SIKE is implemented in a few variants, each corresponding to a security level using 128-, 192- or 256-bit secret keys. A higher security level means a longer running time. More details about SIKE can be found here.
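The encapsulate/decapsulate flow described above is the generic KEM shape. A sketch of that interface in Python, using toy finite-field Diffie-Hellman in place of the isogeny machinery – the group parameters and helper names are illustrative, and this is NOT the SIKE construction itself:

```python
import hashlib
import secrets

# Toy KEM built from finite-field Diffie-Hellman, to illustrate the generic
# KeyGen/Encaps/Decaps shape. Parameters are illustrative, not SIKE's.
P = (1 << 127) - 1  # Mersenne prime used as a toy group modulus
G = 3

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def encaps(pk):
    """Sender: fresh ephemeral keypair; the ephemeral public key is the ciphertext."""
    esk, epk = keygen()
    shared = pow(pk, esk, P)
    return epk, hashlib.sha256(shared.to_bytes(16, "big")).digest()

def decaps(sk, epk):
    """Receiver: recover the same session key from the ciphertext."""
    shared = pow(epk, sk, P)
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()

sk, pk = keygen()            # Bob publishes pk once; the KEM makes reuse safe
ct, key_sender = encaps(pk)  # anyone encapsulates against Bob's public key
assert decaps(sk, ct) == key_sender
```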

SIKE is also one of the candidates in the NIST post-quantum “competition”.

I’ve skipped many important details to give a brief description of how isogeny-based crypto works. If you’re curious and hungry for details, look at either of these Cloudflare meetups, where Deirdre Connolly talked about isogeny-based cryptography, or this talk by Chloe Martindale during the PQ Crypto School 2017. And if you would like to know more about quantum attacks on this scheme, I highly recommend this work.

## Conclusion

Quantum computers that can break meaningful cryptographic parameter settings do not exist yet, and won’t be built for at least the next few years. Nevertheless, they have already changed the way we look at current cryptographic deployments. There are at least two reasons it’s worth investing in PQ cryptography:

- It takes a lot of time to build secure cryptography, and we don’t actually know when today’s classical cryptography will be broken. First, there is a need for a good mathematical base: an initial idea of what may be secure against something that doesn’t exist yet. Given such an idea, you also need a good implementation: constant time, resistant to timing and cache side channels, DFA, DPA, EM, and a bunch of other abbreviations indicating side-channel resistance. Then there is deployment: for example, algorithms based on elliptic curves were introduced in ’85, but only started to be widely used in production during the last decade, 20 or so years later. Obviously, the implementation must also be blazingly fast! Last, but not least, integration: we need time to develop standards that allow integrating PQ cryptography with protocols like TLS.
- Even though efficient quantum computers probably won’t exist for another few years, the threat is real. Data encrypted with current cryptographic algorithms can be recorded now with hopes of being broken in the future.

Cloudflare is motivated to help build the Internet of tomorrow with the tools at hand today. Our interest is in cryptographic techniques that can be integrated into existing protocols and widely deployed on the Internet as seamlessly as possible. PQ cryptography, like the rest of cryptography, includes many cryptosystems that can be used for communications in today’s Internet; Alice and Bob need to perform some computation, but they do not need to buy new hardware to do that.

Cloudflare sees great potential in those algorithms and believes that some of them can be used as a safe replacement for classical public-key cryptosystems. Time will tell if we’re justified in this belief!