Ed448
Introduction
Since Edwards’ discovery of a new elliptic curve form in 2007, implementors have produced a steady stream of implementations of this form. Edwards curves tend to be easier to implement securely than the previously used curve shapes because they support complete addition formulas, which have no exceptional cases that would lead to a division by zero. Edwards curves are also faster, and their formulas are marginally simpler. However, curves in Edwards or twisted Edwards form always have a cofactor divisible by 4, so prime-order curves such as the NIST curves cannot be written in this form.
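For reference, here is the standard complete Edwards addition law (a general illustration, not specific to the curve designed here). On an Edwards curve x^2 + y^2 = 1 + d x^2 y^2 over a field in which d is a non-square, two points are added as
\[
(x_1, y_1) + (x_2, y_2) \;=\; \left( \frac{x_1 y_2 + x_2 y_1}{1 + d\,x_1 x_2 y_1 y_2},\; \frac{y_1 y_2 - x_1 x_2}{1 - d\,x_1 x_2 y_1 y_2} \right),
\]
and when d is a non-square the denominators are provably nonzero for every pair of curve points, which is exactly the “no exceptional cases” property described above.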
Here I detail the design of an Edwards curve with a 448-bit field. I hope that this curve will provide enough security to satisfy conservative users, but still be fast enough for those who are performance-conscious. It would therefore be useful as a more conservative supplement to Curve25519 and Ed25519.
Security rationale
Before going on, it is important to consider why a stronger elliptic curve would be desirable. In particular, why would anyone need a curve stronger than the existing 256-bit-field curves? These curves are said to require about as much work to break as AES-128 (e.g. Curve25519 has a work factor of roughly 2^126), but in strong attack models they may require more work than AES. Depending on the mode, symmetric encryption may be susceptible to multiple-target attacks, which could allow an attacker to recover the first of n keys in time approximately 2^128/n. For elliptic curves, batch attacks do not speed up the recovery of the first key, only of the subsequent ones.
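As an illustrative calculation (the target count of 2^40 below is an arbitrary example, not a figure from the original analysis), a multiple-target key search against 128-bit symmetric keys with n = 2^40 targets costs about
\[
\frac{2^{128}}{n} \;=\; \frac{2^{128}}{2^{40}} \;=\; 2^{88}
\]
operations to recover the first key, whereas for an elliptic curve the cost of recovering the first key is unchanged no matter how many public keys the attacker has collected.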
What sorts of attacks might break an elliptic curve with a conjectured security level near 128 bits? Here are several possibilities:
- An attacker could use brute force. This is unlikely to be feasible for several decades at least, but designers might be concerned about it for long-term security.
- An attacker might build a quantum computer capable of running Shor’s algorithm. This would break every elliptic curve cryptosystem that could fit in that computer’s memory, so a larger curve would not be helpful.
- A mathematical breakthrough might render all elliptic curve cryptography weak, or at least much weaker than expected. Depending on the breakthrough, a larger curve might resist attack due to its sheer size, or it might fall along with the smaller ones.
- A breakthrough might break only curves with special properties, such as complex multiplication or a certain field shape. Or it might break all curves which do not have those properties.
- A protocol might have a loose security bound, and might allow an attacker to break it with only a tiny fraction of the work of solving the discrete log problem. This might enable an attack on curves that were previously out of reach, but only in that protocol.
- Security bugs or side channels might compromise an implementation. The defense against this is simplicity, not field size.
- Security architects might want to over-engineer a system, for marketing or just for extra confidence. Or, having already done this, they might want to migrate from the NIST or Brainpool elliptic curves to an Edwards curve without weakening their design.
Some of these issues can be mitigated by using a slightly larger curve, such as Scott’s curve modulo 2^336 − 3. Others favor a significantly larger curve, but it is difficult to evaluate exactly how much larger. If a significantly larger curve is chosen, the usual work-factor estimate means almost nothing, since any feasible attack on such a curve would require a breakthrough significant enough to render that estimate obsolete.
I decided to design a curve at a “significantly larger”, so to speak “overkill”, security level. The currently popular curves in this range include the NIST and Brainpool curves at 384, 512 and 521 bits, so I targeted this range. Since it is always possible to use a larger curve at the cost of worse performance, I aimed to find the best trade-off between performance and field size.