oss-sec mailing list archives

Re: Best practices for signature verification


From: Demi Marie Obenour <demiobenour () gmail com>
Date: Mon, 5 Jan 2026 13:13:38 -0500

On 1/5/26 08:45, Clemens Lang wrote:
Hi,

On 3. Jan 2026, at 22:29, Demi Marie Obenour <demiobenour () gmail com> wrote:

On 1/1/26 15:41, Clemens Lang wrote:
Note that there are some outside requirements that at least companies will not be able to ignore:

- CNSA 2.0 (relevant for US government customers) does not allow SLH-DSA, only ML-DSA
- Common Criteria certification requires elliptic curves >= 384 bits or RSA >= 3072 bits, ruling out ed25519
- use of FIPS-certified primitives (historically a problem for solutions implemented in Go, or shipping their own 
implementation instead of re-using OpenSSL, for example)

Some of these rule out signify, for example.

I think I understand the constraints you are operating under.  However,
I do not believe most open source developers and cryptographers
will choose to operate within these constraints.  In my experience,
most of them are more concerned about security and ease of use
and implementation.  OpenSSL has a reputation for being difficult to
use securely, and in my experience libraries like libsodium, the Go
cryptographic libraries, or the RustCrypto crates are preferred.

I agree that most open source developers will not (I would even argue *should not*) choose anything to satisfy US 
government rules.

However, OpenPGP is also used for package signing in Linux distributions, and a few of those will care about this use 
case. All I’m saying is any alternative to OpenPGP that ignores these requirements will face adoption challenges. And 
yes, I know crypto agility has pitfalls when done wrong — but always just using ed25519 doesn’t cut it, either.

Personally, I would use ML-DSA for packages, and both ML-DSA and
SLH-DSA for metadata.  Both signatures would be required to validate
for metadata to be accepted.  That's enough for FIPS.

I would not use SLH-DSA for packages due to its size overhead, but
I would ensure that the code could support it.
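The "both signatures must validate" rule for metadata could be sketched as below. `verify_ml_dsa` and `verify_slh_dsa` are hypothetical stand-ins for real FIPS 204 / FIPS 205 verifiers, not an actual API:

```python
from typing import Callable

# A verifier takes (message, signature) and returns True on success.
# These are placeholders for real ML-DSA (FIPS 204) and SLH-DSA
# (FIPS 205) implementations, which Python does not ship natively.
Verifier = Callable[[bytes, bytes], bool]

def metadata_valid(metadata: bytes,
                   ml_dsa_sig: bytes, slh_dsa_sig: bytes,
                   verify_ml_dsa: Verifier,
                   verify_slh_dsa: Verifier) -> bool:
    """Accept metadata only if BOTH signatures verify.

    An attacker must then break both ML-DSA and SLH-DSA, so the
    combination is at least as strong as the stronger of the two.
    """
    return (verify_ml_dsa(metadata, ml_dsa_sig)
            and verify_slh_dsa(metadata, slh_dsa_sig))
```

Packages, by contrast, would carry only the ML-DSA signature, avoiding SLH-DSA's size overhead on the common path.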

Personally, I don't see cryptographers and open source developers
becoming more interested in complying with FIPS 140, CNSA, and similar
requirements.  I suspect that a better approach would be to change
the requirements to those necessary for actual security.

While I agree on technical merit, I don’t think the US government will care. It has taken years for FIPS to accept 
EdDSA. We’re talking decades, or more likely never, for ChaCha20-Poly1305 or Argon2. In the meantime, distributions 
that have users in the US public sector will probably just stay on OpenPGP.

Sadly so.

These could include things like:

- Only using algorithms that have been published in a reputable
 location, such as FIPS or a (possibly informational) IETF RFC.

Algorithms published in FIPS are by definition acceptable for the US public sector. So is being in FIPS now bad or 
good?

FIPS 140 is problematic.  ML-DSA and SLH-DSA are fine.

- Not being vulnerable to timing side-channels.

How do you propose we show the absence of timing side-channels? And why haven’t we done that for the last 20 years 
when side channels kept popping up left and right? And does that novel approach also cover power side channels?

Measurement and dynamic analysis.
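Such measurement can start from a simple timing harness like the minimal sketch below; a rigorous methodology (statistical tests over many input classes, as in tools like dudect) would be needed to draw real conclusions:

```python
import statistics
import time
from typing import Callable

def median_runtime_ns(f: Callable[[], object], reps: int = 2000) -> float:
    """Median wall-clock runtime of f() in nanoseconds over reps calls.

    The median is more robust than the mean against scheduler noise.
    """
    samples = []
    for _ in range(reps):
        t0 = time.perf_counter_ns()
        f()
        samples.append(time.perf_counter_ns() - t0)
    return statistics.median(samples)

# Example use: time a secret comparison against guesses whose matching
# prefix lengths differ.  A data-dependent early exit (as in a naive
# byte-wise ==) tends to run longer for longer matching prefixes,
# while a constant-time comparison such as hmac.compare_digest
# should show no such correlation.
```

This only probes timing; power side channels need hardware measurement and are out of scope for a harness like this.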

Any solution that hopes to be widely adopted should be able to address those, if necessary through cryptographic 
agility.
WireGuard is a very widely adopted counterexample.  It is used by
Tailscale and many commercial VPN providers.  I suspect that the
individuals and companies that push new cryptographic algorithms and
protocols are very poorly represented among Red Hat's customers.

There’s even a US senator pushing NIST to allow Wireguard [1], but as things stand today, the US public sector cannot 
use it.

That is good!

Personally, I would wrap IPsec in WireGuard.  WireGuard might not be
officially considered to provide security, but it will help against
real-world attacks.

Note that this only affects a subset of Red Hat's customers, and RHEL does package Wireguard, so I wouldn’t say that 
people and organizations that push new algorithms and protocols are poorly represented.

Look, I don’t like these US special cases, either, and NIST is too slow in adopting better algorithms (in the case of 
password-based key derivation functions, I’d even argue dangerously so). If we want a better replacement for OpenPGP, 
we should just have all requirements on the table. I just don’t want anybody to be surprised down the road when 
Amazon Linux, OpenSUSE or Azure Linux stay on OpenPGP.

For password-based key derivation functions, I would combine Argon2 and
PBKDF in a configuration that is secure if either of the individual
functions is secure.  That's easy: run both and hash the results
together.  Any attack must crack both.
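A sketch of that combiner follows. Since Argon2 is not in the Python standard library (it would need e.g. argon2-cffi), scrypt stands in here as the memory-hard half, and all cost parameters are illustrative only:

```python
import hashlib

def combined_kdf(password: bytes, salt: bytes, length: int = 32) -> bytes:
    """Derive a key that holds if EITHER underlying KDF holds.

    Run both KDFs and hash the results together; cracking the output
    then requires cracking both.  The post proposes Argon2 + PBKDF;
    scrypt substitutes for Argon2 in this stdlib-only sketch.
    """
    # PBKDF2-HMAC-SHA256; the iteration count is illustrative.
    k1 = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000,
                             dklen=length)
    # scrypt as the memory-hard KDF; n/r/p are illustrative.
    k2 = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                        dklen=length)
    # Hash the concatenation of both derived keys.
    return hashlib.sha256(k1 + k2).digest()[:length]
```

The same either/or argument applies to any pair of independent KDFs, as long as the combining hash is sound.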

[1]: https://www.wyden.senate.gov/imo/media/doc/Wyden%20Letter%20to%20NIST%20Re%20Gov%20Use%20of%20Secure%20VPNs.pdf
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
