Why One Algorithm Isn’t Enough
The history of cryptography is a history of unexpected breaks. MD5 was considered secure for over a decade before practical collision attacks appeared in 2004. SHA-1 was the standard for a decade before theoretical attacks materialized in 2005, followed by practical collisions in 2017. The NIST PQC competition itself eliminated SIKE, a fourth-round candidate, after a devastating classical attack was found in 2022, years into the evaluation process.
Lattice-based cryptography (the family underlying Dilithium) has been studied seriously for roughly two decades. Compare this to RSA’s nearly fifty years of analysis or AES’s twenty-five. While no practical attacks have emerged against the Module-LWE (MLWE) problem on which Dilithium rests, the shorter track record means a non-zero probability of surprises. A blockchain that commits to a single algorithm without a migration path is betting its entire security on that algorithm’s permanence.
The response is not to avoid commitment — Dilithium is the best available choice today — but to build the architecture for algorithm rotation into the protocol from the start. Adding a second algorithm to a system designed for one is far harder than supporting multiple algorithms from the beginning.
Dilithium-First Strategy
The BTQ project adopts a “Dilithium-first” strategy: implement one algorithm thoroughly before expanding to multiple. This is deliberate pragmatism, not algorithmic monoculture.
The rationale for Dilithium as the first algorithm:
- NIST standardization: Published as ML-DSA in FIPS 204 (August 2024), among the first finalized NIST post-quantum standards (alongside FIPS 203 and FIPS 205, released the same day). This provides institutional legitimacy and community trust.
- Security analysis: Dilithium has survived the most extensive public cryptanalysis of any PQC signature scheme — years of scrutiny during the NIST competition plus ongoing academic analysis.
- Implementation maturity: Reference implementations exist in multiple languages, hardware acceleration (AVX2) is available, and the algorithm has been integrated into major TLS libraries.
- Performance balance: Verification (~1.5ms) is faster than ECDSA (~2ms). Signing (~3ms) is practical for interactive use. Key and signature sizes, while large, are far smaller than SPHINCS+’s, though larger than Falcon’s.
Implementing one algorithm well teaches lessons that apply to all subsequent algorithms: how to handle large signatures in script validation, how to adjust block weight accounting, how to manage wallet key storage, and how to design address formats. These are generic problems whose solutions transfer to any PQC scheme.
Falcon: Smaller Signatures, Complex Math
FALCON (Fast Fourier Lattice-based Compact Signatures over NTRU) was selected alongside Dilithium in the NIST PQC competition and is being standardized as FN-DSA (FIPS 206). Its key advantage: much smaller signatures.
| Property | Dilithium2 | Falcon-512 |
|---|---|---|
| Public key | 1,312 bytes | 897 bytes |
| Signature | 2,420 bytes | ~666 bytes |
| Per-input witness | ~3,733 bytes | ~1,564 bytes |
Falcon’s ~666-byte signatures are less than a third the size of Dilithium’s, which would significantly ease the block capacity and payout batching pressures described earlier. A Falcon transaction would be roughly 6x larger than ECDSA instead of 15x.
The tradeoff is implementation complexity. Falcon’s signing process requires high-precision floating-point arithmetic to sample from a discrete Gaussian distribution over NTRU lattices. This makes constant-time implementation — essential for preventing side-channel attacks — considerably harder than for Dilithium, which uses only integer arithmetic. The signing process is also more resource-intensive, though verification is fast.
For a blockchain, Falcon’s smaller signatures make it attractive for transaction throughput, while the implementation complexity makes it a higher-risk first choice. As a second algorithm added after Dilithium has proven the integration patterns, Falcon could provide meaningful size improvements for users willing to accept a newer implementation.
SPHINCS+: The Conservative Backup
SPHINCS+ (standardized as SLH-DSA, FIPS 205) takes a fundamentally different approach to post-quantum security. While Dilithium and Falcon rely on the hardness of lattice problems, SPHINCS+ relies only on the properties of hash functions — collision resistance and preimage resistance. These are the most studied and trusted primitives in all of cryptography.
The security argument for SPHINCS+ is minimal: if hash functions work, SPHINCS+ is secure. No algebraic structure, no number-theoretic assumptions, no lattice problems. Just hashing. Grover’s algorithm provides only a quadratic speedup against hash functions, which is compensated by doubling the hash output size.
The cost is size. SPHINCS+ signatures range from 7,856 bytes (SPHINCS+-128s, optimized for size) to 49,856 bytes (SPHINCS+-256f, optimized for speed at the highest security level). Even the smallest parameter set produces signatures 3x larger than Dilithium and 12x larger than Falcon. For a blockchain, this means even fewer transactions per block and higher fees.
SPHINCS+ serves as the ultimate hedge: if a breakthrough weakens all lattice-based assumptions simultaneously, SPHINCS+ remains secure because it depends on entirely different mathematics. NIST recommends it specifically for applications requiring long-term signature verification or the most conservative security assumptions. In a multi-algorithm blockchain, SPHINCS+ would be the option for users who prioritize security certainty above all else.
Designing a Multi-Algorithm Framework
The BTQ consensus parameters include an explicit framework for multi-algorithm support:
```cpp
enum SignatureAlgorithm {
    NONE,       // Legacy mode (any algorithm accepted)
    DILITHIUM,  // CRYSTALS-Dilithium (NIST FIPS 204)
    FALCON,     // Falcon (NIST FIPS 206)
    SPHINCS     // SPHINCS+ (NIST FIPS 205)
};
```
Currently set to NONE (allowing any algorithm), this field is designed for future soft-fork activation of algorithm enforcement. The architectural pattern is intentional: adding a new algorithm requires implementing the cryptographic primitives, adding opcodes, and activating the consensus field — not redesigning the framework.
Key design decisions for multi-algorithm support:
- Per-output algorithm choice: Each UTXO can use a different signature scheme. Users select their algorithm when creating an address, not when the network activates a consensus rule.
- Cross-algorithm transactions: A single transaction can have inputs from different algorithm types (e.g., one Dilithium input and one Falcon input). The script interpreter validates each input with the appropriate algorithm.
- Algorithm-specific weight accounting: Different signature sizes should produce different weight contributions, ensuring that the block weight system correctly reflects the actual data burden of each algorithm.
Bitcoin’s BIP-360 proposal takes a similar approach: its P2QRH (Pay-to-Quantum-Resistant-Hash) output format includes an algorithm identifier so the verifier knows which scheme to use. This convergence between independent designs suggests that multi-algorithm support is a natural requirement, not an over-engineering choice.
The Cryptographic Agility Principle
Cryptographic agility is the ability of a system to transition between cryptographic algorithms without redesigning the system itself. It has been a design principle in protocol engineering at least since TLS’s cipher suite negotiation mechanism, which allows clients and servers to agree on algorithms at connection time.
For a blockchain, cryptographic agility is harder because the consensus rules are shared and immutable. You cannot negotiate algorithms per-connection; you must support specific algorithms at the protocol level, and adding new ones requires a consensus change (typically a soft fork). But you can design the protocol so that adding a new algorithm is a bounded change — new opcodes, new address prefix, new weight parameters — rather than a full protocol redesign.
The lesson from the PQC transition itself reinforces this principle. The reason Bitcoin faces such a difficult migration is that ECDSA was hardcoded as the only signature algorithm, with no framework for alternatives. Every proposal to add PQC to Bitcoin requires significant consensus changes because the original protocol assumed a single signature scheme would last forever.
Building a multi-algorithm framework from the start does not mean implementing every algorithm immediately. It means designing the data structures, consensus rules, and validation paths so that a future algorithm can be added through a predictable, bounded upgrade process. The cost is modest additional complexity in the consensus code. The benefit is avoiding the painful, contentious, system-wide migration that Bitcoin now faces.