SPDZ: Authenticated Secret Sharing
Introduction
- The SPDZ protocol (pronounced “speeds”) is a powerful framework for secure multiparty computation (MPC) that allows multiple parties to jointly compute a function on their private inputs without revealing them. Its key strength lies in being both efficient and secure against malicious adversaries—even if some parties deviate arbitrarily from the protocol.
- At a high level, SPDZ separates computation into two phases:
- Offline phase: a heavy preprocessing step that can be done before the actual inputs are known. Here, the parties generate correlated randomness—most importantly, authenticated secret shares of random values. These include “MACs” (message authentication codes) that allow each party to later verify the correctness of computations.
- Online phase: the lightweight part, where actual inputs are provided and the computation proceeds quickly using the preprocessed data. The MACs ensure that even if some parties cheat, the others can detect any inconsistency.
- Intuitively, you can think of SPDZ as a system where every secret value is “locked” with a tag that can be used to detect corruption. The tags (or MACs) are generated using a shared secret key $\alpha$ , and no one knows $\alpha$ entirely. During computation, parties manipulate their secret shares just as in normal MPC, but they also update the corresponding MACs in parallel. At the end, when a value is revealed, the MACs allow everyone to confirm that it was computed honestly—without exposing $\alpha$ .
Setting
SPDZ is a maliciously secure multiparty computation (MPC) protocol that achieves security even with a dishonest majority.
- Dishonest majority: up to $t = n - 1$ corruptions
Requires computational assumptions (here: somewhat homomorphic encryption).
- Active security:
- Security with abort
- No fairness
- Arithmetic circuits:
SPDZ typically operates over a finite field $\mathbb{F}_p$ with large prime $p$ .
It can also handle Boolean circuits and computations over rings.
In this article, we focus only on computation over finite fields.
Preprocessing Model
SPDZ separates computation into two phases:
- Offline (Preprocessing) phase:
Performed before inputs are known, producing authenticated randomness such as Beaver triples and random masks.
- Online phase:
Uses the preprocessed material to perform actual secure computation efficiently.
The goal of preprocessing is to offload the cryptographic and computational overhead to the offline phase, enabling fast and communication-efficient computation in the online phase.
We first describe the online phase assuming preprocessing is complete, then explain how preprocessing is performed.
Online Phase
Authentication and Notation
Each secret value $x \in \mathbb{F}_p$ is authenticated under a global MAC key $\alpha$ .
Each party $P_i$ holds:
\[\langle x \rangle_i = (x_i, \gamma_{x,i}, \alpha_i),\]where:
- $x_i$ is the share of $x$ such that $\sum_i x_i = x$ ,
- $\alpha_i$ is the share of the global MAC key $\alpha$ such that $\sum_i \alpha_i = \alpha$ ,
- $\gamma_{x,i}$ is the share of the MAC value $\gamma_x = \alpha x$ such that $\sum_i \gamma_{x,i} = \gamma_x$ .
This is sometimes called a “zero-time MAC”:
it is as hard for an adversary to forge as guessing a uniformly random element of $\mathbb{F}_p$ , i.e., forgery succeeds only with probability $1/p$ .
The security here relies on the fact that both the secret key $\alpha$ and the MAC value $\gamma_x$ remain hidden during the check. Only the value $x$ will be revealed later during verification.
Intuitively, we can view this as a simple MAC forgery game: the adversary attempts to produce a valid pair $(x+e, \gamma_x+e')$ with $e \neq 0$ satisfying
\[\alpha (x+e) = \gamma_x + e'.\]Rearranging gives
\[\alpha e = e'.\]Since $\alpha$ is secret and uniformly random in $\mathbb{F}_p$ , any attempt to satisfy this equation without knowing $\alpha$ is equivalent to guessing a random field element—succeeding only with probability $1/p$ .
All randomness used thereafter — including masks, triples, and challenge values — is authenticated using this same mechanism. We will discuss how such authenticated preprocessing is achieved later.
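To make this notation concrete, the following is a minimal Python sketch of authenticated shares. It uses toy parameters (three parties, a Mersenne prime) and a dealer-style setup in which $\alpha$ and the MAC shares are created in the clear purely for illustration; the real protocol never does this, and the helper names `share` and `auth` are hypothetical.

```python
import secrets

p = 2**61 - 1   # toy prime field (illustrative choice)
n = 3           # number of parties (illustrative choice)

def share(v):
    """Additive sharing of v over F_p among n parties."""
    s = [secrets.randbelow(p) for _ in range(n - 1)]
    return s + [(v - sum(s)) % p]

# Global MAC key alpha, additively shared; in SPDZ no single party ever learns alpha.
alpha = secrets.randbelow(p)
alpha_sh = share(alpha)

def auth(x):
    """Return <x>: party i holds (x_i, gamma_{x,i}); its key share alpha_i lives in alpha_sh[i]."""
    return list(zip(share(x), share(alpha * x % p)))

bx = auth(42)
# Invariant behind the notation (checked in the clear here only for illustration):
assert sum(g for _, g in bx) % p == alpha * sum(xi for xi, _ in bx) % p
```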
Inputting a Value
Suppose party $P_i$ wants to input a private value $x_i$ :
- From preprocessing, select a random authenticated sharing $\langle r \rangle$ .
- All parties send their shares of $r$ to $P_i$ , so that only $P_i$ learns the plaintext value $r$ .
- $P_i$ broadcasts $\epsilon = x_i - r$ .
- Each party then computes $\langle x \rangle = \langle r \rangle + \epsilon$ locally.
Both the value and MAC shares are updated according to the rule of addition by constant described below.
Every party with input values performs this procedure.
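Continuing the toy sketch above (same assumed helpers `share`, `auth`, `alpha_sh`, and toy parameters), the input step can be simulated roughly as follows; broadcasts and the MAC check on the opening of $r$ are omitted.

```python
import secrets

# Toy prelude, as in the first sketch.
p, n = 2**61 - 1, 3
def share(v):
    s = [secrets.randbelow(p) for _ in range(n - 1)]
    return s + [(v - sum(s)) % p]
alpha = secrets.randbelow(p)
alpha_sh = share(alpha)
def auth(x): return list(zip(share(x), share(alpha * x % p)))

def input_value(x, br):
    """The input party uses a preprocessed <r> (br), opened privately towards it."""
    r = sum(ri for ri, _ in br) % p                  # r is revealed only to the input party
    eps = (x - r) % p                                # eps = x - r is broadcast in the clear
    # <x> = <r> + eps: one designated party (index 0) adds eps to its value share,
    # and every party adds alpha_i * eps to its MAC share (addition by constant, below).
    return [((ri + eps) % p if i == 0 else ri, (gi + alpha_sh[i] * eps) % p)
            for i, (ri, gi) in enumerate(br)]

bx = input_value(7, auth(secrets.randbelow(p)))
assert sum(xi for xi, _ in bx) % p == 7              # the shares now reconstruct the input
```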
Addition
Given two authenticated values $\langle x \rangle$ and $\langle y \rangle$ :
\[\langle z \rangle = \langle x \rangle + \langle y \rangle\]is computed locally by each party:
\[z_i = x_i + y_i, \quad \gamma_{z,i} = \gamma_{x,i} + \gamma_{y,i}.\]The MAC remains consistent because the MACs are linearly homomorphic:
\[\alpha z = \alpha(x + y) = \alpha x + \alpha y.\]
Addition by Constant
If we add a public constant $c \in \mathbb{F}_p$ :
\[\langle z \rangle = \langle x \rangle + c\]Each party performs:
\[z_i = \begin{cases} x_i + c & \text{for one designated party},\\ x_i & \text{otherwise,} \end{cases} \qquad \gamma_{z,i} = \gamma_{x,i} + \alpha_i\, c .\]The MAC term is adjusted by $\alpha_i c$ so that $\sum_i \gamma_{z,i} = \alpha(x+c)$ .
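Both linear operations are purely local. A sketch on the same toy representation as above (hypothetical helpers, no networking):

```python
import secrets

# Toy prelude, as in the earlier sketches.
p, n = 2**61 - 1, 3
def share(v):
    s = [secrets.randbelow(p) for _ in range(n - 1)]
    return s + [(v - sum(s)) % p]
alpha = secrets.randbelow(p)
alpha_sh = share(alpha)
def auth(x): return list(zip(share(x), share(alpha * x % p)))
def open_val(b): return sum(v for v, _ in b) % p     # plain reconstruction, no MAC check here

def add(bx, by):
    """<x> + <y>: share-wise addition of value and MAC shares."""
    return [((xi + yi) % p, (gx + gy) % p) for (xi, gx), (yi, gy) in zip(bx, by)]

def add_const(bx, c):
    """<x> + c: party 0 adds c to its value share; every party adds alpha_i * c to its MAC share."""
    return [((xi + c) % p if i == 0 else xi, (gx + alpha_sh[i] * c) % p)
            for i, (xi, gx) in enumerate(bx)]

bz = add_const(add(auth(10), auth(20)), 5)
assert open_val(bz) == 35
assert sum(g for _, g in bz) % p == alpha * 35 % p   # MAC relation is preserved
```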
Multiplication
Given $\langle x \rangle$ and $\langle y \rangle$ , to compute $z = x y$ we use a Beaver triple
\[\langle a \rangle, \langle b \rangle, \langle c \rangle \quad \text{with } c = a b\]from preprocessing:
- Let $d = x - a$ , $e = y - b$ .
- Open $d, e$ publicly.
- Locally compute
\(\langle z \rangle = \langle c \rangle + d \langle b \rangle + e \langle a \rangle + de.\)
- Update the MAC shares accordingly.
This MAC update follows directly from:
\[\alpha z = \alpha(ab + db + ea + de) = \alpha ab + d\alpha b + e\alpha a + de\alpha.\]Thus $\sum_i \gamma_{z,i} = \alpha z$ , preserving authenticity.
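A sketch of Beaver multiplication on the same toy representation; the openings of $d$ and $e$ are modeled as plain reconstructions, with their MAC checks deferred exactly as in the protocol.

```python
import secrets

# Toy prelude and linear operations, as in the earlier sketches.
p, n = 2**61 - 1, 3
def share(v):
    s = [secrets.randbelow(p) for _ in range(n - 1)]
    return s + [(v - sum(s)) % p]
alpha = secrets.randbelow(p)
alpha_sh = share(alpha)
def auth(x): return list(zip(share(x), share(alpha * x % p)))
def open_val(b): return sum(v for v, _ in b) % p
def add(bx, by): return [((xi + yi) % p, (gx + gy) % p) for (xi, gx), (yi, gy) in zip(bx, by)]
def add_const(bx, c):
    return [((xi + c) % p if i == 0 else xi, (gx + alpha_sh[i] * c) % p)
            for i, (xi, gx) in enumerate(bx)]
def scale(bx, c):                                    # c * <x>: scale value and MAC shares
    return [(xi * c % p, gx * c % p) for xi, gx in bx]

def beaver_mul(bx, by, ba, bb, bc):
    """<z> = <x*y> from a triple (<a>, <b>, <c>) with c = a*b."""
    d = open_val(add(bx, scale(ba, p - 1)))          # d = x - a, opened
    e = open_val(add(by, scale(bb, p - 1)))          # e = y - b, opened
    bz = add(bc, add(scale(bb, d), scale(ba, e)))    # <c> + d<b> + e<a>
    return add_const(bz, d * e % p)                  # ... + d*e as a public constant

a, b = secrets.randbelow(p), secrets.randbelow(p)
triple = (auth(a), auth(b), auth(a * b % p))         # would come from preprocessing
bz = beaver_mul(auth(6), auth(7), *triple)
assert open_val(bz) == 42
assert sum(g for _, g in bz) % p == alpha * 42 % p   # MAC relation is preserved
```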
Opening Values
- When a value $\langle x \rangle$ needs to be revealed:
- Open the value:
Each party sends its share $x_i$ to all others, and everyone reconstructs $x = \sum_i x_i$ .
- Compute local tags:
Each party computes
\(d_i = \alpha_i x - \gamma_{x,i}.\)
- Commit–reveal on $d_i$ :
- Commit phase: Each party broadcasts a binding commitment to $d_i$ .
This prevents a corrupted party from adapting its $d_i$ after seeing the honest parties’ values.
- Reveal phase: Once all commitments are posted, each party opens its committed $d_i$ .
Everyone checks that the revealed $d_i$ match the commitments and that
\[\sum_i d_i = 0.\]If this equality fails, the protocol aborts immediately.
This check ensures MAC consistency and is equivalent to the MAC forgery game: any successful forgery would imply knowledge of $\alpha$ , which is only possible with probability $1/p$ .
- Alternatively, the opening can be performed by a single designated party, let’s say $P_1$ . Every party sends its share $x_i$ to $P_1$ , who reconstructs $x$ and broadcasts the result.
This variant is equivalent because, in the standard opening approach, a corrupted party could already send incorrect $x_i$ values to others. Delegating the reconstruction to one party does not increase this risk: if $P_1$ is corrupted, its misbehavior has the same effect, namely introducing errors into the opened value. What matters for correctness is whether the MAC check passes; if it fails, the protocol aborts.
The broadcast used here is a two-round broadcast with abort; since the protocol allows abort, strong broadcast consistency is unnecessary.
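A sketch of a MAC-checked opening, with a salted hash standing in for a binding commitment and all "parties" living in one process; this is a minimal illustration, not the actual broadcast machinery.

```python
import hashlib
import secrets

# Toy prelude, as in the earlier sketches.
p, n = 2**61 - 1, 3
def share(v):
    s = [secrets.randbelow(p) for _ in range(n - 1)]
    return s + [(v - sum(s)) % p]
alpha = secrets.randbelow(p)
alpha_sh = share(alpha)
def auth(x): return list(zip(share(x), share(alpha * x % p)))

def commit(d):
    """Salted hash commitment (toy stand-in for a binding commitment scheme)."""
    salt = secrets.token_bytes(32)
    return hashlib.sha256(salt + d.to_bytes(32, "big")).digest(), salt

def checked_open(bx):
    """Open <x> and run the commit-reveal MAC check; raise (abort) on failure."""
    x = sum(xi for xi, _ in bx) % p                                   # everyone broadcasts x_i
    d = [(alpha_sh[i] * x - gx) % p for i, (_, gx) in enumerate(bx)]  # d_i = alpha_i*x - gamma_{x,i}
    coms = [commit(di) for di in d]                                   # commit to every d_i first ...
    for (c, salt), di in zip(coms, d):                                # ... then reveal and verify
        assert hashlib.sha256(salt + di.to_bytes(32, "big")).digest() == c
    if sum(d) % p != 0:
        raise RuntimeError("MAC check failed: abort")
    return x

assert checked_open(auth(123)) == 123
```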
Batch MAC Checking
Performing a MAC check after every multiplication (each involving two openings) would be inefficient.
Instead, SPDZ combines all checks into a single batched verification.
Suppose $t+1$ values have been opened:
\[\langle x_0 \rangle, \langle x_1 \rangle, \dots, \langle x_t \rangle.\]Rather than checking each independently, the parties form a random linear combination using coefficients $e_0, e_1, \dots, e_t \in \mathbb{F}_p$ :
(generated during preprocessing and revealed only after all openings are completed).
Define a virtual combined value:
\[y = \sum_{j=0}^{t} e_j\, x_j.\]
Each party then holds the corresponding MAC share
\[\gamma_{y,i} = \sum_{j=0}^{t} e_j \gamma_{x_j,i}.\]The parties now perform a single MAC check on $\langle y \rangle$ , exactly as in the normal opening procedure:
each party computes
\[d_i = \alpha_i\, y - \gamma_{y,i},\]
commits to $d_i$ , and verifies that
\[\sum_i d_i = 0.\]It is crucial that the coefficients $e_j$ are chosen after all openings have been completed; otherwise, an adversary could adapt its forgeries to the coefficients and escape detection. The random coefficients $e_j$ prevent the adversary from selectively canceling errors—it must forge the MACs for all opened values simultaneously to pass the check.
Because the $e_j$ are chosen only afterward, the check achieves a soundness error of $1/p$ .
This procedure can also be further optimized using batch broadcast:
instead of performing a full two-round broadcast for every opening, parties can send only hashes of all received values in the second round, which substantially reduces communication overhead.
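A sketch of the batched check on the toy representation; the commit-reveal on the $d_i$ is omitted, and the coefficients are simply sampled after the openings.

```python
import secrets

# Toy prelude, as in the earlier sketches.
p, n = 2**61 - 1, 3
def share(v):
    s = [secrets.randbelow(p) for _ in range(n - 1)]
    return s + [(v - sum(s)) % p]
alpha = secrets.randbelow(p)
alpha_sh = share(alpha)
def auth(x): return list(zip(share(x), share(alpha * x % p)))

def batch_mac_check(opened, brackets, coeffs):
    """One MAC check over a random linear combination of all opened values.

    opened[j]  : the publicly opened value x_j (possibly tampered with)
    brackets[j]: the corresponding <x_j> (value and MAC shares)
    coeffs[j]  : random e_j, revealed only after all openings
    """
    y = sum(e * x for e, x in zip(coeffs, opened)) % p
    gamma_y = [sum(e * b[i][1] for e, b in zip(coeffs, brackets)) % p for i in range(n)]
    d = [(alpha_sh[i] * y - gamma_y[i]) % p for i in range(n)]
    return sum(d) % p == 0                           # commit-reveal on the d_i omitted here

xs = [secrets.randbelow(p) for _ in range(5)]
brs = [auth(x) for x in xs]
es = [secrets.randbelow(p) for _ in xs]              # sampled only after the openings
assert batch_mac_check(xs, brs, es)                  # honest openings pass
xs[2] = (xs[2] + 1) % p                              # a tampered opening ...
assert not batch_mac_check(xs, brs, es)              # ... is caught, except with probability ~1/p
```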
Polynomial MAC Check
While the batched MAC check greatly reduces the number of verification rounds, it still requires opening multiple random coefficients $e_j$ , which can be expensive for large circuits.
To make this more efficient, SPDZ introduces the polynomial check, which uses powers of a single random value instead of many independent ones.
Let $e \in \mathbb{F}_p$ be an authenticated random value generated during preprocessing and revealed only after all openings are completed.
Define the polynomially combined value:
\[y = \sum_{j=0}^{t} e^{j}\, x_j.\]
Then continue the MAC check as described above.
This approach requires only one random value $e$ rather than many independent $e_j$ , significantly reducing preprocessing and communication overhead while maintaining the same checking structure.
Conceptually, the parties are verifying that a random degree- $t$ polynomial—constructed from all prior openings—evaluates to zero at the random point $e$ . A nonzero degree- $t$ polynomial over $\mathbb{F}_p$ has at most $t$ roots, so the soundness error here is $t/p$ .
The trade-off here is a slightly weaker soundness bound compared to the batch method where the soundness error is $1/p$ .
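In code, the only change relative to the batched check is how the coefficients are derived; a small fragment, assuming the `batch_mac_check` sketch from the previous section.

```python
p = 2**61 - 1  # toy prime field, as before

def poly_coeffs(e, t):
    """Coefficients 1, e, e^2, ..., e^t derived from a single random field element e."""
    return [pow(e, j, p) for j in range(t + 1)]

# e is opened only after all x_0, ..., x_t have been opened; the resulting list is then
# passed as `coeffs` to batch_mac_check from the previous sketch.
```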
Preprocessing Phase
In the offline phase, SPDZ generates authenticated randomness for later use in the online computation.
This includes random values, Beaver triples for multiplication, and authenticated MAC shares.
The preprocessing uses a somewhat homomorphic encryption (SHE) layer that allows the parties to generate these materials without revealing their plaintexts.
Distributed Decryption
Here, we assume an SHE scheme that can evaluate many additions but only one multiplication before the noise grows too large to decrypt correctly.
We can view ciphertexts under the homomorphic encryption scheme as pairs of polynomials $(c_0, c_1)$ in the ring $\mathbb{Z}_q[X]/(f(X))$ , where $q \gg p$ and $p$ is the plaintext domain.
The secret key is a polynomial $s$ , and decryption of a ciphertext $(c_0, c_1)$ is done as:
\[m = (c_0 + s \cdot c_1) \bmod p.\]
To distribute the decryption key, we split $s$ additively as:
\[s = s_1 + s_2 + \cdots + s_n,\]where each party $P_i$ holds its own share $s_i$ .
Each $P_i$ computes and sends
\[d_i = -s_i \cdot c_1 + t_i,\]where $t_i \bmod p = 0$ and $t_i$ has small coefficients, serving as extra “noise” to hide $s_i$ .
Finally, the decrypted message is reconstructed as:
\[m = (c_0 - (d_1 + \cdots + d_n)) \bmod p.\]This achieves only semi-honest security: the added noise $t_i$ ensures that each $d_i$ reveals nothing about $s_i$ , but nothing prevents a corrupted party from sending an incorrect $d_i$ . This is enough for our purposes, since corrupted parties can always inject errors into openings anyway; such errors will be caught by the MAC check.
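A toy numeric sketch of distributed decryption. Real ciphertexts are pairs of ring elements; here plain integers modulo $q$ stand in for them, the noise bounds are arbitrary illustrative choices, and a centered reduction handles wrap-around explicitly.

```python
import secrets

p = 2**61 - 1          # plaintext modulus
q = 2**127 - 1         # ciphertext modulus, q >> p (toy choice; real schemes use ring elements)
n = 3

def centered(x, mod):
    """Representative of x modulo `mod` in (-mod/2, mod/2]."""
    x %= mod
    return x - mod if x > mod // 2 else x

def toy_encrypt(m, s):
    """Scalar stand-in for a ciphertext (c0, c1) with c0 + s*c1 = m + p*(small noise) mod q."""
    c1 = secrets.randbelow(q)
    noise = secrets.randbelow(2**10)
    c0 = (m + p * noise - s * c1) % q
    return c0, c1

def distributed_decrypt(c0, c1, s_shares):
    """Each party publishes d_i = -s_i*c1 + t_i with t_i a small multiple of p.

    In the real protocol the t_i must be much larger ("noise drowning") to hide
    information about s_i; tiny values are used here only to keep the toy exact.
    """
    d = [(-si * c1 + p * secrets.randbelow(2**10)) % q for si in s_shares]
    return centered(c0 - sum(d), q) % p

s_shares = [secrets.randbelow(q) for _ in range(n)]
s = sum(s_shares) % q
c0, c1 = toy_encrypt(12345, s)
assert distributed_decrypt(c0, c1, s_shares) == 12345
```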
Generating triples
Multiplication triples $\langle a \rangle, \langle b \rangle, \langle c \rangle$ are generated as follows:
- Local encryption generation:
Each party $P_i$ selects random $a_i, b_i, r_i$ and broadcasts
\(A_i = \text{Enc}_{pk}(a_i), \quad B_i = \text{Enc}_{pk}(b_i), \quad R_i = \text{Enc}_{pk}(r_i),\)
along with zero-knowledge proofs of plaintext knowledge. This ensures that each party contributes a well-formed encryption of a share it actually knows, rather than an arbitrary ciphertext.
- Homomorphic aggregation:
The ciphertexts are added homomorphically:
\(A = \sum_i A_i, \quad B = \sum_i B_i, \quad R = \sum_i R_i,\quad D = A \cdot B.\)
The resulting ciphertext $D$ encrypts a noisy version of $ab$ , since noise accumulates during multiplication.
- Distributed decryption:
Using the distributed decryption method above, the parties jointly decrypt $D - R$ , obtaining $d = ab - r + e$ . Decryption is the step where corrupted parties can inject an error $e$ .
- Constructing a fresh encryption:
Obtain a fresh encryption of $c$ as
\(C = R+\text{Enc}_{pk}(d).\)
This will be used later when adding the MAC to $c$ .
- Output shares:
$P_1$ outputs $a_1, b_1, c_1 = r_1 + d$ ;
all other parties output $a_i, b_i, c_i = r_i$ .
Hence, the sum satisfies $c = ab + e$ .
The error term $e$ will be addressed by the MAC mechanism introduced next.
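A data-flow sketch of triple generation with the encryption layer stripped away: only the quantity that the real protocol actually decrypts, $d = ab - r$, is treated as public, and the adversarial error is set to zero.

```python
import secrets

p, n = 2**61 - 1, 3   # toy parameters, as before

# Each party contributes random a_i, b_i, r_i (sent only in encrypted form in the real protocol).
a_sh = [secrets.randbelow(p) for _ in range(n)]
b_sh = [secrets.randbelow(p) for _ in range(n)]
r_sh = [secrets.randbelow(p) for _ in range(n)]
a, b, r = (sum(v) % p for v in (a_sh, b_sh, r_sh))

# Homomorphic evaluation plus distributed decryption of D - R reveals only d = a*b - r
# (plus an adversarial error e, taken to be 0 here); a, b, r themselves stay hidden.
d = (a * b - r) % p

# Output shares: P_1 adds d to its share of r, every other party keeps r_i.
c_sh = [(r_sh[0] + d) % p] + r_sh[1:]
assert sum(c_sh) % p == a * b % p   # c = ab when e = 0
```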
Adding MACs
Once and for all, the parties generate an encryption $K = \text{Enc}_{pk}(\alpha)$ of the global MAC key $\alpha$ in the same way as the encryptions above: each party broadcasts an encryption of its own share $\alpha_i$ together with a proof of plaintext knowledge, and the ciphertexts are summed homomorphically.
Given a ciphertext $X = \text{Enc}_{pk}(x)$ , we wish to compute additive shares of its MAC $\alpha x$ .
- Each party $P_i$ chooses random $s_i$ and broadcasts $S_i = \text{Enc}_{pk}(s_i)$ with a ZK proof of plaintext knowledge.
- Compute homomorphically:
\(S = \sum_i S_i, \quad KX = K \cdot X.\)
- Use distributed decryption to decrypt $KX - S$ , yielding $d = \alpha x - s + e'$ .
- $P_1$ outputs $m_1 = s_1 + d$ ; all others output $m_i = s_i$ .
Now the MAC shares satisfy:
\[\sum_i m_i = \alpha x + e',\]where $e'$ is an error term chosen by the adversary.
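The same data-flow view for MAC generation, again with the encryption and zero-knowledge proofs stripped away and no adversarial error.

```python
import secrets

p, n = 2**61 - 1, 3   # toy parameters, as before

alpha = secrets.randbelow(p)   # available only as K = Enc_pk(alpha) in the real protocol
x = secrets.randbelow(p)       # available only as X = Enc_pk(x); neither value is ever opened

# Each party picks a random mask s_i (broadcast encrypted, with a ZK proof, in the real protocol).
s_sh = [secrets.randbelow(p) for _ in range(n)]
s = sum(s_sh) % p

# Distributed decryption of K*X - S reveals only d = alpha*x - s (plus an adversarial error, 0 here).
d = (alpha * x - s) % p

# P_1 outputs s_1 + d, everyone else outputs s_i: together the parties now share alpha*x.
m_sh = [(s_sh[0] + d) % p] + s_sh[1:]
assert sum(m_sh) % p == alpha * x % p
```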
Detecting Errors in Multiplication Triples
The generated triples $\langle a \rangle, \langle b \rangle, \langle c \rangle$ may not perfectly satisfy $c = ab$ due to the noise term $e$. To ensure correctness, SPDZ performs a verification phase before the triples are used in computation.
For each test, an additional sacrifice triple $\langle x \rangle, \langle y \rangle, \langle z \rangle$ is consumed to verify the validity of $\langle a \rangle, \langle b \rangle, \langle c \rangle$. Note that these tests can be run in parallel for all triples.
Take an unreliable triple $\langle a \rangle, \langle b \rangle, \langle c \rangle$ , and do the following:
- Open a random value $\langle t \rangle$ (the same $t$ is used for all tested triples).
- Compute
\(u = t a - x, \quad v = b - y,\)
partially opening each to obtain $u, v$ .
- Compute and open
\(t c - z - u y - v x - u v.\)
If this value is not $0$ , the protocol aborts.
- Verify MACs for all partial openings, using the same MAC check procedure as in the online phase.
To see why this works, note that with $u = t a - x$ and $v = b - y$ we have $u y + v x + u v = t\,ab - x y$ , so the opened quantity equals $t(c - ab) - (z - x y)$ . If $ab \neq c$ , this is zero for at most one specific value of $t$ ; thus a bad triple escapes detection only if the random $t$ hits that value or the adversary forges the MAC check on the partial openings. Either event occurs with probability at most $1/p$ .
Hence, after this batch test, all remaining triples satisfy $c = ab$ with overwhelming probability. Essentially, we sacrifice one triple to obtain a correct triple.
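A sketch of the sacrifice test on plain additive shares, in the form derived above; the MAC checks on the partial openings are omitted.

```python
import secrets

p, n = 2**61 - 1, 3   # toy parameters, as before

def share(v):
    s = [secrets.randbelow(p) for _ in range(n - 1)]
    return s + [(v - sum(s)) % p]

def open_sh(sh):
    return sum(sh) % p   # plain reconstruction; MAC checks omitted in this sketch

def sacrifice_check(tri, sac, t):
    """Test triple tri = (a, b, c) by consuming sac = (x, y, z); all entries are share vectors."""
    a, b, c = tri
    x, y, z = sac
    u = open_sh([(t * ai - xi) % p for ai, xi in zip(a, x)])   # u = t*a - x
    v = open_sh([(bi - yi) % p for bi, yi in zip(b, y)])       # v = b - y
    w = [(t * ci - zi - u * yi - v * xi) % p for ci, zi, yi, xi in zip(c, z, y, x)]
    w[0] = (w[0] - u * v) % p                                  # subtract the public term u*v once
    return open_sh(w) == 0                                     # t*c - z - u*y - v*x - u*v == 0 ?

def rand_triple(err=0):
    a, b = secrets.randbelow(p), secrets.randbelow(p)
    return (share(a), share(b), share((a * b + err) % p))

t = secrets.randbelow(p)                                          # one public t for all tested triples
assert sacrifice_check(rand_triple(), rand_triple(), t)           # correct triples pass
assert not sacrifice_check(rand_triple(err=1), rand_triple(), t)  # a bad triple is caught (w.h.p.)
```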
At this point, the preprocessing phase completes, producing authenticated random values and verified multiplication triples.
These are then used directly in the online computation phase described earlier, ensuring correctness and active security with abort under dishonest majority.