Zyzzyva: Speculative Byzantine Fault Tolerance

Ramakrishna Kotla, Lorenzo Alvisi, Mike Dahlin, Allen Clement, and Edmund Wong
Dept. of Computer Sciences, University of Texas at Austin
{kotla,lorenzo,dahlin,aclement,elwong}@cs.utexas.edu

ABSTRACT
We present Zyzzyva, a protocol that uses speculation to reduce the cost and simplify the design of Byzantine fault tolerant state machine replication. In Zyzzyva, replicas respond to a client’s request without first running an expensive three-phase commit protocol to reach agreement on the order in which the request must be processed. Instead, they optimistically adopt the order proposed by the primary and respond immediately to the client. Replicas can thus become temporarily inconsistent with one another, but clients detect inconsistencies, help correct replicas converge on a single total ordering of requests, and only rely on responses that are consistent with this total order. This approach allows Zyzzyva to reduce replication overheads to near their theoretical minima.
Categories and Subject Descriptors
D.4.5 [Operating Systems]: Reliability—Fault-tolerance; D.4.7 [Operating Systems]: Organization and Design— Distributed systems; H.3.4 [Information Storage and Retrieval]: Systems and Software—Distributed systems
General Terms
Performance, Reliability
Keywords
Byzantine fault tolerance, Speculative execution, Replication, Output commit
1. INTRODUCTION
Three trends make Byzantine Fault Tolerant (BFT) replication increasingly attractive for practical deployment. First, the growing value of data and falling costs of hardware make it advantageous for service providers to trade increasingly inexpensive hardware for the peace of mind potentially provided by BFT replication. Second, mounting evidence of non-fail-stop behavior in real systems [2, 5, 6, 27, 30, 32, 36, 39, 40] suggests that BFT may yield significant benefits even without resorting to n-version programming [4, 15, 33]. Third, improvements to the state of the art in BFT replication techniques [3, 9, 10, 18, 33, 41] make BFT replication increasingly practical by narrowing the gap between BFT replication costs and costs already being paid for non-BFT replication. For example, by default, the Google file system uses 3-way replication of storage, which is roughly the cost of BFT replication for f = 1 failures with 4 agreement nodes and 3 execution nodes [41].
This paper presents Zyzzyva1, a new protocol that uses speculation to reduce the cost and simplify the design of BFT state machine replication [19, 35]. Like traditional state machine replication protocols [9, 33, 41], a primary proposes an order on client requests to the other replicas. In Zyzzyva, unlike in traditional protocols, replicas speculatively execute requests without running an expensive agreement protocol to definitively establish the order. As a result, correct replicas’ states may diverge, and replicas may send different responses to clients. Nonetheless, applications at clients observe the traditional and powerful abstraction of a replicated state machine that executes requests in a linearizable [13] order because replies carry with them sufficient history information for clients to determine if the replies and history are stable and guaranteed to be eventually committed. If a speculative reply and history are stable, the client uses the reply. Otherwise, the client waits until the system converges on a stable reply and history.
The challenge in Zyzzyva is ensuring that responses to correct clients become stable. Ultimately, replicas are responsible for ensuring that all requests from a correct client eventually complete, but a client waiting for a reply and history to become stable can speed the process by supplying information that will either cause the request to become stable rapidly within the current view or trigger a view change. Note that because clients do not require requests to commit but only to become stable, clients act on requests in one or two phases rather than the customary three [9, 33, 41].
Essentially, Zyzzyva rethinks the sync [28] for BFT: instead of pessimistically ensuring that replicas establish a final order on requests before communicating with a client, we move the output commit to the client. Leveraging the client in this way offers significant practical advantages.
1Zyzzyva (ZIZ-uh-vuh) is the last word in the dictionary. According to dictionary.com, a zyzzyva is “any of various tropical American weevils of the genus Zyzzyva, often destructive to plants.”

                                           PBFT        Q/U   HQ    Zyzzyva  State Machine Repl.
                                                                            Lower Bound
Cost       Total replicas                  3f+1        5f+1  3f+1  3f+1     3f+1 [31]
           Replicas with application state 2f+1 [41]   5f+1  3f+1  2f+1     2f+1
Throughput MAC ops at bottleneck server    2+(8f+1)/b  2+8f  4+4f  2+3f/b   2†
Latency    Critical path NW 1-way latencies 4          2     4     3        2/3‡

Table 1: Properties of state-of-the-art and optimal Byzantine fault tolerant service replication systems tolerating f faults, using MACs for authentication [9], and using a batch size of b [9]. Bold entries denote protocols that match known lower bounds or those with the lowest known cost. †It is not clear that this trivial lower bound is achievable. ‡The distributed systems literature typically considers 3 one-way latencies to be the lower bound for agreement on client requests [11, 21, 25]; 2 one-way latencies is achievable if no concurrency is assumed. We include detailed discussion of this table in our extended technical report [17].

Compared to state-of-the-art protocols including PBFT [9, 33, 41], Q/U [3], and HQ [10], Zyzzyva reduces cryptographic overheads and increases peak throughput by a factor of two to an order of magnitude for demanding workloads. In fact, Zyzzyva's replication costs, processing overheads, and communication latencies approach their theoretical lower bounds.
1.1 Why another BFT protocol?
The state of the art for BFT state machine replication is distressingly complex. In a November 2006 paper describing Hybrid-Quorum replication (HQ replication) [10], Cowling et al. draw the following conclusions comparing three state-of-the-art protocols (Practical Byzantine Fault Tolerance (PBFT) [9, 18, 33, 41], Query/Update (Q/U) [3], and HQ replication [10]):
• “In the regions we studied (up to f = 5), if contention is low and low latency is the main issue, then if it is acceptable to use 5f + 1 replicas, Q/U is the best choice, else HQ is the best since it outperforms [P]BFT with a batch size of 1.” [10]
• “Otherwise, [P]BFT is the best choice in this region: it can handle high contention workloads, and it can beat the throughput of both HQ and Q/U through its use of batching.” [10]
• “Outside of this region, we expect HQ will scale best: HQ’s throughput decreases more slowly than Q/U’s (because of the latter’s larger message and processing costs) and [P]BFT’s (where eventually batching cannot compensate for the quadratic number of messages).” [10]
Such complexity represents a barrier to adoption of BFT techniques because it requires a system designer to choose the right technique for a workload and then for the workload not to deviate from expectations.
As Table 1 indicates, Zyzzyva simplifies the design space of BFT replicated services by approaching the lower bounds in almost every key metric.
With respect to replication cost, Zyzzyva and PBFT match the lower bound both with respect to the total number of replicas that participate in the protocol and the number of replicas that must hold copies of application state and execute application requests. Both protocols hold cost advantages of 1.5–2.5 over Q/U and 1.0–1.5 over HQ depending on the number of faults to be tolerated and the relative cost of application vs. agreement node replication.

With respect to throughput, both Zyzzyva and PBFT use batching when load is high and thereby approach the lower bound on the number of authentication operations performed at the bottleneck node, and Zyzzyva approaches this bound more rapidly than PBFT. Q/U and HQ’s inability to support batching increases the work done at the bottleneck node by factors approaching 5 and 4, respectively, when one fault is tolerated and by higher factors in systems that tolerate more faults.
With respect to latency, Zyzzyva executes requests in three one-way message delays, which matches the accepted lower bound in the distributed systems literature for agreeing on a client request [11, 21, 25] and improves upon both PBFT and HQ. Q/U sidesteps this lower bound by providing a service that is slightly weaker than state machine replication (i.e., it does not put a total order on all requests) and by optimizing for cases without concurrent access to any state. This difference presents a chink in Zyzzyva’s armor, which Zyzzyva minimizes by matching the lower bound on message delays for full consensus. We believe that Zyzzyva’s other advantages over Q/U—fewer replicas, improved throughput via batching, simpler state machine replication semantics, ability to support high-contention workloads—justify this “extra” latency.
With respect to fault scalability [3], the metrics that depend on f grow as slowly or more slowly in Zyzzyva as in any other protocol.
Note that as is customary [3, 9, 10, 33, 41], Table 1 compares the protocols’ performance during the expected common case of fault-free, timeout-free execution. All of the protocols are guaranteed to operate correctly in the presence of up to f faults and arbitrary delays, but all of these protocols can pay significantly higher overheads and latencies in such scenarios. In §5.4, we consider the susceptibility of these protocols to faults and argue that Zyzzyva remains the most attractive choice.
2. SYSTEM MODEL
We assume the Byzantine failure model where faulty nodes (replicas or clients) may behave arbitrarily. We assume a strong adversary that can coordinate faulty nodes to compromise the replicated service. We do, however, assume the adversary cannot break cryptographic techniques like collision-resistant hashes, encryption, and signatures. In the public-key version of our protocol, we denote a message X signed by principal Y's public key as ⟨X⟩σY. Our system ensures its safety and liveness properties if at most f replicas are faulty. We assume a finite client population, any number of which may be faulty.
Our system’s safety properties hold in any asynchronous distributed system where nodes are connected by a network that may fail to deliver messages, corrupt them, delay them, or deliver them out of order. Liveness, however, is ensured only during intervals in which messages sent to correct nodes are processed within some fixed (but potentially unknown) worst case delay from when they are sent.
Our system implements a BFT service using state machine replication [9, 18, 35]. Traditional state machine replication techniques can be applied only to deterministic services. We cope with the non-determinism present in many real-world applications (such as file systems [26] and databases [38]) by abstracting the observable application state at the replicas and using the agreement stage to resolve divergences [33].
Services limit the damage done by Byzantine clients by authenticating clients, enforcing access control to deny clients access to objects they do not have a right to, and (optionally) by maintaining multiple versions of shared data (e.g., snapshots in a file system [34, 16]) so that data can be recovered from older versions if a faulty client destroys data [14].
3. PROTOCOL
Zyzzyva is a state machine replication protocol based on three sub-protocols: (1) agreement, (2) view change, and (3) checkpoint. The agreement protocol orders requests for execution by the replicas. The view change protocol coordinates the election of a new primary when the current primary is faulty or the system is running slowly. The checkpoint protocol limits the state that must be stored by replicas and reduces the cost of performing view changes.
Principles and Challenges.
Zyzzyva focuses on safety properties as they are observed by the client. In Zyzzyva, replicas can become temporarily inconsistent with one another, but clients detect inconsistencies, drive replicas to converge on a single total ordering of requests, and only rely on responses that are consistent with this total order.
Given the duties BFT replication protocols already place on clients [3, 9, 10, 22, 33, 41], it is not a large step to fully move the output commit to the client, but this small step pays big dividends. First, Zyzzyva leverages speculative execution—replicas execute a request before its order is fully established. Second, Zyzzyva leverages fast agreement protocols [11, 21, 25] to establish a request ordering in as few as three message delays. Third, the agreement sub-protocol stops working on a request once a client knows the request's order, thereby avoiding work that would otherwise be needed to establish this knowledge at the replicas.
These choices lead to two key challenges in designing Zyzzyva. First, we must specify carefully the conditions under which a request completes at a client and define agreement, checkpoint, and view change sub-protocols to retain the abstraction that requests execute on a single, correct state machine. Intuitively, a request completes when a correct client may safely act on the reply to that request. To help a client determine when it is appropriate to act on a reply, Zyzzyva appends history information to the replies received by a client so that the client can judge whether the replies are based on the same ordering of requests. Zyzzyva ensures the following safety condition:

[Figure 1: Protocol communication pattern within a view for (a) gracious execution and (b) faulty replica cases. The numbers refer to the main steps of the protocol numbered in the text.]
saf: If a request with sequence number n and history hn completes, then any request that completes with a higher sequence number n′ ≥ n has a history hn′ that includes hn as a prefix.
Second, the view change sub-protocol must ensure liveness despite an agreement sub-protocol that never requires more than two phases to complete during a view. We shift work from the agreement sub-protocol to the view change sub-protocol by introducing a new “I hate the primary” phase that guarantees that a correct replica only abandons the current view if it can ensure that all other correct replicas will join the mutiny. Zyzzyva ensures the following liveness condition under eventual synchrony2 [12]:
liv: Any request issued by a correct client eventually completes.
Protocol Overview.
Zyzzyva is executed by 3f + 1 replicas, and execution is organized into a sequence of views. Within a view, a single replica is designated as the primary responsible for leading the agreement sub-protocol.
Figure 1 shows the communication pattern for a single instance of our client-centric fast agreement sub-protocol. A client sends a request to the primary, the primary forwards the request to the replicas, and the replicas execute the request and send their responses to the client. A request completes at a client in one of two ways. First, if the client receives 3f + 1 mutually-consistent responses (including an application-level reply and the history on which it depends), then the client considers the request complete and acts on it. Second, if the client receives between 2f + 1 and 3f mutually-consistent responses, then the client gathers 2f + 1 matching responses into a commit certificate and distributes it to the replicas. Once 2f + 1 replicas acknowledge receiving a
2In practice eventual synchrony can be achieved by using exponentially increasing timeouts [9].

commit certificate, the client considers the request complete and acts on the corresponding reply.
If a sufficient number of replicas suspect that the current primary is faulty, then a view change occurs and a new primary is elected.
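To make the two completion paths concrete, the following minimal Python sketch (an illustration, not the paper's implementation) shows the client-side decision rule, assuming responses have already been verified and grouped into mutually-consistent sets:

    def request_complete(matching_responses, local_commits, f):
        # Path 1 (step 4a): 3f+1 mutually-consistent speculative responses.
        if len(matching_responses) >= 3 * f + 1:
            return True
        # Path 2 (step 4b.2): 2f+1 local-commit acknowledgements for the
        # commit certificate the client distributed.
        if len(local_commits) >= 2 * f + 1:
            return True
        return False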
In the rest of this section, we describe the basic protocol and outline the proof of its correctness [17]. In §4 we describe a number of optimizations, all implemented in our prototype, that reduce encryption costs by replacing public key signatures with message authentication codes (MACs), improve throughput by batching requests, reduce the impact of lost messages by caching out-of-order messages, improve read performance by optimizing read-only requests, reduce bandwidth by having most replicas send hashes rather than full replies, reduce overheads by including MACs only for a preferred quorum, and improve performance in the presence of faulty nodes by including additional witness replicas.
In §4.1 we discuss Zyzzyva5, a variation of the protocol that requires 5f + 1 agreement replicas but that completes in three one-way message exchanges as in Figure 1(a) even when up to f non-primary replicas are faulty.
3.1 Node State and Checkpoint Protocol
To ground our discussion in definite terms, we begin by discussing the state maintained by each replica as summarized by Figure 2. Each replica i maintains an ordered history of the requests it has executed and a copy of the max commit certificate, the commit certificate (defined below) seen by i that covers the largest prefix of i’s stored history. The history up to and including the request with the highest sequence number covered by this commit certificate is the committed history, and the history that follows is the speculative history. We say that a commit certificate has sequence number n if n is the highest sequence number of any request in the committed history.
A replica constructs a checkpoint every CP_INTERVAL requests. A replica maintains one stable checkpoint and a corresponding stable application state snapshot, and it may store up to one tentative checkpoint and corresponding tentative application state snapshot. The process by which a tentative checkpoint and application state become committed is similar to the one used by earlier BFT protocols [9, 10, 18, 33, 41], so we defer a detailed discussion to our extended technical report [17]. However, to summarize briefly: when a correct replica generates a tentative checkpoint, it sends a signed checkpoint message to all replicas. The message includes the highest sequence number of any request included in the checkpoint and a digest of the corresponding tentative checkpoint and application snapshot. A correct Zyzzyva replica considers the checkpoint and corresponding application snapshot stable when it collects f + 1 matching checkpoint messages signed by different replicas.
To bound the size of the history, a replica (1) truncates the history before the committed checkpoint and (2) blocks processing of new requests after processing 2 × CP_INTERVAL requests since the last committed checkpoint.
Finally, each replica maintains a response cache containing a copy of the latest ordered request from, and corresponding response to, each client.
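As an illustration of the stability rule above, the sketch below (with illustrative message fields; the actual protocol exchanges signed checkpoint messages with richer contents) counts matching checkpoint messages from distinct replicas until the f + 1 threshold is reached:

    from collections import defaultdict

    def stable_checkpoint(checkpoint_msgs, f):
        # checkpoint_msgs: verified (replica_id, seq, digest) tuples, one per
        # received checkpoint message
        votes = defaultdict(set)
        for replica_id, seq, digest in checkpoint_msgs:
            votes[(seq, digest)].add(replica_id)   # count distinct signers
        # stable once some (seq, digest) pair has f+1 matching supporters
        return any(len(signers) >= f + 1 for signers in votes.values())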
3.2 Agreement Protocol
Figure 1 illustrates the basic flow of the agreement sub-protocol during a view.

Label   Meaning
c       Client ID
CC      Commit certificate
d       Digest of client request message, d = H(m)
i, j    Server IDs
hn      History through sequence number n, hn = H(hn−1, d)
m       Message containing client request
maxn    Max sequence number accepted by replica
n       Sequence number
o       Operation requested by client
OR      Order Request message
POM     Proof Of Misbehavior
r       Application reply to a client operation
t       Timestamp assigned to an operation by a client
v       View number

Table 2: Labels given to fields in messages.
Because replicas execute requests speculatively in the order proposed by the primary without communicating with other replicas, the key challenge is ensuring that clients only act upon replies that correspond to stable requests executed in a total order that is guaranteed to eventually commit at all correct servers. The protocol is constructed so that a request completes at a client when the client receives 3f + 1 matching responses or acknowledgements from 2f + 1 replicas that they have received a commit certificate comprising a local commit from 2f + 1 replicas. Either of these conditions serves to prove that the request will eventually be committed at all correct replicas with the same sequence number and history of preceding requests observed by the client.
To describe how the system deals with this and other challenging, but standard, issues—lost messages, faulty primary, faulty clients, etc.—we follow a request through the system, defining the rules a server uses to process each message. The numbers in Figure 1 correspond to numbers in the text identifying major steps in the protocol and Table 2 summarizes the labels we give fields in messages. Most readers will be happier if on their first reading they skip the text marked Additional Pedantic Details.
1. Client sends request to the primary.
A client c requests an operation o be performed by the replicated service by sending a message ⟨request, o, t, c⟩σc to the replica it believes to be the primary (i.e., the primary for the last response the client received).
Additional Pedantic Details: If the client guesses the wrong primary, the retransmission mechanism discussed in step 4c below forwards the request to the current primary. The client's timestamp t is included to ensure exactly-once semantics of execution of requests.
2. Primary receives request, assigns sequence number, and forwards ordered request to replicas.
When the primary p receives message m = ⟨request, o, t, c⟩σc from client c, the primary assigns a sequence number n in view v to the request and relays a message ⟨⟨order-req, v, n, hn, d, ND⟩σp, m⟩ to the backup replicas, where v indicates the view in which the message is being sent, n is the proposed sequence number for m, d = H(m) is the digest of m, hn = H(hn−1, d) is a digest summarizing the history, and ND is a set of values for non-deterministic application variables (time in file systems, locks in databases, etc.) required for execution.

[Figure 2: State maintained at each replica: the garbage-collected, committed, and speculative portions of the history; the committed and tentative checkpoints and application snapshots; the max commit certificate; and the per-client response cache.]
Additional Pedantic Details: The primary only takes the above actions if t > tc where tc is the highest timestamp previously received from c.
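The history-digest chaining at the heart of step 2 can be sketched as follows; H is instantiated with SHA-256 purely for concreteness (the prototype uses MD5 and AdHash), and the primary's signature is omitted:

    import hashlib

    def H(*parts):
        # stand-in for the protocol's hash; parts are bytes
        h = hashlib.sha256()
        for p in parts:
            h.update(p)
        return h.digest()

    def make_order_req(view, prev_hist, next_seq, request, nd):
        d = H(request)            # digest of the client request m
        hn = H(prev_hist, d)      # chained history digest h_n = H(h_{n-1}, d)
        return {"type": "order-req", "v": view, "n": next_seq,
                "hn": hn, "d": d, "ND": nd}, hn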
3. Replica receives ordered request, speculatively executes it, and responds to the client.
Upon receipt of a message ⟨⟨order-req, v, n, hn, d, ND⟩σp, m⟩ from the primary p, replica i accepts the ordered request if m is a well-formed request message, d is a correct digest of m, n = maxn + 1 where maxn is the largest sequence number in i's history, and hn = H(hn−1, d). Upon accepting the message, i appends the ordered request to its history, executes the request using the current application state to produce a reply r, and sends to c a message
⟨⟨spec-response, v, n, hn, H(r), c, t⟩σi, i, r, OR⟩, where OR = ⟨order-req, v, n, hn, d, ND⟩σp.
Additional Pedantic Details: A replica may only accept and speculatively execute requests in sequence-number order, but message loss or a faulty primary can introduce holes in the sequence number space. Replica i discards the order-req message if n ≤ maxn. If n > maxn + 1, then i discards the message, sends a message ⟨fill-hole, v, maxn + 1, n, i⟩σi to the primary, and starts a timer. Upon receipt of a message ⟨fill-hole, v, k, n, i⟩σi from replica i, the primary p sends an ⟨⟨order-req, v, n′, hn′, d, ND⟩σp, m′⟩ to i for each request m′ that p ordered in k ≤ n′ ≤ n during the current view; the primary ignores fill-hole requests from other views. If i receives the valid order-req messages needed to fill the holes, it cancels the timer. Otherwise the replica broadcasts the fill-hole message to all other replicas and initiates a view change when the timer fires. Any replica j that receives a fill-hole message from i sends the corresponding order-req message, if it has received one. If, in the process of filling in holes in the replica sequence, replica i receives conflicting order-req messages, then the conflicting messages form a proof of misbehavior as described in protocol step 4d.
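A condensed sketch of the replica's accept test in step 3, reusing H from the sketch above (replica state and field names are illustrative, and signature verification is omitted):

    def accept_order_req(replica, msg, request):
        # accept only the next in-order request whose digests chain correctly
        if msg["n"] != replica.max_n + 1:
            return False             # hole or duplicate: see fill-hole handling
        if msg["d"] != H(request):
            return False             # digest does not match the request body
        if msg["hn"] != H(replica.history_digest, msg["d"]):
            return False             # history chain broken
        replica.max_n += 1
        replica.history_digest = msg["hn"]   # speculatively extend the history
        return True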
4. Client gathers speculative responses.

The client receives messages ⟨⟨spec-response, v, n, hn, H(r), c, t⟩σi, i, r, OR⟩, where i identifies the replica issuing the response, from the replicas. spec-response messages from distinct replicas match if they have identical v, n, hn, H(r), c, t, and r fields. There are four cases to consider. The first three handle varying numbers of matching speculative replies without considering the OR field, while the last considers only the OR field.
4a. Client receives 3f + 1 matching responses and completes the request.
In the absence of faults, the client receives matching spec-response messages from all 3f + 1 replicas. The client then considers the request and its history to be complete and delivers the reply r to the application. Zyzzyva guarantees that even if there is a view change, all correct replicas will always execute this request at this point in their history to produce this response. Notice that although the client has a proof that the request's place in history is irrevocably set, no server has such a proof. Indeed, a server at this point cannot determine whether a request has completed in its final order or a roll-back of the server's state will be necessary because a faulty primary ordered the request inconsistently across replicas.
4b. Client receives between 2f + 1 and 3f matching responses, assembles a commit certificate, and transmits the commit certificate to the replicas.
If the network, primary, or some replicas are faulty, the client c may never receive responses from all 3f + 1 replicas. The client therefore sets a timer when it first issues a request: when this timer expires, if c has received matching speculative responses from between 2f + 1 and 3f replicas, then c sends a message ⟨commit, c, CC⟩σc, where CC is a commit certificate consisting of a list of 2f + 1 replicas, the replica-signed portions of the 2f + 1 matching spec-response messages from those replicas, and the corresponding 2f + 1 replica signatures.
Additional Pedantic Details: CC contains 2f + 1 signatures on the spec-response message and a list of 2f + 1 nodes, but, since all the responses received by c from replicas are identical, c only needs to include one replica-signed portion of the spec-response message. Also note that, for efficiency, CC does not include the body r of the reply but only the hash H(r).
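Assembly of the commit certificate in step 4b might then look as follows, with hypothetical field names for the signed portion and signatures:

    def make_commit_certificate(matching, f):
        # matching: verified spec-response messages that agree on
        # (v, n, hn, H(r), c, t); returns None until 2f+1 are available
        if len(matching) < 2 * f + 1:
            return None
        chosen = matching[:2 * f + 1]
        return {"replicas": [m["i"] for m in chosen],     # 2f+1 replica IDs
                "body": chosen[0]["signed_portion"],      # identical for all
                "sigs": [m["sig"] for m in chosen]}       # 2f+1 signatures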
4b.1. Replica receives a commit message from a client containing a commit certificate and acknowledges with a local-commit message.
When a replica i receives a message ⟨commit, c, CC⟩σc containing a valid commit certificate CC proving that a request should be executed with a specified sequence number and history in the current view, the replica first ensures that its local history is consistent with the one certified by CC. If so, replica i (1) updates its max commit certificate state if this certificate's sequence number is higher than the stored certificate's sequence number and (2) sends a message ⟨local-commit, v, d, h, i, c⟩σi to c.
Additional Pedantic Details: If the local history simply has holes encompassed by CC's history, then i fills them as described in step 3. If, however, the two histories contain different requests for the same sequence number, then i initiates the view change protocol.
4b.2. Client receives local-commit messages from 2f + 1 replicas and completes the request.
The client resends the commit message until it receives corresponding local-commit messages from 2f + 1 distinct replicas. The client then considers the request and its history to be complete and delivers the reply r to the application. The system guarantees that even if there is a view change, all correct replicas will always execute this request at this point in their history to produce this response.
Additional Pedantic Details: When the client first sends the commit message to the replicas it starts a timer. If this timer expires before the client receives 2f + 1 local-commit messages, then the client moves on to protocol step 4c described below.
4c. Client receives fewer than 2f + 1 matching spec-response messages and resends its request to all replicas, which forward the request to the primary in order to ensure the request is assigned a sequence number and eventually executed.
Client. If the network or primary is faulty, the client c may never receive matching spec-response messages from 2f + 1 replicas. The client therefore sets a second timer when it first issues a request and resends the request message to all replicas when the second timer expires. It then resets its timers and continues gathering speculative responses.
Replica. When non-primary replica i receives a message ⟨request, o, t, c⟩σc from client c, there are two possible actions for i to take. If the request matches or has a lower client-supplied timestamp than the currently cached request for client c, then i resends the cached response to c. If instead the request has a higher timestamp than the currently cached response, then i sends a message ⟨confirm-req, v, m, i⟩σi, where m = ⟨request, o, t, c⟩σc, to the primary p and starts a timer. If the replica accepts an order-req message for this request before the timeout, it processes the order-req message as described above. If the timer fires before the primary orders the request, the replica initiates a view change.

Primary. Upon receiving the confirm request message ⟨confirm-req, v, m, i⟩σi from replica i, the primary p checks the client's timestamp for the request. If the request is new, p sends a new order-req message using the next sequence number to order as described in step 2; otherwise, p sends to i the cached order-req message for the most recent request from c.
Additional Pedantic Details: If replica i has received a commit certificate or stable checkpoint for a subsequent request, then the replica sends a local-commit to the client even if the client has not received a commit certificate for the retransmitted request. Additionally, if replica i does not receive the order-req message from the primary, the replica sends the confirm-req message to all other replicas. Upon receipt of a confirm-req message from another replica j, replica i sends the order-req message it received from the primary to j; if i did not receive the request from the client, i acts as if the request came from the client itself.
4d. Client receives responses indicating inconsistent ordering by the primary and sends a proof of misbehavior to the replicas, which initiate a view change to oust the faulty primary.
If client c receives a pair of spec-response messages containing valid messages OR = ⟨order-req, v, n, hn, d, ND⟩σj for the same request (d = H(m)) in the same view v with differing sequence number n or history hn, then the pair of order-req messages constitutes a proof of misbehavior (POM) against the primary. Upon receipt of a POM, c sends a message ⟨pom, v, POM⟩σc to all replicas. Upon receipt of a valid pom message, a replica initiates a view change and forwards the pom message to all other replicas.
Note that cases 4b and 4c are not exclusive of 4d; a client may receive messages sufficient to complete a request or form a commit certificate and also a proof of misbehavior against the primary.
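The POM check in step 4d reduces to spotting two valid order-req messages that agree on the view and request digest but disagree on sequence number or history; a sketch under assumed field names:

    def find_pom(order_reqs):
        # order_reqs: verified OR fields extracted from spec-response messages
        seen = {}
        for orq in order_reqs:
            key = (orq["v"], orq["d"])            # same view, same request
            prev = seen.setdefault(key, orq)
            if (prev["n"], prev["hn"]) != (orq["n"], orq["hn"]):
                return prev, orq                  # conflicting pair: a POM
        return None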
3.3 View Changes
Fast agreement and speculative execution have profound effects on Zyzzyva’s view change sub-protocol. In this section we highlight the differences between the Zyzzyva view change sub-protocol and that of previous systems. For completeness we include the full view change sub-protocol in the appendix.
The view change sub-protocol must elect a new primary and guarantee that it will not introduce any changes in a history that has already completed at a correct client. To maintain this safety property, traditional view change sub-protocols [9, 10, 18, 33, 41] require a correct replica that commits to a view change to stop accepting messages other than checkpoint, view-change, and new-view messages. Also, to prevent faulty replicas from disrupting the system, a view change sub-protocol should never remove a primary unless at least one correct replica commits to the view change. Hence, a correct replica traditionally commits to a view change if either (a) it observes the primary to be faulty or (b) it has a proof that f + 1 replicas have committed to a view change. On committing to a view change a correct replica sends a signed view-change message that includes the new view, the sequence number of the replica's latest stable checkpoint (together with a proof of its stability), and the set of prepare certificates—the equivalent of commit certificates in Zyzzyva—collected by the replica.

The traditional view change completes when the new primary, using 2f + 1 view-change messages from distinct replicas, computes the history of requests that all correct replicas must adopt to enter the new view. The new primary includes this history, with a proof of validity, in a signed new-view message that it broadcasts to all replicas.
Zyzzyva maintains the overall structure of the traditional protocol, but it departs in two significant ways that together allow clients to accept a response before any replicas know that the request has been committed and allow the replicas to commit to a response after two phases instead of the traditional three.
1. First, to ensure liveness, Zyzzyva strengthens the condition under which a correct replica commits to a view change by adding a new “I hate the primary” phase to the view change sub-protocol. We explain the need for and details of this addition below by considering The Case of the Missing Phase.
2. Second, to guarantee safety, Zyzzyva weakens the condition under which a request appears in the history included in the new-view message. We explain the need for and details of this change below by considering The Case of the Uncommitted Request.
3.3.1 The Case of the Missing Phase
As Figure 1 shows, Zyzzyva’s agreement protocol guarantees that every request that completes within a view does so after at most two phases. This property may appear surprising to the reader familiar with PBFT. If we view a correct client that executes step 4b of Zyzzyva as implementing a broadcast channel between replicas, then Zyzzyva’s communication pattern maps to only two of PBFT’s three phases, one where communication is primary-to-replicas (pre-prepare) and the second involving all-to-all exchanges (either prepare or commit). Where did the third phase go? And why is it there in the first place?
The answer to the second question lies in the subtle dependencies between the agreement and view change sub-protocols. No replicated service that uses the traditional view change protocol can be live without an agreement protocol that includes both the prepare and commit full exchanges.3 To see how this constraint applies to Zyzzyva, consider a scenario with f faulty replicas, one of them the primary, and suppose the faulty primary causes f correct replicas to commit to a view change and stop sending messages in the view. In this situation, a client request may only receive f + 1 responses from the remaining correct replicas, not enough for the request to complete in either the first or second phase—and, because fewer than f + 1 replicas demand a view change, there is no opportunity to regain liveness by electing a new primary.
The third phase of traditional BFT agreement breaks this stalemate: by exchanging what they know, the remaining f + 1 correct replicas can either gather the evidence necessary to complete the request after receiving only f + 1 matching responses or determine that a view change is necessary.
Back to the first question: How does Zyzzyva avoid the third phase in the agreement sub-protocol? The insight is that what compromises liveness in the previous scenario is
3Unless a client can unilaterally initiate a view change. This option is unattractive when clients can be Byzantine.

that the traditional view change protocol lets correct replicas commit to a view change and become silent in a view without any guarantee that their action will lead to the view change. Instead, in Zyzzyva, a correct replica does not abandon view v unless it is guaranteed that every other correct replica will do the same, forcing a new view and a new primary.
To ensure this property, the Zyzzyva view change sub-protocol adds an additional phase to strengthen the conditions under which a replica stops participating in the current view. In particular, a correct replica i that suspects the primary of view v continues to participate in the view, but expresses its vote of no-confidence in the primary by multicasting to all replicas a message ⟨i-hate-the-primary, v⟩σi. If i receives f + 1 votes of no confidence in v's primary, then it commits to a view change: it becomes silent, and multicasts to all replicas a view-change message that contains a proof that f + 1 replicas have no confidence in the primary for view v. A correct replica that receives a valid view-change message joins in the mutiny and commits to the view change. As a result, Zyzzyva's view change protocol ensures that if a correct replica commits to a view change in view v, eventually all correct replicas will. In effect, Zyzzyva shifts the costs needed to deal with a faulty primary from the critical path (the agreement protocol) to the view change sub-protocol, which is run only when the primary is faulty.
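A sketch of the replica-side logic of this phase (broadcast_view_change is a hypothetical helper standing in for the multicast of the view-change message):

    def on_i_hate_the_primary(replica, view, accuser, f):
        replica.haters.setdefault(view, set()).add(accuser)
        if replica.view == view and len(replica.haters[view]) >= f + 1:
            # f+1 accusations guarantee at least one comes from a correct
            # replica; only now does this replica go silent and commit to
            # the view change
            replica.silent = True
            broadcast_view_change(replica, view, replica.haters[view])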
3.3.2 The Case of the Uncommitted Request
Zyzzyva replicas may never learn the outcome of the agreement protocol: only clients may know when a request has completed. How do Zyzzyva replicas identify a safe history prefix for a new view?
There are two ways in which a request r and its history may complete in Zyzzyva. Let us first consider the least problematic from the perspective of a view change: it occurs when r completes because a client receives 2f + 1 local-commit messages, implying that at least f + 1 correct replicas have stored a commit certificate for r. Traditional view change protocols already handle this case: the standard view-change message sent by a correct replica includes all commit certificates known to the replica since the latest stable checkpoint. The new primary includes in the new-view message all commit certificates that appear in any set of 2f + 1 view-change messages it receives: at least one of those view-change messages must contain a commit certificate for r.
The other case is more challenging: if r completes because the client receives 3f + 1 matching speculative responses, then no correct replica will have a commit certificate for r. We handle this case by modifying the view change sub-protocol in two ways. First, correct replicas add to the information included in their view-change message all order-req messages (without the corresponding client request) received since the latest stable checkpoint or commit certificate. Second, a correct new primary extends the history to be adopted in the new view to include all requests with an order-req message containing a sequence number higher than the largest sequence number in any commit certificate that appears in at least f + 1 of the 2f + 1 view-change messages the new primary collects.
This change weakens the conditions under which a request ordered in one view can appear in a new view: we no longer require a commit certificate but also allow a sufficient number of order-req messages to support a request's ordering. This change ensures that the protocol continues to honor ordering commitments for any request that completes when a client gathers 3f + 1 matching speculative responses.
Notice that this change may have the side effect of assigning an order to a request that has not yet completed in the previous view. In particular, a curiosity of the protocol is that, depending on which set of 2f + 1 view-change messages the primary uses, it may, for a given sequence number, find different requests with f + 1 order-req messages. This curiosity, however, is benign and cannot cause the system to violate safety. In particular, there can be two such candidate requests for the same sequence number only if at least one correct replica supports each of the candidates. In such a case, neither of the candidates could have completed by having a client receive 3f + 1 matching responses, and the system can safely assign either (or neither) request to that sequence number.
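Under the assumption that each view-change message carries its highest commit-certificate sequence number (cc_seq here) and its order-req records, the new primary's history-extension rule might be sketched as:

    from collections import Counter

    def extend_history(view_changes, f):
        # highest sequence number covered by any reported commit certificate
        max_cc = max((vc["cc_seq"] for vc in view_changes), default=0)
        support = Counter()
        for vc in view_changes:                 # the 2f+1 view-change messages
            for orq in vc["order_reqs"]:
                if orq["n"] > max_cc:
                    support[(orq["n"], orq["d"])] += 1
        # adopt candidates backed by f+1 order-req messages; as noted above,
        # if two candidates tie for a sequence number either may be chosen
        return sorted(k for k, count in support.items() if count >= f + 1)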
3.4 Correctness
This section sketches the proof that Zyzzyva maintains properties saf and liv defined above. Full proofs are available in the extended technical report [17].
3.4.1 Safety
We first show that our agreement sub-protocol is safe within a single view and then show that the agreement and view change protocols together ensure safety across views.
Within a View.
The proof proceeds in two parts. First we show that no two requests complete with the same sequence number n. Second we show that hn is a prefix of hn′ for n < n′ and completed requests r and r′.
Part 1: A request completes when the client receives 3f + 1 matching spec-response messages in phase 1 or 2f + 1 matching local-commit messages in phase 2. If a request completes in phase 1 with sequence number n, then no other request can complete with sequence number n because correct replicas (a) send only one speculative response for a given sequence number and (b) send a local-commit message only after seeing 2f + 1 matching spec-response messages. Similarly, if a request completes with sequence number n in phase 2, no other request can complete since correct replicas only send one local-commit message for sequence number n.
Part 2: For any two requests r and r′ that complete with sequence numbers n and n′ and histories hn and hn′ respectively, there are at least 2f + 1 replicas that ordered each request. Because there are only 3f + 1 replicas in total, the two quorums of 2f + 1 replicas intersect in at least f + 1 replicas, so at least one correct replica ordered both r and r′. If n < n′, it follows that hn is a prefix of hn′.
Across Views.
We show that any request that completes based on responses sent in view v < v′ is contained in the history specified by the new-view message for view v′. Recall that requests complete either when a correct client receives 3f + 1 matching speculative responses or 2f + 1 matching local-commits.
If a request r completes with 2f + 1 matching local-commits, then at least f + 1 correct replicas have received a commit certificate for r (or for a subsequent request) and will send that commit certificate to the new primary in their view-change message. Because there are 3f + 1 replicas in the system and 2f + 1 view-change messages in a new-view message, at least one of those view-change messages must come from one of these f + 1 correct replicas, so that commit certificate will necessarily be included in the new-view message and r will be included in the history. Consider instead a request r that completes with 3f + 1 matching spec-response messages and does not complete with 2f + 1 matching local-commit messages. Every correct replica will include the order-req for r in its view-change message, ensuring that the request will be supported by at least f + 1 replicas in the set of 2f + 1 view-change messages collected by the primary of view v′ and therefore be part of the new-view message.
3.4.2 Liveness
Zyzzyva guarantees liveness only during periods of synchrony. To show that a request issued by a correct client eventually completes, we first show that if the primary is correct when a correct client issues the request, then the request completes. We then show that if a request from a correct client does not complete during the current view, then a view change occurs.
Part 1: If the client and primary are correct, then protocol steps 1 through 3 ensure that the client receives spec-response messages from all correct replicas. If the client receives 3f + 1 matching spec-response messages, the request completes—and so does our proof. A client that instead receives fewer than 3f + 1 such messages will receive at least 2f + 1 of them, since there are 3f + 1 replicas, at most f of which are faulty. This client then sends a commit message to all replicas (protocol step 4b). All correct replicas send a local-commit message to the client (protocol step 4b.1), and, because there are at least 2f + 1 correct replicas, the client's request completes in protocol step 4b.2.
Part 2: Assume the request from correct client c does not complete. By protocol step 4c, c resends the request message to all replicas when the request has not completed for a sufficiently long time. A correct replica, upon receiving the retransmitted request from c, contacts the primary for the corresponding order-req message. Any correct replica that does not receive the order-req message from the primary initiates the view change by sending an i-hate-the-primary message to all other replicas. Either at least one correct replica receives at least f + 1 i-hate-the-primary messages, or no correct replica receives at least f + 1 i-hate-the-primary messages. In the first case, the replicas commit to a view change—QED. In the second case, all correct replicas that did not receive the order-req message from the primary receive it from another replica. After receiving an order-req message, a correct replica sends a spec-response to c. Because all correct replicas send a spec-response message to c, c is guaranteed to receive at least 2f + 1 such messages. Note that c must receive fewer than 2f + 1 matching spec-response messages: otherwise, c would be able to form a commit message and complete the request, contradicting our initial assumption. If, however, c does not receive 2f + 1 matching spec-response messages, then c is able to form a pom message: c relays this message to the replicas, which in turn initiate and commit to a view change, completing the proof.
4. IMPLEMENTATION OPTIMIZATIONS
Our implementation includes several optimizations to improve performance and reduce system cost.

Replacing Signatures with MACs.
Like previous work [3, 9, 10, 18, 33, 41], we replace most signatures in Zyzzyva with MACs and authenticators in order to reduce the computational overhead of cryptographic operations. The only messages that retain signatures are client request retransmissions and the messages of the view change protocol. The technical changes to each sub-protocol required by replacing signatures with authenticators are described in [17]. The most noticeable difference in the agreement sub-protocol is the way Zyzzyva addresses the scenario in which replica i is unable to authenticate a client request; i cannot distinguish whether the fault lies with the primary or the client. Our procedure in this case is similar to a view change and results in correct replicas agreeing to accept the request or replace it with a no-op in the sequence. The checkpoint sub-protocol adds a third phase to ensure that stable checkpoints are consistent with requests that complete through speculative execution. Finally, the view change sub-protocol includes an additional phase for gathering checkpoint and commit certificate proofs as is done in PBFT [9].
Separating Agreement from Execution.
We separate agreement from execution [41] by requiring only 2f + 1 replicas to be execution replicas. The remaining replicas serve as witness replicas [23], aiding in the process of ordering requests but not replicating the application. Clients accept a history based on the agreement protocol described in the previous section with a slight modification: a pair of responses are considered to match even if the response r and response hash H(r) fields are not identical. A client acts on a reply only after receiving the appropriate number of matching responses and f + 1 matching application replies from execution replicas. One consequence of this optimization is that a client may have to wait until it has received more than 2f + 1 responses before it can act in the second phase. We gain further benefit by biasing the primary selection criteria so that witness replicas are chosen as the primary more frequently than execution replicas. This favoritism reduces processor contention at the primary and allows requests to be ordered and processed faster.
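A sketch of the relaxed matching rule, with illustrative field names: responses match on the reply digest, whether they carry the full reply r or only H(r), and the client additionally needs f + 1 full application replies from execution replicas before acting:

    def act_on_reply(responses, f, quorum_size):
        digests = {r["reply_digest"] for r in responses}
        full = [r for r in responses if r.get("reply") is not None]
        return (len(digests) == 1               # all responses agree on H(r)
                and len(responses) >= quorum_size
                and len(full) >= f + 1)         # f+1 full replies from execution replicas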
Request Batching.
We batch concurrent requests to reduce cryptographic and communication overheads like other agreement-based replicated services [9, 18, 33, 37, 41]. Batching requests amortizes the cost of replica operations across multiple requests and reduces the total number of operations per request. One key step in batching requests is having replicas compute a single history digest corresponding to the entries in the batch. This batch history is used in responses to all requests included in the batch. If the second phase completes for any request in the batch, the second phase is considered complete for all requests in the batch, and replicas respond to the retransmission of any requests in the batch with local-commit messages.
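The single batch history digest can be sketched as a fold of the chain rule from §3.2 over the batch (reusing the illustrative H from the earlier sketch):

    def batch_history(prev_hist, batch):
        hn = prev_hist
        for request in batch:         # requests in their order within the batch
            hn = H(hn, H(request))    # same chain rule, amortized over the batch
        return hn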
Caching Out of Order Requests.
The protocol described in §3.2 dictates that replicas discard order-req messages that are received out of order. We improve performance when the network delivers messages out of order by caching these requests until the appropriate sequence number is reached. Similarly, the view change sub-protocol can order additional requests that are not supported by f + 1 speculative responses.
Read-Only Optimization.
Like PBFT [9], we improve the performance of read-only requests that do not modify the system state. A client sends read-only requests directly to the replicas, which execute the requests immediately, without recording the request in the request history. As in PBFT, clients wait for 2f + 1 matching replies in order to complete read-only operations. In order for this optimization to function, we augment replies to read requests with a replica's maxn and maxCC. A client that receives 2f + 1 matching responses, including the maxn and maxCC fields, such that maxn = maxCC can accept the reply to the read. Furthermore, a client that receives 3f + 1 matching replies, even if the maxCC and maxn values are not consistent, can accept the reply to the read.
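The client's accept rule for reads then reduces to the following sketch, with max_n and max_cc as illustrative field names for a reply's maxn and maxCC values:

    def accept_read(replies, f):
        if len(replies) >= 3 * f + 1:
            return True               # unanimous matching replies are always safe
        if len(replies) >= 2 * f + 1:
            # safe only if no replica reports a speculative suffix
            return all(r["max_n"] == r["max_cc"] for r in replies)
        return False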
Single Execution Response.
The client specifies a single execution replica to respond with a full response while the other execution replicas send only a digest of the response. This optimization was introduced in PBFT [9] and saves network bandwidth proportional to the size of responses.
Preferred Quorums.
Q/U [3] and HQ [10] leverage preferred quorums to reduce the size of authenticators by optimistically including MACs for a subset of replicas rather than all replicas. We implement preferred quorums for the second phase; replicas authenticate speculative response messages for the client and a subset of 2f other replicas. Additionally, on the initial transmission, we allow the client to specify that replicas should authenticate speculative response messages to the client only. This optimization reduces the number of cryptographic operations performed by backup replicas to three while existing BFT systems [3, 9, 10, 18, 33, 41] require a linear number of cryptographic operations at each replica.
4.1 Making the Faulty Case Fast
We introduce a second protocol, Zyzzyva5, that uses 2f additional witness replicas (the number of execution replicas is unchanged at 2f + 1) for a total of 5f + 1 replicas. Increasing the number of replicas lets clients receive responses in three message delays even when f replicas are faulty [11, 21, 25]. Zyzzyva5 trades the number of replicas in the deployed system against performance in the presence of faults. Zyzzyva5 is identical to Zyzzyva with a simple modification: nodes wait for an additional f messages, i.e., if a node bases a decision on a set of 2f + 1 messages in Zyzzyva, the corresponding decision in Zyzzyva5 is based on a set of 3f + 1 messages. The exceptions to this rule are the “I hate the primary” phase of the view change protocol and the fill-hole and confirm-request sub-protocols that serve to prove that another correct replica has taken an action—these phases still require only f + 1 responses.
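One way to view the change is as a single quorum-size parameter, as in this sketch (phase names illustrative):

    def quorum(protocol, f, phase):
        # phases that only prove some correct replica acted stay at f+1
        if phase in ("i-hate-the-primary", "fill-hole", "confirm-req"):
            return f + 1
        return 2 * f + 1 if protocol == "zyzzyva" else 3 * f + 1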
5. EVALUATION
This section examines the performance characteristics of Zyzzyva and compares it with existing approaches. We run our experiments on 3.0 GHz Pentium-4 machines with the Linux 2.6 kernel. We use MD5 for MACs and AdHash [7] for incremental hashing. MD5 is known to be vulnerable, but we use it to make our results comparable with those in the literature. Since Zyzzyva uses fewer MACs per request than any of the competing algorithms, our advantages over other algorithms would be increased if we were to use the more secure, but more expensive, SHA-256.

[Figure 3: Realized throughput for the 0/0 benchmark as the number of clients varies for systems configured to tolerate f = 1 faults.]
For comparison, we run Castro et al.'s implementation of PBFT [9] and Cowling et al.'s implementation of HQ [10]; we scale up measured throughput for the small request/response benchmark by 9% [1] to account for their use of SHA-1 rather than MD5. We include published throughput measurements for Q/U [3]; we scale reported performance up by 7.5% to account for our use of 3.0 GHz rather than 2.8 GHz machines. We also compare against measurements of an unreplicated server.
Unless noted otherwise, in our experiments we use all of the optimizations described in §4 for Zyzzyva other than preferred quorums. PBFT [9] does not implement the preferred quorum optimization. We run HQ [10] with its preferred quorum optimization. We do not use the read-only optimization for Zyzzyva and PBFT unless we state so explicitly.
5.1 Throughput
To stress-test Zyzzyva we use the micro-benchmarks devised by Castro et al. [9]. In the 0/0 benchmark, a client sends a null request and receives a null reply. In the 4/0 benchmark, a client sends a 4KB request and receives a null reply. In the 0/4 benchmark, a client sends a null request and receives a 4KB reply.
Figure 3 shows the throughput achieved for the 0/0 benchmark by Zyzzyva, Zyzzyva5, PBFT, and HQ (scaled as noted above). For reference, we also show the peak throughput reported for Q/U [3] in the f = 1 configuration, scaled to our environment as described above. As the number of clients increases, Zyzzyva and Zyzzyva5 scale better than PBFT with and without batching. Without batching, Zyzzyva achieves a peak throughput that is 2.7 times higher than PBFT's due to PBFT's higher cryptographic overhead (PBFT performs about 2.2 times more crypto operations than Zyzzyva) and message overhead (PBFT sends and receives about 3.7 times more messages than Zyzzyva). When the batch size is increased to 10, Zyzzyva's and Zyzzyva5's peak throughputs increase to 86K ops/sec, suggesting that the protocol overhead at the primary is 12µs per batched request. With batching, PBFT's throughput increases to 59K ops/sec. The 45% difference between Zyzzyva's and PBFT's peak throughputs is largely accounted for by PBFT's higher cryptographic overhead (about 30%) and message overhead (about 30%) compared to Zyzzyva. Zyzzyva provides over 3 times the reported peak throughput of Q/U and over 9 times the measured throughput of HQ. This difference stems from three sources. First, Zyzzyva requires fewer cryptographic operations per request compared to HQ and Q/U. Second, neither Q/U nor HQ is able to use batching to reduce cryptographic and message overheads. Third, Q/U and HQ do not take advantage of the Ethernet broadcast channel to speed up the one-to-all communication steps.
Overall, the peak throughput achieved by Zyzzyva is within 35% of that of an unreplicated server that simply replies to client requests over an authenticated channel. Note that as application-level request processing increases, the protocol overhead will fall.
5.2 Latency
[Figure 4: Latency for 0/0, 0/4, and 4/0 benchmarks for systems configured to tolerate f = 1 faults.]

Figure 4 shows the latencies of Zyzzyva, Zyzzyva5, Q/U, and PBFT for the 0/0, 0/4, and 4/0 microbenchmarks. For Q/U, which can complete in fewer message delays than Zyzzyva during contention-free periods, we use a simple best-case implementation of Q/U with preferred quorums in which a client simply generates and sends 4f + 1 MACs with a request, each replica verifies 4f + 1 MACs (1 to authenticate the client and 4f to validate the OHS state), each replica generates and sends 4f + 1 MACs (1 to authenticate the reply to the client and 4f to authenticate OHS state) with a reply to the client, and the client verifies 4f + 1 MACs. We examine both the default read/write requests that use the full protocol and read-only requests that exploit the read-only optimization.
Zyzzyva uses fast agreement to drive its latency near the optimal for an agreement protocol—3 one-way message delays [11, 21, 25]. The experimental results in Figure 4 show that Zyzzyva and Zyzzyva5 achieve significantly lower latency than the other agreement-based protocols, PBFT and HQ. As expected, Q/U's avoidance of serialization gives it even better latency in low-contention workloads such as the one examined here, though Zyzzyva and PBFT can match Q/U's latency for read-only requests when the read-only optimization applies.
