Scale horizontally with performance standby nodes
Enterprise Only
Performance standby nodes require a Vault Enterprise Premium license.
The Vault High Availability tutorial explained that only one Vault server in a cluster is active and handles all requests (reads and writes). The remaining servers are standby nodes that simply forward requests to the active node.
If you are running Vault Enterprise 0.11 or later, those standby nodes can handle most read-only requests and are referred to as performance standby nodes. Performance standby nodes are designed to provide horizontal scalability for read requests within a single Vault cluster. For example, performance standbys can handle encryption and decryption of data using transit keys, GET requests for key/value secrets, and other requests that do not change the underlying storage. This can provide considerable improvements in throughput for traffic of this type, with aggregate performance increasing roughly linearly with the number of performance standby nodes deployed in the cluster.
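For example, both of the following requests can be served locally by a performance standby because neither modifies storage. This is a minimal sketch; the transit key name (my-key), the KV v2 mount (secret/), and the path my-app/config are illustrative assumptions.
$ vault write transit/decrypt/my-key ciphertext="vault:v1:<ciphertext from a previous encrypt>"
$ vault kv get secret/my-app/config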
Server Configuration
Performance standbys are enabled by default when the Vault Enterprise license includes this feature. If you wish to disable performance standbys, you can do so by setting the disable_performance_standby flag to true.
Since any node in a cluster can be elected as the active node, it is recommended to keep this setting consistent across all nodes in the cluster.
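For reference, a minimal server configuration sketch with performance standbys disabled might look like the following. Only disable_performance_standby is relevant here; the storage, listener, and address values are illustrative assumptions.
# Illustrative Vault Enterprise server configuration
disable_performance_standby = true

storage "raft" {
  path    = "/opt/vault/data"
  node_id = "node-1"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/opt/vault/tls/vault.crt"
  tls_key_file  = "/opt/vault/tls/vault.key"
}

api_addr     = "https://vault-1.example.com:8200"
cluster_addr = "https://vault-1.example.com:8201"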
Warning
Consider a scenario where a node with performance standbys disabled becomes the active node. The performance standby feature then becomes disabled for the entire cluster, even though it is enabled on the other nodes.
Enterprise Cluster
A highly available Vault Enterprise cluster consists of multiple servers, only one of which is the active node. The rest can serve as performance standby nodes, handling read-only requests locally.
Note
As of Vault 1.2.3, Vault Enterprise with Multi-Datacenter & Scale module comes with unlimited performance standby nodes.
Integrated Storage
When using Vault Integrated Storage to persist Vault data, you can add non-voter nodes to scale read-only workloads without worrying about the quorum count. (Non-voting nodes are an Enterprise-only feature of Integrated Storage.)
$ vault operator raft join -non-voter <active_node_api_address>
A non-voting node has all Vault data replicated just like other voting nodes. The difference is that the non-voting node does not contribute to the quorum count.
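You can verify which nodes are non-voters by listing the raft peers; the Voter column reports false for non-voting nodes. The node names and addresses below are illustrative.
$ vault operator raft list-peers

Node       Address              State       Voter
----       -------              -----       -----
vault_1    10.0.101.22:8201     leader      true
vault_2    10.0.101.23:8201     follower    true
vault_3    10.0.101.24:8201     follower    true
vault_4    10.0.101.25:8201     follower    false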
A quorum is a majority of members from a peer set: for a set of size n, quorum requires at least (n/2)+1 members. For example, if there are 5 voting nodes in a cluster, it requires 3 nodes to form a quorum. If a quorum of nodes is unavailable for any reason, the cluster becomes unavailable and no new logs can be committed. See the deployment table.
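For reference, the quorum formula works out as follows for common cluster sizes (non-voting nodes are not counted toward either column).

Voting servers   Quorum   Failure tolerance
1                1        0
3                2        1
5                3        2
7                4        3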
Eventual consistency
When using Integrated Storage as the storage backend, only the leader node can write to Vault storage; it then replicates the change to the follower nodes. Since data replication is an asynchronous operation, Vault is eventually consistent.
While performance standby nodes scale read operations, Vault Enterprise Performance Replication scales write operations. Consider the following event flows.
Performance standby nodes
Within a Vault cluster, only the active node handles write operations. For clusters using Integrated Storage, newly written data is not available on a performance standby node until the data has been replicated to that node.
When using Consul as the storage backend, this is mitigated by Consul's default consistency model.
Performance Replication
When a client writes a secret to a shared mount via the performance secondary cluster, the secret will be forwarded to the primary cluster. A subsequent read of the secret on the secondary cluster might not show the effect of the write due to replication lag.
Client controlled consistency
If you are running Vault 1.7 or later, all requests that modify data return the X-Vault-Index response header. To ensure that the state resulting from that write request is visible to a subsequent request, you can add the returned index header to the subsequent request.
X-Vault-Index: <base64 value taken from previous response>
X-Vault-Inconsistent: forward-active-node
The node that receives the request looks at the state it has locally; if it does not contain the state described by the X-Vault-Index header, the X-Vault-Inconsistent: forward-active-node header instructs the node to forward the request to the active node.
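The following curl sketch shows the round trip against the KV version 2 API; the mount (secret/), the path (my-app/config), and the payload are illustrative, and VAULT_ADDR and VAULT_TOKEN are assumed to be set.
# Write a secret; with --include, the response headers contain an X-Vault-Index value
$ curl --silent --include --header "X-Vault-Token: $VAULT_TOKEN" \
    --request POST --data '{"data": {"api_key": "abc123"}}' \
    $VAULT_ADDR/v1/secret/data/my-app/config

# Read it back, passing the index so the standby either has the state or forwards the request
$ curl --silent --header "X-Vault-Token: $VAULT_TOKEN" \
    --header "X-Vault-Index: <base64 value taken from previous response>" \
    --header "X-Vault-Inconsistent: forward-active-node" \
    $VAULT_ADDR/v1/secret/data/my-app/config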
Drawback
If request forwarding to the active node happens often enough, the active node can become a bottleneck, limiting the horizontal read scalability that performance standbys are intended to provide.
Server-side consistent token
As of Vault 1.10, the token format has changed and service tokens employ server-side consistency. Requests made to nodes that cannot support read-after-write consistency, because they do not yet have the write-ahead log (WAL) index needed to validate the Vault token locally, yield a 412 status code. Vault clients automatically retry the request when they receive the 412 status code. Unless there is considerable replication delay, Vault clients experience read-after-write consistency.
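As a sketch, the retry behavior of the Vault CLI and Go client can be tuned with the VAULT_MAX_RETRIES environment variable (the default is 2 retries); the value below is only an example.
$ export VAULT_MAX_RETRIES=4
$ vault kv get secret/my-app/config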
In the server configuration file, you can set the allow_forwarding_via_token parameter so that Vault forwards requests that would otherwise fail with a 412 (Precondition Failed) error to the active node.
replication {
  resolver_discover_servers     = true
  logshipper_buffer_length      = 1000
  logshipper_buffer_size        = "5gb"
  allow_forwarding_via_header   = false
  best_effort_wal_wait_duration = "2s"
  allow_forwarding_via_token    = "new_token"
}
Refer to the Vault Enterprise Eventual Consistency documentation for more details.