## Overview
This page contains the list of deprecations and important or breaking changes for Vault 1.4.0 compared to 1.3.x. Please read it carefully.
## Known issues

### Primary cluster address change
In Vault 1.4.0-1.4.3, a secondary cluster with a single `primary_cluster_addr` configured will obtain the address of the active node in the primary cluster via replication heartbeats from the primary cluster. If the `api_addr` and `cluster_addr` in the heartbeats from the primary cluster are not reachable from the secondary cluster, replication will not work. This situation can arise if, for example, `primary_cluster_addr` corresponds to a load balancer accessible from the secondary cluster, but the `api_addr` and `cluster_addr` on the primary cluster are only accessible from within the primary cluster.
In Vault 1.4.4, we will use the `primary_cluster_addr` if it has been set, instead of relying on the heartbeat information, but it's possible to encounter this issue in Vault 1.4.0-1.4.3.
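For context, `primary_cluster_addr` is typically set when enabling replication on the primary. The sketch below uses a hypothetical load balancer address; note that on 1.4.0-1.4.3 the heartbeat-advertised addresses still take precedence over this value, as described above:

```shell-session
# On the primary: enable replication and override the cluster address
# handed to secondaries. From 1.4.4, secondaries use this value when it
# is set; on 1.4.0-1.4.3 the heartbeat-advertised addresses win.
$ vault write sys/replication/performance/primary/enable \
    primary_cluster_addr="https://vault-primary-lb.example.com:8201"
```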
### The AWS auth engine
Users of the AWS auth engine should be cautious with this upgrade, because in 1.3.2 we began adding metadata to tokens issued with this method. While the metadata does help with tying tokens to a particular person or machine, it can also take a performance toll.

Whether there's a performance toll depends on if and how you've configured the `auth/aws/config/identity` endpoint. To determine whether you could be affected:
- Read your identity configuration (example output below):

  ```shell-session
  $ vault read auth/aws/config/identity
  ```

- Determine what Vault is using for identity (`role_id` if unconfigured).
- Determine what role type(s) you're using (`iam` and/or `ec2`).
- Consider the rate of change of the metadata fields for each role type.
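For reference, here is a sketch of what that read might return on a mount where identity has never been configured, so both alias types default to `role_id` (the exact output can vary by Vault version):

```shell-session
$ vault read auth/aws/config/identity
Key          Value
---          -----
ec2_alias    role_id
iam_alias    role_id
```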
Metadata fields for `iam` roles:

- `client_arn`
- `canonical_arn`
- `client_user_id`
- `auth_type`
- `inferred_entity_type`
- `inferred_entity_id`
- `inferred_aws_region`
- `account_id`
Metadata fields for `ec2` roles:
For example, if you use `role_id` for identity and only `iam` roles, and many machines use the same role, you would conclude that the `client_arn` for the machines logging in would have a high rate of change, and so you'd see a new storage write each time a new machine logged in under that role.

If you use `role_id` for identity and only `iam` roles, and only one long-lived machine uses the role, you would conclude that the `client_arn` for the machines logging in would have a low rate of change, unless you added the optional "role-session" to its ARN, in which case you could still see a higher rate of change.
However, if you had configured identity to use an `iam_alias` of the `full_arn`, or an `ec2_alias` of `instance_id`, you would be likely to see a lower rate of change for all fields.
For users seeing a performance issue, we recommend configuring one of the aliases above, or waiting for a patch release that provides greater flexibility around whether to use this functionality.
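As a sketch of the first option, the following switches the mount to per-machine aliases; the `auth/aws` mount path and the choice of both alias values are assumptions for illustration, and changing the alias scheme alters how future logins map to entities, so trial it outside production first:

```shell-session
# Use per-machine aliases so the metadata attached to each alias stays
# stable: the full ARN for iam logins, the instance ID for ec2 logins.
$ vault write auth/aws/config/identity \
    iam_alias=full_arn \
    ec2_alias=instance_id
```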
### The AWS STS region selection
The AWS client used in Vault was updated for improved STS performance in 1.3.2 and 1.4.0 (#8161). However, this introduced a side effect of limiting the regions selected for validation, with a greater possibility of encountering an "invalid security token" error. Users of the AWS auth engine should upgrade directly to the 1.4.1 release, where this side effect was fixed in #8679.
### LDAP auth engine and `upndomain`
Users of the LDAP auth engine with the `upndomain` configuration setting populated should hold off on upgrading to 1.4.x for now. We are investigating a regression introduced by #8333. There is no GitHub issue for this bug yet.
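To check whether a given mount is affected, you can read its configuration back; this sketch assumes the method is mounted at the default `ldap/` path:

```shell-session
# A non-empty value here means upndomain is populated and the mount is
# potentially affected by the regression above.
$ vault read -field=upndomain auth/ldap/config
```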
### Okta auth with > 200 groups
In 1.4.0, Vault started using the official Okta Go client library. Unlike the library Vault previously used, the official library doesn't automatically handle pagination when more than 200 groups are listed. If a user associated with more than 200 Okta groups logs in, Vault will only see 200 of them. The fix is #9580 and will eventually appear in 1.4.x and 1.5.x point releases.
### AWS instance metadata timeout
In 1.4.0, Vault started using an updated AWS Go SDK with support for v2 of the EC2 instance metadata service. However, due to the way the SDK was configured in Vault, there can be a delay of around 2 minutes when Vault relies on the instance metadata service for credentials. A fix that reduces the delay went into 1.5.5: #10133.
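If you suspect you are hitting this delay, one way to sanity-check from the Vault host is to query the instance metadata service directly; this assumes the host is an EC2 instance and that the v2 token handshake is where the stall occurs:

```shell-session
# Request an IMDSv2 session token. If this call hangs or times out,
# Vault's credential lookups via the metadata service will stall too.
$ curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"
```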