File: //snap/google-cloud-cli/394/help/man/man1/gcloud_beta_container_clusters_update.1
.TH "GCLOUD_BETA_CONTAINER_CLUSTERS_UPDATE" 1
.SH "NAME"
.HP
gcloud beta container clusters update \- update cluster settings for an existing container cluster
.SH "SYNOPSIS"
.HP
\f5gcloud beta container clusters update\fR \fINAME\fR (\fB\-\-anonymous\-authentication\-config\fR=\fIANONYMOUS_AUTHENTICATION_CONFIG\fR\ |\ \fB\-\-autopilot\-workload\-policies\fR=\fIWORKLOAD_POLICIES\fR\ |\ \fB\-\-autoprovisioning\-cgroup\-mode\fR=\fIAUTOPROVISIONING_CGROUP_MODE\fR\ |\ \fB\-\-autoprovisioning\-enable\-insecure\-kubelet\-readonly\-port\fR\ |\ \fB\-\-autoprovisioning\-network\-tags\fR=[\fITAGS\fR,...]\ |\ \fB\-\-autoprovisioning\-resource\-manager\-tags\fR=[\fIKEY\fR=\fIVALUE\fR,...]\ |\ \fB\-\-autoscaling\-profile\fR=\fIAUTOSCALING_PROFILE\fR\ |\ \fB\-\-complete\-credential\-rotation\fR\ |\ \fB\-\-complete\-ip\-rotation\fR\ |\ \fB\-\-containerd\-config\-from\-file\fR=\fIPATH_TO_FILE\fR\ |\ \fB\-\-database\-encryption\-key\fR=\fIDATABASE_ENCRYPTION_KEY\fR\ |\ \fB\-\-disable\-database\-encryption\fR\ |\ \fB\-\-disable\-default\-snat\fR\ |\ \fB\-\-disable\-workload\-identity\fR\ |\ \fB\-\-[no\-]enable\-autopilot\-compatibility\-auditing\fR\ |\ \fB\-\-enable\-autoscaling\fR\ |\ \fB\-\-[no\-]enable\-cilium\-clusterwide\-network\-policy\fR\ |\ \fB\-\-enable\-cost\-allocation\fR\ |\ \fB\-\-enable\-default\-compute\-class\fR\ |\ \fB\-\-enable\-fqdn\-network\-policy\fR\ |\ \fB\-\-enable\-gke\-oidc\fR\ |\ \fB\-\-enable\-identity\-service\fR\ |\ \fB\-\-enable\-image\-streaming\fR\ |\ \fB\-\-enable\-insecure\-kubelet\-readonly\-port\fR\ |\ \fB\-\-enable\-intra\-node\-visibility\fR\ |\ \fB\-\-enable\-kubernetes\-unstable\-apis\fR=\fIAPI\fR,[\fIAPI\fR,...]\ |\ \fB\-\-enable\-l4\-ilb\-subsetting\fR\ |\ \fB\-\-enable\-legacy\-authorization\fR\ |\ \fB\-\-enable\-legacy\-lustre\-port\fR\ |\ \fB\-\-enable\-logging\-monitoring\-system\-only\fR\ |\ \fB\-\-enable\-multi\-networking\fR\ |\ \fB\-\-enable\-network\-policy\fR\ |\ \fB\-\-enable\-pod\-security\-policy\fR\ |\ \fB\-\-enable\-private\-nodes\fR\ |\ \fB\-\-[no\-]enable\-ray\-cluster\-logging\fR\ |\ \fB\-\-[no\-]enable\-ray\-cluster\-monitoring\fR\ |\ \fB\-\-enable\-service\-externalips\fR\ |\ 
\fB\-\-enable\-shielded\-nodes\fR\ |\ \fB\-\-enable\-stackdriver\-kubernetes\fR\ |\ \fB\-\-enable\-vertical\-pod\-autoscaling\fR\ |\ \fB\-\-gateway\-api\fR=\fIGATEWAY_API\fR\ |\ \fB\-\-generate\-password\fR\ |\ \fB\-\-hpa\-profile\fR=\fIHPA_PROFILE\fR\ |\ \fB\-\-identity\-provider\fR=\fIIDENTITY_PROVIDER\fR\ |\ \fB\-\-in\-transit\-encryption\fR=\fIIN_TRANSIT_ENCRYPTION\fR\ |\ \fB\-\-logging\-variant\fR=\fILOGGING_VARIANT\fR\ |\ \fB\-\-maintenance\-window\fR=\fISTART_TIME\fR\ |\ \fB\-\-network\-performance\-configs\fR=[\fIPROPERTY1\fR=\fIVALUE1\fR,...]\ |\ \fB\-\-notification\-config\fR=[\fIpubsub\fR=\fIENABLED\fR|\fIDISABLED\fR,\fIpubsub\-topic\fR=\fITOPIC\fR,...]\ |\ \fB\-\-patch\-update\fR=[\fIPATCH_UPDATE\fR]\ |\ \fB\-\-private\-ipv6\-google\-access\-type\fR=\fIPRIVATE_IPV6_GOOGLE_ACCESS_TYPE\fR\ |\ \fB\-\-release\-channel\fR=\fICHANNEL\fR\ |\ \fB\-\-remove\-autopilot\-workload\-policies\fR=\fIREMOVE_WORKLOAD_POLICIES\fR\ |\ \fB\-\-remove\-labels\fR=[\fIKEY\fR,...]\ |\ \fB\-\-remove\-workload\-policies\fR=\fIREMOVE_WORKLOAD_POLICIES\fR\ |\ \fB\-\-security\-group\fR=\fISECURITY_GROUP\fR\ |\ \fB\-\-security\-posture\fR=\fISECURITY_POSTURE\fR\ |\ \fB\-\-set\-password\fR\ |\ \fB\-\-stack\-type\fR=\fISTACK_TYPE\fR\ |\ \fB\-\-start\-credential\-rotation\fR\ |\ \fB\-\-start\-ip\-rotation\fR\ |\ \fB\-\-tier\fR=\fITIER\fR\ |\ \fB\-\-update\-addons\fR=[\fIADDON\fR=\fIENABLED\fR|\fIDISABLED\fR,...]\ |\ \fB\-\-update\-labels\fR=[\fIKEY\fR=\fIVALUE\fR,...]\ |\ \fB\-\-workload\-policies\fR=\fIWORKLOAD_POLICIES\fR\ |\ \fB\-\-workload\-pool\fR=\fIWORKLOAD_POOL\fR\ |\ \fB\-\-workload\-vulnerability\-scanning\fR=\fIWORKLOAD_VULNERABILITY_SCANNING\fR\ |\ \fB\-\-additional\-ip\-ranges\fR=[\fIsubnetwork\fR=\fINAME\fR,\fIpod\-ipv4\-range\fR=\fINAME\fR,...]\ \fB\-\-remove\-additional\-ip\-ranges\fR=[\fIsubnetwork\fR=\fINAME\fR,\fIpod\-ipv4\-range\fR=\fINAME\fR,...]\ |\ \fB\-\-additional\-pod\-ipv4\-ranges\fR=\fINAME\fR,[\fINAME\fR,...]\ 
\fB\-\-remove\-additional\-pod\-ipv4\-ranges\fR=\fINAME\fR,[\fINAME\fR,...]\ |\ \fB\-\-additional\-zones\fR=[\fIZONE\fR,...]\ |\ \fB\-\-node\-locations\fR=\fIZONE\fR,[\fIZONE\fR,...]\ |\ \fB\-\-auto\-monitoring\-scope\fR=\fIAUTO_MONITORING_SCOPE\fR\ \fB\-\-logging\fR=[\fICOMPONENT\fR,...]\ \fB\-\-monitoring\fR=[\fICOMPONENT\fR,...]\ \fB\-\-disable\-managed\-prometheus\fR\ |\ \fB\-\-enable\-managed\-prometheus\fR\ |\ \fB\-\-binauthz\-policy\-bindings\fR=[\fIname\fR=\fIBINAUTHZ_POLICY\fR]\ \fB\-\-binauthz\-evaluation\-mode\fR=\fIBINAUTHZ_EVALUATION_MODE\fR\ |\ \fB\-\-enable\-binauthz\fR\ |\ \fB\-\-clear\-fleet\-project\fR\ \fB\-\-enable\-fleet\fR\ \fB\-\-fleet\-project\fR=\fIPROJECT_ID_OR_NUMBER\fR\ \fB\-\-membership\-type\fR=\fIMEMBERSHIP_TYPE\fR\ \fB\-\-unset\-membership\-type\fR\ |\ \fB\-\-clear\-maintenance\-window\fR\ |\ \fB\-\-remove\-maintenance\-exclusion\fR=\fINAME\fR\ |\ [\fB\-\-add\-maintenance\-exclusion\-end\fR=\fITIME_STAMP\fR\ :\ \fB\-\-add\-maintenance\-exclusion\-name\fR=\fINAME\fR\ \fB\-\-add\-maintenance\-exclusion\-scope\fR=\fISCOPE\fR\ \fB\-\-add\-maintenance\-exclusion\-start\fR=\fITIME_STAMP\fR]\ |\ \fB\-\-maintenance\-window\-end\fR=\fITIME_STAMP\fR\ \fB\-\-maintenance\-window\-recurrence\fR=\fIRRULE\fR\ \fB\-\-maintenance\-window\-start\fR=\fITIME_STAMP\fR\ |\ \fB\-\-clear\-resource\-usage\-bigquery\-dataset\fR\ |\ \fB\-\-enable\-network\-egress\-metering\fR\ \fB\-\-enable\-resource\-consumption\-metering\fR\ \fB\-\-resource\-usage\-bigquery\-dataset\fR=\fIRESOURCE_USAGE_BIGQUERY_DATASET\fR\ |\ \fB\-\-cluster\-dns\fR=\fICLUSTER_DNS\fR\ \fB\-\-cluster\-dns\-domain\fR=\fICLUSTER_DNS_DOMAIN\fR\ \fB\-\-cluster\-dns\-scope\fR=\fICLUSTER_DNS_SCOPE\fR\ \fB\-\-additive\-vpc\-scope\-dns\-domain\fR=\fIADDITIVE_VPC_SCOPE_DNS_DOMAIN\fR\ |\ \fB\-\-disable\-additive\-vpc\-scope\fR\ |\ \fB\-\-dataplane\-v2\-observability\-mode\fR=\fIDATAPLANE_V2_OBSERVABILITY_MODE\fR\ |\ \fB\-\-disable\-dataplane\-v2\-flow\-observability\fR\ |\ 
\fB\-\-enable\-dataplane\-v2\-flow\-observability\fR\ \fB\-\-disable\-dataplane\-v2\-metrics\fR\ |\ \fB\-\-enable\-dataplane\-v2\-metrics\fR\ |\ \fB\-\-disable\-auto\-ipam\fR\ |\ \fB\-\-enable\-auto\-ipam\fR\ |\ \fB\-\-disable\-l4\-lb\-firewall\-reconciliation\fR\ |\ \fB\-\-enable\-l4\-lb\-firewall\-reconciliation\fR\ |\ \fB\-\-enable\-authorized\-networks\-on\-private\-endpoint\fR\ \fB\-\-enable\-dns\-access\fR\ \fB\-\-enable\-google\-cloud\-access\fR\ \fB\-\-enable\-ip\-access\fR\ \fB\-\-enable\-k8s\-certs\-via\-dns\fR\ \fB\-\-enable\-k8s\-tokens\-via\-dns\fR\ \fB\-\-enable\-master\-global\-access\fR\ \fB\-\-enable\-private\-endpoint\fR\ \fB\-\-enable\-master\-authorized\-networks\fR\ \fB\-\-master\-authorized\-networks\fR=\fINETWORK\fR,[\fINETWORK\fR,...]\ |\ \fB\-\-enable\-autoprovisioning\fR\ \fB\-\-autoprovisioning\-config\-file\fR=\fIPATH_TO_FILE\fR\ |\ \fB\-\-autoprovisioning\-image\-type\fR=\fIAUTOPROVISIONING_IMAGE_TYPE\fR\ \fB\-\-autoprovisioning\-locations\fR=\fIZONE\fR,[\fIZONE\fR,...]\ \fB\-\-autoprovisioning\-min\-cpu\-platform\fR=\fIPLATFORM\fR\ \fB\-\-max\-cpu\fR=\fIMAX_CPU\fR\ \fB\-\-max\-memory\fR=\fIMAX_MEMORY\fR\ \fB\-\-min\-cpu\fR=\fIMIN_CPU\fR\ \fB\-\-min\-memory\fR=\fIMIN_MEMORY\fR\ \fB\-\-autoprovisioning\-max\-surge\-upgrade\fR=\fIAUTOPROVISIONING_MAX_SURGE_UPGRADE\fR\ \fB\-\-autoprovisioning\-max\-unavailable\-upgrade\fR=\fIAUTOPROVISIONING_MAX_UNAVAILABLE_UPGRADE\fR\ \fB\-\-autoprovisioning\-node\-pool\-soak\-duration\fR=\fIAUTOPROVISIONING_NODE_POOL_SOAK_DURATION\fR\ \fB\-\-autoprovisioning\-standard\-rollout\-policy\fR=[\fIbatch\-node\-count\fR=\fIBATCH_NODE_COUNT\fR,\fIbatch\-percent\fR=\fIBATCH_NODE_PERCENTAGE\fR,\fIbatch\-soak\-duration\fR=\fIBATCH_SOAK_DURATION\fR,...]\ \fB\-\-enable\-autoprovisioning\-blue\-green\-upgrade\fR\ |\ \fB\-\-enable\-autoprovisioning\-surge\-upgrade\fR\ \fB\-\-autoprovisioning\-scopes\fR=[\fISCOPE\fR,...]\ \fB\-\-autoprovisioning\-service\-account\fR=\fIAUTOPROVISIONING_SERVICE_ACCOUNT\fR\ 
\fB\-\-enable\-autoprovisioning\-autorepair\fR\ \fB\-\-enable\-autoprovisioning\-autoupgrade\fR\ [\fB\-\-max\-accelerator\fR=[\fItype\fR=\fITYPE\fR,\fIcount\fR=\fICOUNT\fR,...]\ :\ \fB\-\-min\-accelerator\fR=[\fItype\fR=\fITYPE\fR,\fIcount\fR=\fICOUNT\fR,...]]\ |\ \fB\-\-enable\-insecure\-binding\-system\-authenticated\fR\ \fB\-\-enable\-insecure\-binding\-system\-unauthenticated\fR\ |\ \fB\-\-logging\-service\fR=\fILOGGING_SERVICE\fR\ \fB\-\-monitoring\-service\fR=\fIMONITORING_SERVICE\fR\ |\ \fB\-\-[no\-]enable\-secret\-manager\fR\ \fB\-\-[no\-]enable\-secret\-manager\-rotation\fR\ \fB\-\-secret\-manager\-rotation\-interval\fR=\fISECRET_MANAGER_ROTATION_INTERVAL\fR\ |\ \fB\-\-[no\-]enable\-secret\-sync\fR\ \fB\-\-[no\-]enable\-secret\-sync\-rotation\fR\ \fB\-\-secret\-sync\-rotation\-interval\fR=\fISECRET_SYNC_ROTATION_INTERVAL\fR\ |\ \fB\-\-password\fR=\fIPASSWORD\fR\ \fB\-\-enable\-basic\-auth\fR\ |\ \fB\-\-username\fR=\fIUSERNAME\fR,\ \fB\-u\fR\ \fIUSERNAME\fR) [\fB\-\-async\fR] [\fB\-\-cloud\-run\-config\fR=[\fIload\-balancer\-type\fR=\fIEXTERNAL\fR,...]] [\fB\-\-istio\-config\fR=[\fIauth\fR=\fIMTLS_PERMISSIVE\fR,...]] [\fB\-\-node\-pool\fR=\fINODE_POOL\fR] [\fB\-\-location\fR=\fILOCATION\fR\ |\ \fB\-\-region\fR=\fIREGION\fR\ |\ \fB\-\-zone\fR=\fIZONE\fR,\ \fB\-z\fR\ \fIZONE\fR] [\fB\-\-location\-policy\fR=\fILOCATION_POLICY\fR\ \fB\-\-max\-nodes\fR=\fIMAX_NODES\fR\ \fB\-\-min\-nodes\fR=\fIMIN_NODES\fR\ \fB\-\-total\-max\-nodes\fR=\fITOTAL_MAX_NODES\fR\ \fB\-\-total\-min\-nodes\fR=\fITOTAL_MIN_NODES\fR] [\fIGCLOUD_WIDE_FLAG\ ...\fR]
.SH "DESCRIPTION"
\fB(BETA)\fR Update cluster settings for an existing container cluster.
.SH "EXAMPLES"
To enable autoscaling for an existing cluster, run:
.RS 2m
$ gcloud beta container clusters update sample\-cluster \e
\-\-enable\-autoscaling
.RE
.SH "POSITIONAL ARGUMENTS"
.RS 2m
.TP 2m
\fINAME\fR
The name of the cluster to update.
.RE
.sp
.SH "REQUIRED FLAGS"
.RS 2m
.TP 2m
Exactly one of these must be specified:
.RS 2m
.TP 2m
\fB\-\-anonymous\-authentication\-config\fR=\fIANONYMOUS_AUTHENTICATION_CONFIG\fR
Enable or restrict anonymous access to the cluster. When enabled, anonymous
users will be authenticated as system:anonymous with the group
system:unauthenticated. Limiting access restricts anonymous access to only the
health check endpoints /readyz, /livez, and /healthz.
\fIANONYMOUS_AUTHENTICATION_CONFIG\fR must be one of:
.RS 2m
.TP 2m
\fBENABLED\fR
\'ENABLED' enables anonymous calls.
.TP 2m
\fBLIMITED\fR
\'LIMITED' restricts anonymous access to the cluster. Only calls to the health
check endpoints are allowed anonymously, all other calls will be rejected.
.RE
.sp
.TP 2m
\fB\-\-autopilot\-workload\-policies\fR=\fIWORKLOAD_POLICIES\fR
Add Autopilot workload policies to the cluster.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-autopilot\-workload\-policies=allow\-net\-admin
.RE
The only supported workload policy is 'allow\-net\-admin'.
.TP 2m
\fB\-\-autoprovisioning\-cgroup\-mode\fR=\fIAUTOPROVISIONING_CGROUP_MODE\fR
Sets the cgroup mode for auto\-provisioned nodes.
Updating this flag triggers an update using surge upgrades of all existing
auto\-provisioned nodes to apply the new value of cgroup mode.
For an Autopilot cluster, the specified cgroup mode will be set on all existing
and new nodes in the cluster. For a Standard cluster, the specified cgroup mode
will be set on all existing and new auto\-provisioned node pools in the cluster.
If not set, GKE uses cgroupv2 for new nodes in clusters created running version
1.26 or later, and cgroupv1 in clusters created running version 1.25 or earlier. To
check your initial cluster version, run \f5gcloud container clusters describe
[NAME] \-\-format="value(initialClusterVersion)"\fR
For clusters created running version 1.26 or later, you can't set the cgroup
mode to v1.
To learn more, see:
https://cloud.google.com/kubernetes\-engine/docs/how\-to/migrate\-cgroupv2.
\fIAUTOPROVISIONING_CGROUP_MODE\fR must be one of: \fBdefault\fR, \fBv1\fR,
\fBv2\fR.
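For example, to switch auto\-provisioned nodes to cgroupv2:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-autoprovisioning\-cgroup\-mode=v2
.RE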
.TP 2m
\fB\-\-autoprovisioning\-enable\-insecure\-kubelet\-readonly\-port\fR
Enables the Kubelet's insecure read\-only port for auto\-provisioned node
pools. If not set, the value from nodePoolDefaults.nodeConfigDefaults is used.
To disable the read\-only port, use
\f5\-\-no\-autoprovisioning\-enable\-insecure\-kubelet\-readonly\-port\fR.
.TP 2m
\fB\-\-autoprovisioning\-network\-tags\fR=[\fITAGS\fR,...]
Replaces the user\-specified Compute Engine tags on all nodes in all existing
auto\-provisioned node pools in a Standard cluster, or on all nodes in an
Autopilot cluster, with the given comma\-separated tags.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-autoprovisioning\-network\-tags=tag1,tag2
.RE
New nodes in auto\-provisioned node pools, including ones created by resize or
recreate, will have these tags on the Compute Engine API instance object and
these tags can be used in firewall rules. See
https://cloud.google.com/sdk/gcloud/reference/compute/firewall\-rules/create for
examples.
.TP 2m
\fB\-\-autoprovisioning\-resource\-manager\-tags\fR=[\fIKEY\fR=\fIVALUE\fR,...]
For an Autopilot cluster, the specified comma\-separated resource manager tags
that have the GCE_FIREWALL purpose replace the existing tags on all nodes in the
cluster.
For a Standard cluster, the specified comma\-separated resource manager tags
that have the GCE_FIREWALL purpose are applied to all nodes in newly created
auto\-provisioned node pools. Existing auto\-provisioned node pools retain the
tags that they had before the update. To update tags on an existing
auto\-provisioned node pool, use the node\-pool\-level flag
\'\-\-resource\-manager\-tags'.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-autoprovisioning\-resource\-manager\-tags=tagKeys/\e
1234=tagValues/2345
$ gcloud beta container clusters update example\-cluster \e
\-\-autoprovisioning\-resource\-manager\-tags=my\-project/key1=value1
$ gcloud beta container clusters update example\-cluster \e
\-\-autoprovisioning\-resource\-manager\-tags=12345/key1=value1,\e
23456/key2=value2
$ gcloud beta container clusters update example\-cluster \e
\-\-autoprovisioning\-resource\-manager\-tags=
.RE
All nodes in an Autopilot cluster or all newly created auto\-provisioned nodes
in a Standard cluster, including nodes that are resized or re\-created, will
have the specified tags on the corresponding Instance object in the Compute
Engine API. You can reference these tags in network firewall policy rules. For
instructions, see
https://cloud.google.com/firewall/docs/use\-tags\-for\-firewalls.
.TP 2m
\fB\-\-autoscaling\-profile\fR=\fIAUTOSCALING_PROFILE\fR
Set the autoscaling behavior. Choices are 'optimize\-utilization' and
\'balanced'. The default is 'balanced'.
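For example:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-autoscaling\-profile=optimize\-utilization
.RE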
.TP 2m
\fB\-\-complete\-credential\-rotation\fR
Complete the IP and credential rotation for this cluster. For example:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-complete\-credential\-rotation
.RE
This causes the cluster to stop serving its old IP, return to a single IP, and
invalidate old credentials. See documentation for more details:
https://cloud.google.com/kubernetes\-engine/docs/how\-to/credential\-rotation.
.TP 2m
\fB\-\-complete\-ip\-rotation\fR
Complete the IP rotation for this cluster. For example:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-complete\-ip\-rotation
.RE
This causes the cluster to stop serving its old IP, and return to a single IP
state. See documentation for more details:
https://cloud.google.com/kubernetes\-engine/docs/how\-to/ip\-rotation.
.TP 2m
\fB\-\-containerd\-config\-from\-file\fR=\fIPATH_TO_FILE\fR
Path of the YAML file that contains containerd configuration entries like
configuring access to private image registries.
For detailed information on the configuration usage, please refer to
https://cloud.google.com/kubernetes\-engine/docs/how\-to/customize\-containerd\-configuration.
Note: Updating the containerd configuration of an existing cluster or node pool
requires recreation of the existing nodes, which might cause disruptions in
running workloads.
Use a full or relative path to a local file containing the value of
containerd_config.
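For example, where \f5containerd\-config.yaml\fR is a local file containing the
desired configuration:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-containerd\-config\-from\-file=containerd\-config.yaml
.RE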
.TP 2m
\fB\-\-database\-encryption\-key\fR=\fIDATABASE_ENCRYPTION_KEY\fR
Enable Database Encryption.
Enable database encryption that will be used to encrypt Kubernetes Secrets at
the application layer. The key provided should be the resource ID in the format
of
\f5projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]\fR.
For more information, see
https://cloud.google.com/kubernetes\-engine/docs/how\-to/encrypting\-secrets.
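For example, substituting your own key resource ID:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-database\-encryption\-key=projects/my\-project/locations/\e
us\-central1/keyRings/my\-ring/cryptoKeys/my\-key
.RE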
.TP 2m
\fB\-\-disable\-database\-encryption\fR
Disable database encryption.
Disable database encryption, which encrypts Kubernetes Secrets at the application
layer. For more information, see
https://cloud.google.com/kubernetes\-engine/docs/how\-to/encrypting\-secrets.
.TP 2m
\fB\-\-disable\-default\-snat\fR
Disable default source NAT rules applied in cluster nodes.
By default, cluster nodes perform source network address translation (SNAT) for
packets sent from Pod IP address sources to destination IP addresses that are
not in the non\-masquerade CIDRs list. For more details about SNAT and IP
masquerading, see:
https://cloud.google.com/kubernetes\-engine/docs/how\-to/ip\-masquerade\-agent#how_ipmasq_works
SNAT changes the packet's source IP address to the node's internal IP address.
When this flag is set, GKE does not perform SNAT for packets sent to any
destination. You must set this flag if the cluster uses privately reused public
IPs.
The \-\-disable\-default\-snat flag is only applicable to private GKE clusters,
which are inherently VPC\-native. Thus, \-\-disable\-default\-snat requires that
the cluster was created with both \-\-enable\-ip\-alias and
\-\-enable\-private\-nodes.
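For example:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-disable\-default\-snat
.RE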
.TP 2m
\fB\-\-disable\-workload\-identity\fR
Disable Workload Identity on the cluster.
For more information on Workload Identity, see
.RS 2m
https://cloud.google.com/kubernetes\-engine/docs/how\-to/workload\-identity
.RE
.TP 2m
\fB\-\-[no\-]enable\-autopilot\-compatibility\-auditing\fR
Lets you run the gcloud container clusters check\-autopilot\-compatibility
(https://cloud.google.com/sdk/gcloud/reference/container/clusters/check\-autopilot\-compatibility)
command to check whether your workloads are compatible with Autopilot mode. This
flag is only applicable to clusters that run version 1.31.6\-gke.1027000 or
later.
Note: This flag causes a control plane restart.
Use \fB\-\-enable\-autopilot\-compatibility\-auditing\fR to enable and
\fB\-\-no\-enable\-autopilot\-compatibility\-auditing\fR to disable.
.TP 2m
\fB\-\-enable\-autoscaling\fR
Enables autoscaling for a node pool.
Enables autoscaling in the node pool specified by \-\-node\-pool, or the default
node pool if \-\-node\-pool is not provided. If not already set, \-\-max\-nodes
or \-\-total\-max\-nodes must also be specified.
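For example, to enable autoscaling with node limits on a specific node pool
(here using the default node pool name \f5default\-pool\fR):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-enable\-autoscaling \-\-node\-pool=default\-pool \e
\-\-min\-nodes=1 \-\-max\-nodes=10
.RE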
.TP 2m
\fB\-\-[no\-]enable\-cilium\-clusterwide\-network\-policy\fR
Enable Cilium Clusterwide Network Policies on the cluster. Use
\fB\-\-enable\-cilium\-clusterwide\-network\-policy\fR to enable and
\fB\-\-no\-enable\-cilium\-clusterwide\-network\-policy\fR to disable.
.TP 2m
\fB\-\-enable\-cost\-allocation\fR
Enable the cost management feature.
When enabled, you can get informational GKE cost breakdowns by cluster,
namespace and label in your billing data exported to BigQuery
(https://cloud.google.com/billing/docs/how\-to/export\-data\-bigquery).
Use \-\-no\-enable\-cost\-allocation to disable this feature.
.TP 2m
\fB\-\-enable\-default\-compute\-class\fR
Enable the default compute class to use for the cluster.
To disable Default Compute Class in an existing cluster, explicitly set flag
\f5\-\-no\-enable\-default\-compute\-class\fR.
.TP 2m
\fB\-\-enable\-fqdn\-network\-policy\fR
Enable FQDN Network Policies on the cluster. FQDN Network Policies are disabled
by default.
.TP 2m
\fB\-\-enable\-gke\-oidc\fR
(DEPRECATED) Enable GKE OIDC authentication on the cluster.
When enabled, users can authenticate to the Kubernetes cluster after properly
setting the OIDC config.
GKE OIDC is disabled by default when creating a new cluster. To disable GKE OIDC
in an existing cluster, explicitly set flag \f5\-\-no\-enable\-gke\-oidc\fR.
GKE OIDC is being replaced by Identity Service across Anthos and GKE, so the
\f5\-\-enable\-gke\-oidc\fR flag is also deprecated. Please use
\f5\-\-enable\-identity\-service\fR to enable the Identity Service component.
.TP 2m
\fB\-\-enable\-identity\-service\fR
Enable Identity Service component on the cluster.
When enabled, users can authenticate to the Kubernetes cluster with external
identity providers.
Identity Service is by default disabled when creating a new cluster. To disable
Identity Service in an existing cluster, explicitly set flag
\f5\-\-no\-enable\-identity\-service\fR.
.TP 2m
\fB\-\-enable\-image\-streaming\fR
Specifies whether to enable image streaming on the cluster.
.TP 2m
\fB\-\-enable\-insecure\-kubelet\-readonly\-port\fR
Enables the Kubelet's insecure read\-only port.
To disable the read\-only port on a cluster or node pool, set the flag to
\f5\-\-no\-enable\-insecure\-kubelet\-readonly\-port\fR.
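For example, to disable the read\-only port across the cluster:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-no\-enable\-insecure\-kubelet\-readonly\-port
.RE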
.TP 2m
\fB\-\-enable\-intra\-node\-visibility\fR
Enable Intra\-node visibility for this cluster.
Enabling intra\-node visibility makes your intra\-node pod\-to\-pod traffic
visible to the networking fabric. With this feature, you can use VPC flow
logging or other VPC features for intra\-node traffic.
Enabling it on an existing cluster causes the cluster master and the cluster
nodes to restart, which might cause a disruption.
.TP 2m
\fB\-\-enable\-kubernetes\-unstable\-apis\fR=\fIAPI\fR,[\fIAPI\fR,...]
Enable Kubernetes beta API features on this cluster. Beta APIs are not expected
to be production ready and should be avoided in production\-grade environments.
.TP 2m
\fB\-\-enable\-l4\-ilb\-subsetting\fR
Enable Subsetting for L4 ILB services created on this cluster.
.TP 2m
\fB\-\-enable\-legacy\-authorization\fR
Enables legacy ABAC authorization for the cluster. User rights are granted
through the use of policies which combine attributes together. For a detailed
look at these properties and related formats, see
https://kubernetes.io/docs/admin/authorization/abac/. To use RBAC permissions
instead, create or update your cluster with the option
\f5\-\-no\-enable\-legacy\-authorization\fR.
.TP 2m
\fB\-\-enable\-legacy\-lustre\-port\fR
Allow the Lustre CSI driver to initialize LNet (the virtual network layer for
the Lustre kernel module) using port 6988. This flag is required to work around
a port conflict with the gke\-metadata\-server on GKE nodes.
.TP 2m
\fB\-\-enable\-logging\-monitoring\-system\-only\fR
(DEPRECATED) Enable Cloud Operations system\-only monitoring and logging.
The \f5\-\-enable\-logging\-monitoring\-system\-only\fR flag is deprecated and
will be removed in an upcoming release. Please use \f5\-\-logging\fR and
\f5\-\-monitoring\fR instead. For more information, please read:
https://cloud.google.com/kubernetes\-engine/docs/concepts/about\-logs and
https://cloud.google.com/kubernetes\-engine/docs/how\-to/configure\-metrics.
.TP 2m
\fB\-\-enable\-multi\-networking\fR
Enables multi\-networking on the cluster. Multi\-networking is disabled by
default.
.TP 2m
\fB\-\-enable\-network\-policy\fR
Enable network policy enforcement for this cluster. If you are enabling network
policy on an existing cluster, the network policy addon must first be enabled on
the master by using the \-\-update\-addons=NetworkPolicy=ENABLED flag.
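For example, enabling network policy on an existing cluster is a two\-step
process:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-update\-addons=NetworkPolicy=ENABLED
$ gcloud beta container clusters update example\-cluster \e
\-\-enable\-network\-policy
.RE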
.TP 2m
\fB\-\-enable\-pod\-security\-policy\fR
Enables the pod security policy admission controller for the cluster. The pod
security policy admission controller adds fine\-grained pod create and update
authorization controls through the PodSecurityPolicy API objects. For more
information, see
https://cloud.google.com/kubernetes\-engine/docs/how\-to/pod\-security\-policies.
.TP 2m
\fB\-\-enable\-private\-nodes\fR
Standard cluster: Enable private nodes as a default behavior for all newly
created node pools, if \f5\-\-enable\-private\-nodes\fR is not provided at node
pool creation time.
.RS 2m
Modifications to this flag do not affect the \f5\-\-enable\-private\-nodes\fR
state of existing node pools.
.RE
Autopilot cluster: Force new and existing workloads, without explicit
\f5cloud.google.com/private\-node=true\fR node selector, to run on nodes with no
public IP address.
.RS 2m
Modifications to this flag trigger a re\-schedule operation on all existing
workloads to run on different node VMs.
.RE
.TP 2m
\fB\-\-[no\-]enable\-ray\-cluster\-logging\fR
Enable automatic log processing sidecar for Ray clusters. Use
\fB\-\-enable\-ray\-cluster\-logging\fR to enable and
\fB\-\-no\-enable\-ray\-cluster\-logging\fR to disable.
.TP 2m
\fB\-\-[no\-]enable\-ray\-cluster\-monitoring\fR
Enable automatic metrics collection for Ray clusters. Use
\fB\-\-enable\-ray\-cluster\-monitoring\fR to enable and
\fB\-\-no\-enable\-ray\-cluster\-monitoring\fR to disable.
.TP 2m
\fB\-\-enable\-service\-externalips\fR
Enables use of services with the externalIPs field.
.TP 2m
\fB\-\-enable\-shielded\-nodes\fR
Enable Shielded Nodes for this cluster. Enabling Shielded Nodes will enable a
more secure Node credential bootstrapping implementation. Starting with version
1.18, clusters will have Shielded GKE nodes by default.
.TP 2m
\fB\-\-enable\-stackdriver\-kubernetes\fR
(DEPRECATED) Enable Cloud Operations for GKE.
The \f5\-\-enable\-stackdriver\-kubernetes\fR flag is deprecated and will be
removed in an upcoming release. Please use \f5\-\-logging\fR and
\f5\-\-monitoring\fR instead. For more information, please read:
https://cloud.google.com/kubernetes\-engine/docs/concepts/about\-logs and
https://cloud.google.com/kubernetes\-engine/docs/how\-to/configure\-metrics.
.TP 2m
Flags for vertical pod autoscaling:
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-enable\-vertical\-pod\-autoscaling\fR
Enable vertical pod autoscaling for a cluster.
.RE
.sp
.TP 2m
\fB\-\-gateway\-api\fR=\fIGATEWAY_API\fR
Enables the GKE Gateway controller in this cluster. The value of the flag specifies
which Open Source Gateway API release channel will be used to define Gateway
resources. \fIGATEWAY_API\fR must be one of:
.RS 2m
.TP 2m
\fBdisabled\fR
Gateway controller will be disabled in the cluster.
.TP 2m
\fBstandard\fR
Gateway controller will be enabled in the cluster. Resource definitions from the
\f5standard\fR OSS Gateway API release channel will be installed.
.RE
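For example, to enable the Gateway controller:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-gateway\-api=standard
.RE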
.sp
.TP 2m
\fB\-\-generate\-password\fR
Ask the server to generate a secure password and use that as the basic auth
password, keeping the existing username.
.TP 2m
\fB\-\-hpa\-profile\fR=\fIHPA_PROFILE\fR
Set Horizontal Pod Autoscaler behavior. Accepted values are: none, performance.
For more information, see
https://cloud.google.com/kubernetes\-engine/docs/how\-to/horizontal\-pod\-autoscaling#hpa\-profile.
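For example:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-hpa\-profile=performance
.RE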
.TP 2m
\fB\-\-identity\-provider\fR=\fIIDENTITY_PROVIDER\fR
Enable a third\-party (3P) identity provider on the cluster.
.TP 2m
\fB\-\-in\-transit\-encryption\fR=\fIIN_TRANSIT_ENCRYPTION\fR
Enable Dataplane V2 in\-transit encryption. Dataplane v2 in\-transit encryption
is disabled by default. \fIIN_TRANSIT_ENCRYPTION\fR must be one of:
\fBinter\-node\-transparent\fR, \fBnone\fR.
.TP 2m
\fB\-\-logging\-variant\fR=\fILOGGING_VARIANT\fR
Specifies the logging variant that will be deployed on all the nodes in the
cluster. Valid logging variants are \f5MAX_THROUGHPUT\fR, \f5DEFAULT\fR. If no
value is specified, DEFAULT is used. \fILOGGING_VARIANT\fR must be one of:
.RS 2m
.TP 2m
\fBDEFAULT\fR
\'DEFAULT' variant requests minimal resources but may not guarantee high
throughput.
.TP 2m
\fBMAX_THROUGHPUT\fR
\'MAX_THROUGHPUT' variant requests more node resources and can achieve logging
throughput of up to 10 MB per second.
.RE
.sp
.TP 2m
\fB\-\-maintenance\-window\fR=\fISTART_TIME\fR
Set a time of day when you prefer maintenance to start on this cluster. For
example:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-maintenance\-window=12:43
.RE
The time corresponds to the UTC time zone, and must be in HH:MM format.
Non\-emergency maintenance will occur in the 4 hour block starting at the
specified time.
This is mutually exclusive with the recurring maintenance windows and will
overwrite any existing window. Compatible with maintenance exclusions.
To remove an existing maintenance window from the cluster, use
\'\-\-clear\-maintenance\-window'.
.TP 2m
\fB\-\-network\-performance\-configs\fR=[\fIPROPERTY1\fR=\fIVALUE1\fR,...]
Configures network performance settings for the cluster. Node pools can override
with their own settings.
.RS 2m
.TP 2m
\fBtotal\-egress\-bandwidth\-tier\fR
Total egress bandwidth is the available outbound bandwidth from a VM, regardless
of whether the traffic is going to internal IP or external IP destinations. The
following tier values are allowed: [TIER_UNSPECIFIED,TIER_1].
See
https://cloud.google.com/compute/docs/networking/configure\-vm\-with\-high\-bandwidth\-configuration
for more information.
.RE
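For example, to set the Tier 1 egress bandwidth tier:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-network\-performance\-configs=\e
total\-egress\-bandwidth\-tier=TIER_1
.RE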
.sp
.TP 2m
\fB\-\-notification\-config\fR=[\fIpubsub\fR=\fIENABLED\fR|\fIDISABLED\fR,\fIpubsub\-topic\fR=\fITOPIC\fR,...]
The notification configuration of the cluster. GKE supports publishing cluster
upgrade notifications to any Pub/Sub topic you created in the same project.
Create a subscription for the topic specified to receive notification messages.
See https://cloud.google.com/pubsub/docs/admin for how to manage Pub/Sub topics
and subscriptions. You can also use the filter option to specify which event
types you'd like to receive from the following options: SecurityBulletinEvent,
UpgradeEvent, UpgradeInfoEvent, UpgradeAvailableEvent.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-notification\-config=pubsub=ENABLED,pubsub\-topic=projects/\e
{project}/topics/{topic\-name}
$ gcloud beta container clusters update example\-cluster \e
\-\-notification\-config=pubsub=ENABLED,pubsub\-topic=projects/\e
{project}/topics/{topic\-name},\e
filter="SecurityBulletinEvent|UpgradeEvent"
.RE
The Pub/Sub topic must be in the same project as the cluster. The project can be
specified as either the project ID or the project number.
.TP 2m
\fB\-\-patch\-update\fR=[\fIPATCH_UPDATE\fR]
The patch update to use for the cluster.
Setting to 'accelerated' automatically upgrades the cluster to the latest patch
available within the cluster's current minor version and release channel.
Setting to 'default' automatically upgrades the cluster to the default patch
upgrade target version available within the cluster's current minor version and
release channel.
\fIPATCH_UPDATE\fR must be one of: \fBaccelerated\fR, \fBdefault\fR.
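For example:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-patch\-update=accelerated
.RE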
.TP 2m
\fB\-\-private\-ipv6\-google\-access\-type\fR=\fIPRIVATE_IPV6_GOOGLE_ACCESS_TYPE\fR
Sets the type of private access to Google services over IPv6.
PRIVATE_IPV6_GOOGLE_ACCESS_TYPE must be one of:
.RS 2m
bidirectional
Allows Google services to initiate connections to GKE pods in this
cluster. This is not intended for common use, and requires previous
integration with Google services.
.RE
.RS 2m
disabled
Default value. Disables private access to Google services over IPv6.
.RE
.RS 2m
outbound\-only
Allows GKE pods to make fast, secure requests to Google services
over IPv6. This is the most common use of private IPv6 access.
.RE
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-private\-ipv6\-google\-access\-type=disabled
$ gcloud beta container clusters update example\-cluster \e
\-\-private\-ipv6\-google\-access\-type=outbound\-only
$ gcloud beta container clusters update example\-cluster \e
\-\-private\-ipv6\-google\-access\-type=bidirectional
.RE
\fIPRIVATE_IPV6_GOOGLE_ACCESS_TYPE\fR must be one of: \fBbidirectional\fR,
\fBdisabled\fR, \fBoutbound\-only\fR.
.TP 2m
\fB\-\-release\-channel\fR=\fICHANNEL\fR
Subscribe or unsubscribe this cluster to a release channel.
When a cluster is subscribed to a release channel, Google maintains both the
master version and the node version. Node auto\-upgrade is enabled by default
for release channel clusters and can be controlled via upgrade\-scope exclusions
(https://cloud.google.com/kubernetes\-engine/docs/concepts/maintenance\-windows\-and\-exclusions#scope_of_maintenance_to_exclude).
\fICHANNEL\fR must be one of:
.RS 2m
.TP 2m
\fBNone\fR
Use 'None' to opt\-out of any release channel.
.TP 2m
\fBextended\fR
Clusters subscribed to 'extended' can remain on a minor version for 24 months
from when the minor version is made available in the Regular channel.
.TP 2m
\fBrapid\fR
\'rapid' channel is offered on an early access basis for customers who want to
test new releases.
WARNING: Versions available in the 'rapid' channel may be subject to unresolved
issues with no known workaround and are not subject to any SLAs.
.TP 2m
\fBregular\fR
Clusters subscribed to 'regular' receive versions that are considered GA
quality. 'regular' is intended for production users who want to take advantage
of new features.
.TP 2m
\fBstable\fR
Clusters subscribed to 'stable' receive versions that are known to be stable and
reliable in production.
.RE
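For example, to subscribe an existing cluster to the 'regular' channel (the
cluster name here is illustrative):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-release\-channel=regular
.RE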
.sp
.TP 2m
\fB\-\-remove\-autopilot\-workload\-policies\fR=\fIREMOVE_WORKLOAD_POLICIES\fR
Remove Autopilot workload policies from the cluster.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-remove\-autopilot\-workload\-policies=allow\-net\-admin
.RE
The only supported workload policy is 'allow\-net\-admin'.
.TP 2m
\fB\-\-remove\-labels\fR=[\fIKEY\fR,...]
Labels to remove from the Google Cloud resources in use by the Kubernetes Engine
cluster. These are unrelated to Kubernetes labels.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-remove\-labels=label_a,label_b
.RE
.TP 2m
\fB\-\-remove\-workload\-policies\fR=\fIREMOVE_WORKLOAD_POLICIES\fR
Remove Autopilot workload policies from the cluster.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-remove\-workload\-policies=allow\-net\-admin
.RE
The only supported workload policy is 'allow\-net\-admin'.
.TP 2m
\fB\-\-security\-group\fR=\fISECURITY_GROUP\fR
The name of the RBAC security group for use with Google security groups in
Kubernetes RBAC
(https://kubernetes.io/docs/reference/access\-authn\-authz/rbac/).
To include group membership as part of the claims issued by Google during
authentication, a group must be designated as a security group by including it
as a direct member of this group.
If unspecified, no groups will be returned for use with RBAC.
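For example, assuming a Google Group named 'gke\-security\-groups@example.com'
has been designated as the security group (the group address is illustrative):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-security\-group=gke\-security\-groups@example.com
.RE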
.TP 2m
\fB\-\-security\-posture\fR=\fISECURITY_POSTURE\fR
Sets the mode of the Kubernetes security posture API's off\-cluster features.
To enable advanced mode, explicitly set the flag to
\f5\-\-security\-posture=enterprise\fR.
To enable standard mode, explicitly set the flag to
\f5\-\-security\-posture=standard\fR.
To disable it on an existing cluster, explicitly set the flag to
\f5\-\-security\-posture=disabled\fR.
For more information on enablement, see
https://cloud.google.com/kubernetes\-engine/docs/concepts/about\-security\-posture\-dashboard#feature\-enablement.
\fISECURITY_POSTURE\fR must be one of: \fBdisabled\fR, \fBstandard\fR,
\fBenterprise\fR.
.TP 2m
\fB\-\-set\-password\fR
Set the basic auth password to the specified value, keeping the existing
username.
.TP 2m
\fB\-\-stack\-type\fR=\fISTACK_TYPE\fR
IP stack type of the cluster nodes. \fISTACK_TYPE\fR must be one of: \fBipv4\fR,
\fBipv4\-ipv6\fR.
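For example, to switch the cluster nodes to dual\-stack networking (the cluster
name is illustrative):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-stack\-type=ipv4\-ipv6
.RE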
.TP 2m
\fB\-\-start\-credential\-rotation\fR
Start the rotation of IP and credentials for this cluster. For example:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-start\-credential\-rotation
.RE
This causes the cluster to serve on two IPs, and will initiate a node upgrade to
point to the new IP. See documentation for more details:
https://cloud.google.com/kubernetes\-engine/docs/how\-to/credential\-rotation.
.TP 2m
\fB\-\-start\-ip\-rotation\fR
Start the rotation of this cluster to a new IP. For example:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-start\-ip\-rotation
.RE
This causes the cluster to serve on two IPs, and will initiate a node upgrade to
point to the new IP. See documentation for more details:
https://cloud.google.com/kubernetes\-engine/docs/how\-to/ip\-rotation.
.TP 2m
\fB\-\-tier\fR=\fITIER\fR
(DEPRECATED) Set the desired tier for the cluster.
The \f5\-\-tier\fR flag is deprecated. More info:
https://cloud.google.com/kubernetes\-engine/docs/release\-notes#September_02_2025.
\fITIER\fR must be one of: \fBstandard\fR, \fBenterprise\fR.
.TP 2m
\fB\-\-update\-addons\fR=[\fIADDON\fR=\fIENABLED\fR|\fIDISABLED\fR,...]
Cluster addons to enable or disable. Options are
HorizontalPodAutoscaling=ENABLED|DISABLED HttpLoadBalancing=ENABLED|DISABLED
KubernetesDashboard=ENABLED|DISABLED Istio=ENABLED|DISABLED
BackupRestore=ENABLED|DISABLED NetworkPolicy=ENABLED|DISABLED
CloudRun=ENABLED|DISABLED ConfigConnector=ENABLED|DISABLED
NodeLocalDNS=ENABLED|DISABLED GcePersistentDiskCsiDriver=ENABLED|DISABLED
GcpFilestoreCsiDriver=ENABLED|DISABLED GcsFuseCsiDriver=ENABLED|DISABLED
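For example, to enable one addon and disable another in a single update (the
cluster name and addon choices are illustrative):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-update\-addons=HttpLoadBalancing=ENABLED,\e
NodeLocalDNS=DISABLED
.RE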
.TP 2m
\fB\-\-update\-labels\fR=[\fIKEY\fR=\fIVALUE\fR,...]
Labels to apply to the Google Cloud resources in use by the Kubernetes Engine
cluster. These are unrelated to Kubernetes labels.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-update\-labels=label_a=value1,label_b=value2
.RE
.TP 2m
\fB\-\-workload\-policies\fR=\fIWORKLOAD_POLICIES\fR
Add Autopilot workload policies to the cluster.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-workload\-policies=allow\-net\-admin
.RE
The only supported workload policy is 'allow\-net\-admin'.
.TP 2m
\fB\-\-workload\-pool\fR=\fIWORKLOAD_POOL\fR
Enable Workload Identity on the cluster.
When enabled, Kubernetes service accounts will be able to act as Cloud IAM
Service Accounts, through the provided workload pool.
Currently, the only accepted workload pool is the workload pool of the Cloud
project containing the cluster, \f5PROJECT_ID.svc.id.goog\fR.
For more information on Workload Identity, see
.RS 2m
https://cloud.google.com/kubernetes\-engine/docs/how\-to/workload\-identity
.RE
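For example, for a cluster in a project whose ID is 'my\-project' (names are
illustrative):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-workload\-pool=my\-project.svc.id.goog
.RE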
.TP 2m
\fB\-\-workload\-vulnerability\-scanning\fR=\fIWORKLOAD_VULNERABILITY_SCANNING\fR
Sets the mode of the Kubernetes security posture API's workload vulnerability
scanning.
To enable Advanced vulnerability insights mode, explicitly set the flag to
\f5\-\-workload\-vulnerability\-scanning=enterprise\fR.
To enable standard mode, explicitly set the flag to
\f5\-\-workload\-vulnerability\-scanning=standard\fR.
To disable it on an existing cluster, explicitly set the flag to
\f5\-\-workload\-vulnerability\-scanning=disabled\fR.
For more information on enablement, see
https://cloud.google.com/kubernetes\-engine/docs/concepts/about\-security\-posture\-dashboard#feature\-enablement.
\fIWORKLOAD_VULNERABILITY_SCANNING\fR must be one of: \fBdisabled\fR,
\fBstandard\fR, \fBenterprise\fR.
.TP 2m
\fB\-\-additional\-ip\-ranges\fR=[\fIsubnetwork\fR=\fINAME\fR,\fIpod\-ipv4\-range\fR=\fINAME\fR,...]
Additional subnetworks to add to the cluster, each paired with a named pod IPv4
range.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-additional\-ip\-ranges=subnetwork=my\-subnet,\e
pod\-ipv4\-range=my\-range
.RE
.TP 2m
\fB\-\-remove\-additional\-ip\-ranges\fR=[\fIsubnetwork\fR=\fINAME\fR,\fIpod\-ipv4\-range\fR=\fINAME\fR,...]
Additional subnetworks to be removed from the cluster.
Examples:
Remove pod range named "my\-range" under additional subnetwork named
"my\-subnet" from the cluster.
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-remove\-additional\-ip\-ranges=subnetwork=my\-subnet,\e
pod\-ipv4\-range=my\-range
.RE
Remove additional subnetwork named "my\-subnet", including all the pod ipv4
ranges under the subnetwork.
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-remove\-additional\-ip\-ranges=subnetwork=my\-subnet
.RE
.TP 2m
\fB\-\-additional\-pod\-ipv4\-ranges\fR=\fINAME\fR,[\fINAME\fR,...]
Additional IP address ranges (by name) for pods that need to be added to the
cluster.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-additional\-pod\-ipv4\-ranges=range1,range2
.RE
.TP 2m
\fB\-\-remove\-additional\-pod\-ipv4\-ranges\fR=\fINAME\fR,[\fINAME\fR,...]
Previously added pod IP address ranges (by name) that are to be removed from
the cluster.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-remove\-additional\-pod\-ipv4\-ranges=range1,range2
.RE
.TP 2m
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-additional\-zones\fR=[\fIZONE\fR,...]
(DEPRECATED) The set of additional zones in which the cluster's node footprint
should be replicated. All zones must be in the same region as the cluster's
primary zone.
Note that the exact same footprint will be replicated in all zones, such that if
you created a cluster with 4 nodes in a single zone and then use this option to
spread across 2 more zones, 8 additional nodes will be created.
Multiple locations can be specified, separated by commas. For example:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-zone us\-central1\-a \e
\-\-additional\-zones us\-central1\-b,us\-central1\-c
.RE
To remove all zones other than the cluster's primary zone, pass the empty string
to the flag. For example:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-zone us\-central1\-a \-\-additional\-zones ""
.RE
This flag is deprecated. Use \-\-node\-locations=PRIMARY_ZONE,[ZONE,...]
instead.
.TP 2m
\fB\-\-node\-locations\fR=\fIZONE\fR,[\fIZONE\fR,...]
The set of zones in which the specified node footprint should be replicated. All
zones must be in the same region as the cluster's master(s), specified by the
\f5\-\-location\fR, \f5\-\-zone\fR, or \f5\-\-region\fR flag. Additionally, for
zonal clusters, \f5\-\-node\-locations\fR must contain the cluster's primary
zone. If not specified, all nodes will be in the cluster's primary zone (for
zonal clusters) or spread across three randomly chosen zones within the
cluster's region (for regional clusters).
Note that \f5NUM_NODES\fR nodes will be created in each zone, such that if you
specify \f5\-\-num\-nodes=4\fR and choose two locations, 8 nodes will be
created.
Multiple locations can be specified, separated by commas. For example:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-location us\-central1\-a \e
\-\-node\-locations us\-central1\-a,us\-central1\-b
.RE
.RE
.sp
.TP 2m
\fB\-\-auto\-monitoring\-scope\fR=\fIAUTO_MONITORING_SCOPE\fR
Enables Auto\-Monitoring for a specific scope within the cluster. ALL: Enables
Auto\-Monitoring for all supported workloads within the cluster. NONE: Disables
Auto\-Monitoring. \fIAUTO_MONITORING_SCOPE\fR must be one of: \fBALL\fR,
\fBNONE\fR.
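For example, to enable Auto\-Monitoring for all supported workloads (the
cluster name is illustrative):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-auto\-monitoring\-scope=ALL
.RE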
.TP 2m
\fB\-\-logging\fR=[\fICOMPONENT\fR,...]
Set the components that have logging enabled. Valid component values are:
\f5SYSTEM\fR, \f5WORKLOAD\fR, \f5API_SERVER\fR, \f5CONTROLLER_MANAGER\fR,
\f5SCHEDULER\fR, \f5NONE\fR
For more information, see
https://cloud.google.com/kubernetes\-engine/docs/concepts/about\-logs#available\-logs
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-logging=SYSTEM
$ gcloud beta container clusters update example\-cluster \e
\-\-logging=SYSTEM,API_SERVER,WORKLOAD
$ gcloud beta container clusters update example\-cluster \e
\-\-logging=NONE
.RE
.TP 2m
\fB\-\-monitoring\fR=[\fICOMPONENT\fR,...]
Set the components that have monitoring enabled. Valid component values are:
\f5SYSTEM\fR, \f5WORKLOAD\fR (Deprecated), \f5NONE\fR, \f5API_SERVER\fR,
\f5CONTROLLER_MANAGER\fR, \f5SCHEDULER\fR, \f5DAEMONSET\fR, \f5DEPLOYMENT\fR,
\f5HPA\fR, \f5POD\fR, \f5STATEFULSET\fR, \f5STORAGE\fR, \f5CADVISOR\fR,
\f5KUBELET\fR, \f5DCGM\fR, \f5JOBSET\fR
For more information, see
https://cloud.google.com/kubernetes\-engine/docs/how\-to/configure\-metrics#available\-metrics
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-monitoring=SYSTEM,API_SERVER,POD
$ gcloud beta container clusters update example\-cluster \e
\-\-monitoring=NONE
.RE
.TP 2m
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-disable\-managed\-prometheus\fR
Disable managed collection for Managed Service for Prometheus.
.TP 2m
\fB\-\-enable\-managed\-prometheus\fR
Enables managed collection for Managed Service for Prometheus in the cluster.
See
https://cloud.google.com/stackdriver/docs/managed\-prometheus/setup\-managed#enable\-mgdcoll\-gke
for more info.
Enabled by default for cluster versions 1.27 or greater; use
\-\-no\-enable\-managed\-prometheus to disable.
.RE
.sp
.TP 2m
Flags for Binary Authorization:
.RS 2m
.TP 2m
\fB\-\-binauthz\-policy\-bindings\fR=[\fIname\fR=\fIBINAUTHZ_POLICY\fR]
The relative resource name of the Binary Authorization policy to audit and/or
enforce. GKE policies have the following format:
\f5projects/{project_number}/platforms/gke/policies/{policy_id}\fR.
.TP 2m
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-binauthz\-evaluation\-mode\fR=\fIBINAUTHZ_EVALUATION_MODE\fR
Enable Binary Authorization for this cluster. \fIBINAUTHZ_EVALUATION_MODE\fR
must be one of: \fBdisabled\fR, \fBpolicy\-bindings\fR,
\fBpolicy\-bindings\-and\-project\-singleton\-policy\-enforce\fR,
\fBproject\-singleton\-policy\-enforce\fR.
.TP 2m
\fB\-\-enable\-binauthz\fR
(DEPRECATED) Enable Binary Authorization for this cluster.
The \f5\-\-enable\-binauthz\fR flag is deprecated. Please use
\f5\-\-binauthz\-evaluation\-mode\fR instead.
.RE
.RE
.sp
.TP 2m
\fB\-\-clear\-fleet\-project\fR
Remove the cluster from the current fleet host project. Example: $ gcloud beta
container clusters update \-\-clear\-fleet\-project
.TP 2m
\fB\-\-enable\-fleet\fR
Set the cluster's project as the fleet host project. This registers the cluster
to a fleet in the same project. To register the cluster to a fleet in a
different project, use \f5\-\-fleet\-project=FLEET_HOST_PROJECT\fR. Example:
$ gcloud beta container clusters update \-\-enable\-fleet
.TP 2m
\fB\-\-fleet\-project\fR=\fIPROJECT_ID_OR_NUMBER\fR
Sets fleet host project for the cluster. If specified, the current cluster will
be registered as a fleet membership under the fleet host project.
Example: $ gcloud beta container clusters update \-\-fleet\-project=my\-project
.TP 2m
\fB\-\-membership\-type\fR=\fIMEMBERSHIP_TYPE\fR
Specify a membership type for the cluster's fleet membership. Example: $ gcloud
beta container clusters update \-\-membership\-type=LIGHTWEIGHT.
\fIMEMBERSHIP_TYPE\fR must be (only one value is supported):
.RS 2m
.TP 2m
\fBLIGHTWEIGHT\fR
Fleet membership representing this cluster will be lightweight.
.RE
.sp
.TP 2m
\fB\-\-unset\-membership\-type\fR
Set the membership type for the cluster's fleet membership to empty. Example: $
gcloud beta container clusters update \-\-unset\-membership\-type
.TP 2m
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-clear\-maintenance\-window\fR
If set, remove the maintenance window that was set with \-\-maintenance\-window
family of flags.
.TP 2m
\fB\-\-remove\-maintenance\-exclusion\fR=\fINAME\fR
Name of a maintenance exclusion to remove. If you didn't specify a name when
creating the exclusion, one was auto\-generated; retrieve it with $ gcloud
container clusters describe.
.TP 2m
Sets a period of time in which maintenance should not occur. This is compatible
with both daily and recurring maintenance windows. If
\f5\-\-add\-maintenance\-exclusion\-scope\fR is not specified, the exclusion
will exclude all upgrades.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-add\-maintenance\-exclusion\-name=holidays\-2000 \e
\-\-add\-maintenance\-exclusion\-start=2000\-11\-20T00:00:00 \e
\-\-add\-maintenance\-exclusion\-end=2000\-12\-31T23:59:59 \e
\-\-add\-maintenance\-exclusion\-scope=no_upgrades
.RE
.RS 2m
.TP 2m
\fB\-\-add\-maintenance\-exclusion\-end\fR=\fITIME_STAMP\fR
End time of the exclusion window. Must take place after the start time. See $
gcloud topic datetimes for information on time formats.
This flag argument must be specified if any of the other arguments in this group
are specified.
.TP 2m
\fB\-\-add\-maintenance\-exclusion\-name\fR=\fINAME\fR
A descriptor for the exclusion that can be used to remove it. If not specified,
it will be autogenerated.
.TP 2m
\fB\-\-add\-maintenance\-exclusion\-scope\fR=\fISCOPE\fR
Scope of the exclusion window to specify the type of upgrades that the exclusion
will apply to. Must be in one of no_upgrades, no_minor_upgrades or
no_minor_or_node_upgrades. If not specified in an exclusion, defaults to
no_upgrades.
.TP 2m
\fB\-\-add\-maintenance\-exclusion\-start\fR=\fITIME_STAMP\fR
Start time of the exclusion window (can occur in the past). If not specified,
the current time will be used. See $ gcloud topic datetimes for information on
time formats.
.RE
.sp
.TP 2m
Set a flexible maintenance window by specifying a window that recurs per an RFC
5545 RRULE. Non\-emergency maintenance will occur in the recurring windows.
Examples:
For a 9\-5 Mon\-Wed UTC\-4 maintenance window:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-maintenance\-window\-start=2000\-01\-01T09:00:00\-04:00 \e
\-\-maintenance\-window\-end=2000\-01\-01T17:00:00\-04:00 \e
\-\-maintenance\-window\-recurrence='FREQ=WEEKLY;BYDAY=MO,TU,WE'
.RE
For a daily window from 22:00 \- 04:00 UTC:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-maintenance\-window\-start=2000\-01\-01T22:00:00Z \e
\-\-maintenance\-window\-end=2000\-01\-02T04:00:00Z \e
\-\-maintenance\-window\-recurrence=FREQ=DAILY
.RE
.RS 2m
.TP 2m
\fB\-\-maintenance\-window\-end\fR=\fITIME_STAMP\fR
The end time for calculating the duration of the maintenance window, as
expressed by the amount of time after the START_TIME, in the same format. The
value for END_TIME must be in the future, relative to START_TIME. This only
calculates the duration of the window, and doesn't set when the maintenance
window stops recurring. Maintenance windows only stop recurring when they're
removed. See $ gcloud topic datetimes for information on time formats.
This flag argument must be specified if any of the other arguments in this group
are specified.
.TP 2m
\fB\-\-maintenance\-window\-recurrence\fR=\fIRRULE\fR
An RFC 5545 RRULE, specifying how the window will recur. Note that minimum
requirements for maintenance periods are enforced, and that FREQ=SECONDLY,
MINUTELY, and HOURLY are not supported.
This flag argument must be specified if any of the other arguments in this group
are specified.
.TP 2m
\fB\-\-maintenance\-window\-start\fR=\fITIME_STAMP\fR
Start time of the first window (can occur in the past). The start time
influences when the window will start for recurrences. See $ gcloud topic
datetimes for information on time formats.
This flag argument must be specified if any of the other arguments in this group
are specified.
.RE
.RE
.sp
.TP 2m
Exports cluster's usage of cloud resources
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-clear\-resource\-usage\-bigquery\-dataset\fR
Disables exporting cluster resource usage to BigQuery.
.TP 2m
\fB\-\-enable\-network\-egress\-metering\fR
Enable network egress metering on this cluster.
When enabled, a DaemonSet is deployed into the cluster. Each DaemonSet pod
meters network egress traffic by collecting data from the conntrack table, and
exports the metered metrics to the specified destination.
Network egress metering is disabled if this flag is omitted, or when
\f5\-\-no\-enable\-network\-egress\-metering\fR is set.
.TP 2m
\fB\-\-enable\-resource\-consumption\-metering\fR
Enable resource consumption metering on this cluster.
When enabled, a table will be created in the specified BigQuery dataset to store
resource consumption data. The resulting table can be joined with the resource
usage table or with BigQuery billing export.
To disable resource consumption metering, set
\f5\-\-no\-enable\-resource\-consumption\-metering\fR. If this flag is omitted,
then resource consumption metering will remain enabled or disabled depending on
what is already configured for this cluster.
.TP 2m
\fB\-\-resource\-usage\-bigquery\-dataset\fR=\fIRESOURCE_USAGE_BIGQUERY_DATASET\fR
The name of the BigQuery dataset to which the cluster's usage of cloud resources
is exported. A table will be created in the specified dataset to store cluster
resource usage. The resulting table can be joined with BigQuery Billing Export
to produce a fine\-grained cost breakdown.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-resource\-usage\-bigquery\-dataset=example_bigquery_dataset_name
.RE
.RE
.sp
.TP 2m
ClusterDNS
.RS 2m
.TP 2m
\fB\-\-cluster\-dns\fR=\fICLUSTER_DNS\fR
DNS provider to use for this cluster. \fICLUSTER_DNS\fR must be one of:
.RS 2m
.TP 2m
\fBclouddns\fR
Selects Cloud DNS as the DNS provider for the cluster.
.TP 2m
\fBdefault\fR
Selects the default DNS provider (kube\-dns) for the cluster.
.TP 2m
\fBkubedns\fR
Selects Kube DNS as the DNS provider for the cluster.
.RE
.sp
.TP 2m
\fB\-\-cluster\-dns\-domain\fR=\fICLUSTER_DNS_DOMAIN\fR
DNS domain for this cluster. The default value is \f5cluster.local\fR. This is
configurable when \f5\-\-cluster\-dns=clouddns\fR and
\f5\-\-cluster\-dns\-scope=vpc\fR are set. The value must be a valid DNS
subdomain as defined in RFC 1123.
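For example, to use Cloud DNS with VPC scope and a custom domain (the cluster
name and domain are illustrative):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-cluster\-dns=clouddns \-\-cluster\-dns\-scope=vpc \e
\-\-cluster\-dns\-domain=example.local
.RE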
.TP 2m
\fB\-\-cluster\-dns\-scope\fR=\fICLUSTER_DNS_SCOPE\fR
DNS scope for the Cloud DNS zone created \- valid only with
\f5\-\-cluster\-dns=clouddns\fR. Defaults to cluster.
\fICLUSTER_DNS_SCOPE\fR must be one of:
.RS 2m
.TP 2m
\fBcluster\fR
Configures the Cloud DNS zone to be private to the cluster.
.TP 2m
\fBvpc\fR
Configures the Cloud DNS zone to be private to the VPC Network.
.RE
.sp
.TP 2m
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-additive\-vpc\-scope\-dns\-domain\fR=\fIADDITIVE_VPC_SCOPE_DNS_DOMAIN\fR
The domain used in Additive VPC scope. Only works with Cluster Scope.
.TP 2m
\fB\-\-disable\-additive\-vpc\-scope\fR
Disables Additive VPC Scope.
.RE
.RE
.sp
.TP 2m
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-dataplane\-v2\-observability\-mode\fR=\fIDATAPLANE_V2_OBSERVABILITY_MODE\fR
(REMOVED) Select Advanced Datapath Observability mode for the cluster. Defaults
to \f5DISABLED\fR.
Advanced Datapath Observability allows for a real\-time view into pod\-to\-pod
traffic within your cluster.
Examples:
.RS 2m
$ gcloud beta container clusters update \e
\-\-dataplane\-v2\-observability\-mode=DISABLED
.RE
.RS 2m
$ gcloud beta container clusters update \e
\-\-dataplane\-v2\-observability\-mode=INTERNAL_VPC_LB
.RE
.RS 2m
$ gcloud beta container clusters update \e
\-\-dataplane\-v2\-observability\-mode=EXTERNAL_LB
.RE
Flag \-\-dataplane\-v2\-observability\-mode has been removed.
\fIDATAPLANE_V2_OBSERVABILITY_MODE\fR must be one of:
.RS 2m
.TP 2m
\fBDISABLED\fR
Disables Advanced Datapath Observability.
.TP 2m
\fBEXTERNAL_LB\fR
Makes Advanced Datapath Observability available to the external network.
.TP 2m
\fBINTERNAL_VPC_LB\fR
Makes Advanced Datapath Observability available from the VPC network.
.RE
.sp
.TP 2m
\fB\-\-disable\-dataplane\-v2\-flow\-observability\fR
Disables Advanced Datapath Observability.
.TP 2m
\fB\-\-enable\-dataplane\-v2\-flow\-observability\fR
Enables Advanced Datapath Observability which allows for a real\-time view into
pod\-to\-pod traffic within your cluster.
.RE
.sp
.TP 2m
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-disable\-dataplane\-v2\-metrics\fR
Stops exposing advanced datapath flow metrics on node port.
.TP 2m
\fB\-\-enable\-dataplane\-v2\-metrics\fR
Exposes advanced datapath flow metrics on node port.
.RE
.sp
.TP 2m
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-disable\-auto\-ipam\fR
Disable the Auto IP Address Management (Auto IPAM) feature for the cluster.
.TP 2m
\fB\-\-enable\-auto\-ipam\fR
Enable the Auto IP Address Management (Auto IPAM) feature for the cluster.
.RE
.sp
.TP 2m
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-disable\-l4\-lb\-firewall\-reconciliation\fR
Disable reconciliation on the cluster for L4 Load Balancer VPC firewalls
targeting ingress traffic.
.TP 2m
\fB\-\-enable\-l4\-lb\-firewall\-reconciliation\fR
Enable reconciliation on the cluster for L4 Load Balancer VPC firewalls
targeting ingress traffic. L4 LB VPC firewall reconciliation is enabled by
default.
.RE
.sp
.TP 2m
\fB\-\-enable\-authorized\-networks\-on\-private\-endpoint\fR
Enable enforcement of \-\-master\-authorized\-networks CIDR ranges for traffic
reaching the cluster's control plane via private IP.
.TP 2m
\fB\-\-enable\-dns\-access\fR
Enable access to the cluster's control plane over the DNS\-based endpoint.
DNS\-based control plane access is recommended.
.TP 2m
\fB\-\-enable\-google\-cloud\-access\fR
When you enable Google Cloud Access, any public IP addresses owned by Google
Cloud can reach the public control plane endpoint of your cluster.
.TP 2m
\fB\-\-enable\-ip\-access\fR
Enable access to the cluster's control plane over private IP and public IP if
\-\-enable\-private\-endpoint is not enabled.
.TP 2m
\fB\-\-enable\-k8s\-certs\-via\-dns\fR
Enable Kubernetes client certificate authentication to the cluster's control
plane over the DNS\-based endpoint.
.TP 2m
\fB\-\-enable\-k8s\-tokens\-via\-dns\fR
Enable Kubernetes Service Account token authentication to the cluster's control
plane over the DNS\-based endpoint.
.TP 2m
\fB\-\-enable\-master\-global\-access\fR
Use with private clusters to allow access to the master's private endpoint from
any Google Cloud region or on\-premises environment regardless of the private
cluster's region.
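For example (the cluster name is illustrative):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-enable\-master\-global\-access
.RE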
.TP 2m
\fB\-\-enable\-private\-endpoint\fR
Enables cluster's control plane to be accessible using private IP address only.
.TP 2m
Master Authorized Networks
.RS 2m
.TP 2m
\fB\-\-enable\-master\-authorized\-networks\fR
Allow only specified set of CIDR blocks (specified by the
\f5\-\-master\-authorized\-networks\fR flag) to connect to Kubernetes master
through HTTPS. Besides these blocks, the following have access as well:
.RS 2m
1) The private network the cluster connects to if
`\-\-enable\-private\-nodes` is specified.
2) Google Compute Engine Public IPs if `\-\-enable\-private\-nodes` is not
specified.
.RE
Use \f5\-\-no\-enable\-master\-authorized\-networks\fR to disable. When
disabled, public internet (0.0.0.0/0) is allowed to connect to Kubernetes master
through HTTPS.
.TP 2m
\fB\-\-master\-authorized\-networks\fR=\fINETWORK\fR,[\fINETWORK\fR,...]
The list of CIDR blocks (up to 100 for private cluster, 50 for public cluster)
that are allowed to connect to Kubernetes master through HTTPS. Specified in
CIDR notation (e.g. 1.2.3.4/30). Cannot be specified unless
\f5\-\-enable\-master\-authorized\-networks\fR is also specified.
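For example, to allow two CIDR blocks (the cluster name and ranges are
illustrative):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-enable\-master\-authorized\-networks \e
\-\-master\-authorized\-networks=192.168.100.0/24,10.0.0.0/8
.RE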
.RE
.sp
.TP 2m
Node autoprovisioning
.RS 2m
.TP 2m
\fB\-\-enable\-autoprovisioning\fR
Enables node autoprovisioning for a cluster.
Cluster Autoscaler will be able to create new node pools. Requires maximum CPU
and memory limits to be specified.
.TP 2m
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-autoprovisioning\-config\-file\fR=\fIPATH_TO_FILE\fR
Path of the JSON/YAML file which contains information about the cluster's node
autoprovisioning configuration. Currently it contains a list of resource limits,
identity defaults for autoprovisioning, node upgrade settings, node management
settings, minimum cpu platform, image type, node locations for autoprovisioning,
disk type and size configuration, Shielded instance settings, and
customer\-managed encryption keys settings.
Resource limits are specified in the field 'resourceLimits'. Each resource
limits definition contains three fields: resourceType, maximum and minimum.
Resource type can be "cpu", "memory" or an accelerator (e.g. "nvidia\-tesla\-t4"
for NVIDIA T4). Use gcloud compute accelerator\-types list to learn about
available accelerator types. Maximum is the maximum allowed amount with the unit
of the resource. Minimum is the minimum allowed amount with the unit of the
resource.
Identity default contains at most one of the below fields: serviceAccount: The
Google Cloud Platform Service Account to be used by node VMs in autoprovisioned
node pools. If not specified, the project's default service account is used.
scopes: A list of scopes to be used by node instances in autoprovisioned node
pools. Multiple scopes can be specified, separated by commas. For information on
defaults, look at:
https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#\-\-scopes
Node Upgrade settings are specified under the field 'upgradeSettings', which has
the following fields: maxSurgeUpgrade: Number of extra (surge) nodes to be
created on each upgrade of an autoprovisioned node pool. maxUnavailableUpgrade:
Number of nodes that can be unavailable at the same time on each upgrade of an
autoprovisioned node pool.
Node Management settings are specified under the field 'management', which has
the following fields: autoUpgrade: A boolean field that indicates if node
autoupgrade is enabled for autoprovisioned node pools. autoRepair: A boolean
field that indicates if node autorepair is enabled for autoprovisioned node
pools.
minCpuPlatform (deprecated): If specified, new autoprovisioned nodes will be
scheduled on host with specified CPU architecture or a newer one. Note: Min CPU
platform can only be specified in Beta and Alpha.
Autoprovisioned node image is specified under the 'imageType' field. If not
specified the default value will be applied.
Autoprovisioning locations is a set of zones where new node pools can be created
by Autoprovisioning. Autoprovisioning locations are specified in the field
\'autoprovisioningLocations'. All zones must be in the same region as the
cluster's master(s).
Disk type and size are specified under the 'diskType' and 'diskSizeGb' fields,
respectively. If specified, new autoprovisioned nodes will be created with
custom boot disks configured by these settings.
Shielded instance settings are specified under the 'shieldedInstanceConfig'
field, which has the following fields: enableSecureBoot: A boolean field that
indicates if secure boot is enabled for autoprovisioned nodes.
enableIntegrityMonitoring: A boolean field that indicates if integrity
monitoring is enabled for autoprovisioned nodes.
Customer Managed Encryption Keys (CMEK) used by new auto\-provisioned node pools
can be specified in the 'bootDiskKmsKey' field.
Use a full or relative path to a local file containing the value of
autoprovisioning_config_file.
.TP 2m
Flags to configure autoprovisioned nodes
.RS 2m
.TP 2m
\fB\-\-autoprovisioning\-image\-type\fR=\fIAUTOPROVISIONING_IMAGE_TYPE\fR
Node Autoprovisioning will create new nodes with the specified image type
.TP 2m
\fB\-\-autoprovisioning\-locations\fR=\fIZONE\fR,[\fIZONE\fR,...]
Set of zones where new node pools can be created by autoprovisioning. All zones
must be in the same region as the cluster's master(s). Multiple locations can be
specified, separated by commas.
.TP 2m
\fB\-\-autoprovisioning\-min\-cpu\-platform\fR=\fIPLATFORM\fR
(DEPRECATED) If specified, new autoprovisioned nodes will be scheduled on host
with specified CPU architecture or a newer one.
The \f5\-\-autoprovisioning\-min\-cpu\-platform\fR flag is deprecated and will
be removed in an upcoming release. More info:
https://cloud.google.com/kubernetes\-engine/docs/release\-notes#March_08_2022
.TP 2m
\fB\-\-max\-cpu\fR=\fIMAX_CPU\fR
Maximum number of cores in the cluster.
Maximum number of cores to which the cluster can scale.
.TP 2m
\fB\-\-max\-memory\fR=\fIMAX_MEMORY\fR
Maximum memory in the cluster.
Maximum number of gigabytes of memory to which the cluster can scale.
.TP 2m
\fB\-\-min\-cpu\fR=\fIMIN_CPU\fR
Minimum number of cores to which the cluster can scale.
.TP 2m
\fB\-\-min\-memory\fR=\fIMIN_MEMORY\fR
Minimum number of gigabytes of memory to which the cluster can scale.
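For example, to set CPU and memory limits for autoprovisioning (the flag
values are illustrative; these limits are typically set together with
\f5\-\-enable\-autoprovisioning\fR):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-enable\-autoprovisioning \-\-min\-cpu=1 \-\-max\-cpu=24 \e
\-\-min\-memory=4 \-\-max\-memory=96
.RE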
.TP 2m
Flags to specify upgrade settings for autoprovisioned nodes:
.RS 2m
.TP 2m
\fB\-\-autoprovisioning\-max\-surge\-upgrade\fR=\fIAUTOPROVISIONING_MAX_SURGE_UPGRADE\fR
Number of extra (surge) nodes to be created on each upgrade of an
autoprovisioned node pool.
.TP 2m
\fB\-\-autoprovisioning\-max\-unavailable\-upgrade\fR=\fIAUTOPROVISIONING_MAX_UNAVAILABLE_UPGRADE\fR
Number of nodes that can be unavailable at the same time on each upgrade of an
autoprovisioned node pool.
.TP 2m
\fB\-\-autoprovisioning\-node\-pool\-soak\-duration\fR=\fIAUTOPROVISIONING_NODE_POOL_SOAK_DURATION\fR
Time in seconds to be spent waiting during blue\-green upgrade before deleting
the blue pool and completing the update. This argument should be used in
conjunction with \f5\-\-enable\-autoprovisioning\-blue\-green\-upgrade\fR to
take effect.
.TP 2m
\fB\-\-autoprovisioning\-standard\-rollout\-policy\fR=[\fIbatch\-node\-count\fR=\fIBATCH_NODE_COUNT\fR,\fIbatch\-percent\fR=\fIBATCH_NODE_PERCENTAGE\fR,\fIbatch\-soak\-duration\fR=\fIBATCH_SOAK_DURATION\fR,...]
Standard rollout policy options for blue\-green upgrade. This argument should be
used in conjunction with
\f5\-\-enable\-autoprovisioning\-blue\-green\-upgrade\fR to take effect.
Batch sizes are specified by one of batch\-node\-count or batch\-percent. The
duration between batches is specified by batch\-soak\-duration.
Examples:
\f5\-\-autoprovisioning\-standard\-rollout\-policy=batch\-node\-count=3,batch\-soak\-duration=60s\fR
\f5\-\-autoprovisioning\-standard\-rollout\-policy=batch\-percent=0.05,batch\-soak\-duration=180s\fR
.TP 2m
Flag group to choose the top level upgrade option:
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-enable\-autoprovisioning\-blue\-green\-upgrade\fR
Whether to use blue\-green upgrade for the autoprovisioned node pool.
.TP 2m
\fB\-\-enable\-autoprovisioning\-surge\-upgrade\fR
Whether to use surge upgrade for the autoprovisioned node pool.
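For example, to switch autoprovisioned node pools to surge upgrades (the
values here are illustrative):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-enable\-autoprovisioning\-surge\-upgrade \e
\-\-autoprovisioning\-max\-surge\-upgrade=2 \e
\-\-autoprovisioning\-max\-unavailable\-upgrade=0
.RE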
.RE
.RE
.sp
.TP 2m
Flags to specify identity for autoprovisioned nodes:
.RS 2m
.TP 2m
\fB\-\-autoprovisioning\-scopes\fR=[\fISCOPE\fR,...]
The scopes to be used by node instances in autoprovisioned node pools. Multiple
scopes can be specified, separated by commas. For information on defaults, look
at:
https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#\-\-scopes
.TP 2m
\fB\-\-autoprovisioning\-service\-account\fR=\fIAUTOPROVISIONING_SERVICE_ACCOUNT\fR
The Google Cloud Platform Service Account to be used by node VMs in
autoprovisioned node pools. If not specified, the project default service
account is used.
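For example, to run autoprovisioned nodes as a dedicated service account (the
account and project names are illustrative):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-autoprovisioning\-service\-account=nap\-sa@my\-project.iam.gserviceaccount.com \e
\-\-autoprovisioning\-scopes=https://www.googleapis.com/auth/cloud\-platform
.RE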
.RE
.sp
.TP 2m
Flags to specify node management settings for autoprovisioned nodes:
.RS 2m
.TP 2m
\fB\-\-enable\-autoprovisioning\-autorepair\fR
Enable node autorepair for autoprovisioned node pools. Use
\-\-no\-enable\-autoprovisioning\-autorepair to disable.
This flag argument must be specified if any of the other arguments in this group
are specified.
.TP 2m
\fB\-\-enable\-autoprovisioning\-autoupgrade\fR
Enable node autoupgrade for autoprovisioned node pools. Use
\-\-no\-enable\-autoprovisioning\-autoupgrade to disable.
This flag argument must be specified if any of the other arguments in this group
are specified.
.RE
.sp
.TP 2m
Arguments to set limits on accelerators:
.RS 2m
.TP 2m
\fB\-\-max\-accelerator\fR=[\fItype\fR=\fITYPE\fR,\fIcount\fR=\fICOUNT\fR,...]
Sets the maximum limit for a single type of accelerator (e.g. GPUs) in the
cluster.
.RS 2m
.TP 2m
\fBtype\fR
(Required) The specific type (e.g. nvidia\-tesla\-t4 for NVIDIA T4) of
accelerator for which the limit is set. Use \f5gcloud compute accelerator\-types
list\fR to learn about all available accelerator types.
.TP 2m
\fBcount\fR
(Required) The maximum number of accelerators to which the cluster can be
scaled.
This flag argument must be specified if any of the other arguments in this group
are specified.
.RE
.sp
.TP 2m
\fB\-\-min\-accelerator\fR=[\fItype\fR=\fITYPE\fR,\fIcount\fR=\fICOUNT\fR,...]
Sets the minimum limit for a single type of accelerator (e.g. GPUs) in the
cluster. Defaults to 0 for all accelerator types if not set.
.RS 2m
.TP 2m
\fBtype\fR
(Required) The specific type (e.g. nvidia\-tesla\-t4 for NVIDIA T4) of
accelerator for which the limit is set. Use \f5gcloud compute accelerator\-types
list\fR to learn about all available accelerator types.
.TP 2m
\fBcount\fR
(Required) The minimum number of accelerators to which the cluster can be
scaled.
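For example, to allow the cluster to scale between 1 and 4 NVIDIA T4 GPUs (the
accelerator type and counts are illustrative):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-max\-accelerator=type=nvidia\-tesla\-t4,count=4 \e
\-\-min\-accelerator=type=nvidia\-tesla\-t4,count=1
.RE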
.RE
.RE
.RE
.RE
.RE
.sp
.TP 2m
\fB\-\-enable\-insecure\-binding\-system\-authenticated\fR
Allow using \f5system:authenticated\fR as a subject in ClusterRoleBindings and
RoleBindings. Allowing bindings that reference \f5system:authenticated\fR is a
security risk and is not recommended.
To disallow binding \f5system:authenticated\fR in a cluster, explicitly set the
\f5\-\-no\-enable\-insecure\-binding\-system\-authenticated\fR flag instead.
.TP 2m
\fB\-\-enable\-insecure\-binding\-system\-unauthenticated\fR
Allow using \f5system:unauthenticated\fR and \f5system:anonymous\fR as subjects
in ClusterRoleBindings and RoleBindings. Allowing bindings that reference
\f5system:unauthenticated\fR and \f5system:anonymous\fR is a security risk and
is not recommended.
To disallow binding \f5system:unauthenticated\fR and \f5system:anonymous\fR in
a cluster, explicitly set the
\f5\-\-no\-enable\-insecure\-binding\-system\-unauthenticated\fR flag instead.
.TP 2m
\fB\-\-logging\-service\fR=\fILOGGING_SERVICE\fR
(DEPRECATED) Logging service to use for the cluster. Options are:
"logging.googleapis.com/kubernetes" (the Google Cloud Logging service with
Kubernetes\-native resource model enabled), "logging.googleapis.com" (the Google
Cloud Logging service), "none" (logs will not be exported from the cluster).
The \f5\-\-logging\-service\fR flag is deprecated and will be removed in an
upcoming release. Please use \f5\-\-logging\fR instead. For more information,
please read:
https://cloud.google.com/kubernetes\-engine/docs/concepts/about\-logs.
.TP 2m
\fB\-\-monitoring\-service\fR=\fIMONITORING_SERVICE\fR
(DEPRECATED) Monitoring service to use for the cluster. Options are:
"monitoring.googleapis.com/kubernetes" (the Google Cloud Monitoring service with
Kubernetes\-native resource model enabled), "monitoring.googleapis.com" (the
Google Cloud Monitoring service), "none" (no metrics will be exported from the
cluster).
The \f5\-\-monitoring\-service\fR flag is deprecated and will be removed in an
upcoming release. Please use \f5\-\-monitoring\fR instead. For more information,
please read:
https://cloud.google.com/kubernetes\-engine/docs/how\-to/configure\-metrics.
.TP 2m
Flags for Secret Manager configuration:
.RS 2m
.TP 2m
\fB\-\-[no\-]enable\-secret\-manager\fR
Enables the Secret Manager CSI driver provider component. See
https://secrets\-store\-csi\-driver.sigs.k8s.io/introduction and
https://github.com/GoogleCloudPlatform/secrets\-store\-csi\-driver\-provider\-gcp.
Use \fB\-\-enable\-secret\-manager\fR to enable and
\fB\-\-no\-enable\-secret\-manager\fR to disable.
.TP 2m
\fB\-\-[no\-]enable\-secret\-manager\-rotation\fR
Enables the rotation of secrets in the Secret Manager CSI driver provider
component. Use \fB\-\-enable\-secret\-manager\-rotation\fR to enable and
\fB\-\-no\-enable\-secret\-manager\-rotation\fR to disable.
.TP 2m
\fB\-\-secret\-manager\-rotation\-interval\fR=\fISECRET_MANAGER_ROTATION_INTERVAL\fR
Set the rotation period for secrets in the Secret Manager CSI driver provider
component. If you don't specify a time interval for the rotation, it will
default to a rotation period of two minutes.
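For example, to enable the provider component with a custom rotation period
(the interval value is illustrative and assumes the standard gcloud duration
format):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-enable\-secret\-manager \-\-enable\-secret\-manager\-rotation \e
\-\-secret\-manager\-rotation\-interval=5m
.RE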
.RE
.sp
.TP 2m
Flags for Secret Sync configuration:
.RS 2m
.TP 2m
\fB\-\-[no\-]enable\-secret\-sync\fR
Enables the Secret Sync component. See
https://cloud.google.com/secret\-manager/docs/sync\-k8\-secrets. Use
\fB\-\-enable\-secret\-sync\fR to enable and \fB\-\-no\-enable\-secret\-sync\fR
to disable.
.TP 2m
\fB\-\-[no\-]enable\-secret\-sync\-rotation\fR
Enables the rotation of secrets in the Secret Sync component. Use
\fB\-\-enable\-secret\-sync\-rotation\fR to enable and
\fB\-\-no\-enable\-secret\-sync\-rotation\fR to disable.
.TP 2m
\fB\-\-secret\-sync\-rotation\-interval\fR=\fISECRET_SYNC_ROTATION_INTERVAL\fR
Set the rotation period for secrets in the Secret Sync component.
.RE
.sp
.TP 2m
Basic auth
.RS 2m
.TP 2m
\fB\-\-password\fR=\fIPASSWORD\fR
The password to use for cluster auth. Defaults to a server\-specified
randomly\-generated string.
.TP 2m
Options to specify the username.
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-enable\-basic\-auth\fR
Enable basic (username/password) auth for the cluster.
\f5\-\-enable\-basic\-auth\fR is an alias for \f5\-\-username=admin\fR;
\f5\-\-no\-enable\-basic\-auth\fR is an alias for \f5\-\-username=""\fR. Use
\f5\-\-password\fR to specify a password; if not, the server will randomly
generate one. For cluster versions before 1.12, if neither
\f5\-\-enable\-basic\-auth\fR nor \f5\-\-username\fR is specified,
\f5\-\-enable\-basic\-auth\fR will default to \f5true\fR. After 1.12,
\f5\-\-enable\-basic\-auth\fR will default to \f5false\fR.
.TP 2m
\fB\-\-username\fR=\fIUSERNAME\fR, \fB\-u\fR \fIUSERNAME\fR
The user name to use for basic auth for the cluster. Use \f5\-\-password\fR to
specify a password; if not, the server will randomly generate one.
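For example, to turn off basic auth for a cluster (equivalent to setting an
empty username; the cluster name is illustrative):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-no\-enable\-basic\-auth
.RE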
.RE
.RE
.RE
.RE
.sp
.SH "OPTIONAL FLAGS"
.RS 2m
.TP 2m
\fB\-\-async\fR
Return immediately, without waiting for the operation in progress to complete.
.TP 2m
\fB\-\-cloud\-run\-config\fR=[\fIload\-balancer\-type\fR=\fIEXTERNAL\fR,...]
Configurations for the Cloud Run addon; requires \f5\-\-addons=CloudRun\fR for
create and \f5\-\-update\-addons=CloudRun=ENABLED\fR for update.
.RS 2m
.TP 2m
\fBload\-balancer\-type\fR
(Optional) Type of load balancer: EXTERNAL or INTERNAL.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-cloud\-run\-config=load\-balancer\-type=INTERNAL
.RE
.RE
.sp
.TP 2m
\fB\-\-istio\-config\fR=[\fIauth\fR=\fIMTLS_PERMISSIVE\fR,...]
(REMOVED) Configurations for the Istio addon; requires \-\-addons to contain
Istio for create, or \-\-update\-addons=Istio=ENABLED for update.
.RS 2m
.TP 2m
\fBauth\fR
(Optional) Type of auth: MTLS_PERMISSIVE or MTLS_STRICT.
Examples:
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-istio\-config=auth=MTLS_PERMISSIVE
.RE
The \f5\-\-istio\-config\fR flag is no longer supported. For more information
and migration, see
https://cloud.google.com/istio/docs/istio\-on\-gke/migrate\-to\-anthos\-service\-mesh.
.RE
.sp
.TP 2m
\fB\-\-node\-pool\fR=\fINODE_POOL\fR
Node pool to be updated.
.TP 2m
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-location\fR=\fILOCATION\fR
Compute zone or region (e.g. us\-central1\-a or us\-central1) for the cluster.
Overrides the default compute/region or compute/zone value for this command
invocation. Prefer using this flag over the \-\-region or \-\-zone flags.
.TP 2m
\fB\-\-region\fR=\fIREGION\fR
Compute region (e.g. us\-central1) for a regional cluster. Overrides the default
compute/region property value for this command invocation.
.TP 2m
\fB\-\-zone\fR=\fIZONE\fR, \fB\-z\fR \fIZONE\fR
Compute zone (e.g. us\-central1\-a) for a zonal cluster. Overrides the default
compute/zone property value for this command invocation.
.RE
.sp
.TP 2m
Cluster autoscaling
.RS 2m
.TP 2m
\fB\-\-location\-policy\fR=\fILOCATION_POLICY\fR
Location policy specifies the algorithm used when scaling up the node pool.
.RS 2m
.IP "\(bu" 2m
\f5BALANCED\fR \- A best\-effort policy that aims to balance the sizes of
available zones.
.IP "\(bu" 2m
\f5ANY\fR \- Instructs the cluster autoscaler to prioritize utilization of
unused reservations, and reduces preemption risk for Spot VMs.
.RE
.sp
\fILOCATION_POLICY\fR must be one of: \fBBALANCED\fR, \fBANY\fR.
.TP 2m
\fB\-\-max\-nodes\fR=\fIMAX_NODES\fR
Maximum number of nodes per zone to which the node pool specified by
\-\-node\-pool (or the default node pool if unspecified) can scale. Ignored
unless \-\-enable\-autoscaling is also specified.
.TP 2m
\fB\-\-min\-nodes\fR=\fIMIN_NODES\fR
Minimum number of nodes per zone to which the node pool specified by
\-\-node\-pool (or the default node pool if unspecified) can scale. Ignored
unless \-\-enable\-autoscaling is also specified.
.TP 2m
\fB\-\-total\-max\-nodes\fR=\fITOTAL_MAX_NODES\fR
Maximum number of all nodes to which the node pool specified by \-\-node\-pool
(or the default node pool if unspecified) can scale. Ignored unless
\-\-enable\-autoscaling is also specified.
.TP 2m
\fB\-\-total\-min\-nodes\fR=\fITOTAL_MIN_NODES\fR
Minimum number of all nodes to which the node pool specified by \-\-node\-pool
(or the default node pool if unspecified) can scale. Ignored unless
\-\-enable\-autoscaling is also specified.
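For example, to enable per\-zone autoscaling bounds on a specific node pool
(the pool name and bounds are illustrative):
.RS 2m
$ gcloud beta container clusters update example\-cluster \e
\-\-enable\-autoscaling \-\-node\-pool=default\-pool \e
\-\-min\-nodes=1 \-\-max\-nodes=5
.RE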
.RE
.RE
.sp
.SH "GCLOUD WIDE FLAGS"
These flags are available to all commands: \-\-access\-token\-file, \-\-account,
\-\-billing\-project, \-\-configuration, \-\-flags\-file, \-\-flatten,
\-\-format, \-\-help, \-\-impersonate\-service\-account, \-\-log\-http,
\-\-project, \-\-quiet, \-\-trace\-token, \-\-user\-output\-enabled,
\-\-verbosity.
Run \fB$ gcloud help\fR for details.
.SH "NOTES"
This command is currently in beta and might change without notice. These
variants are also available:
.RS 2m
$ gcloud container clusters update
.RE
.RS 2m
$ gcloud alpha container clusters update
.RE