.TH "GCLOUD_BETA_CONTAINER_NODE\-POOLS_UPDATE" 1



.SH "NAME"
.HP
gcloud beta container node\-pools update \- updates a node pool in a running cluster



.SH "SYNOPSIS"
.HP
\f5gcloud beta container node\-pools update\fR \fINAME\fR (\fB\-\-accelerator\fR=[\fItype\fR=\fITYPE\fR,[\fIcount\fR=\fICOUNT\fR,\fIgpu\-driver\-version\fR=\fIGPU_DRIVER_VERSION\fR,\fIgpu\-partition\-size\fR=\fIGPU_PARTITION_SIZE\fR,\fIgpu\-sharing\-strategy\fR=\fIGPU_SHARING_STRATEGY\fR,\fImax\-shared\-clients\-per\-gpu\fR=\fIMAX_SHARED_CLIENTS_PER_GPU\fR],...]\ |\ \fB\-\-confidential\-node\-type\fR=\fICONFIDENTIAL_NODE_TYPE\fR\ |\ \fB\-\-containerd\-config\-from\-file\fR=\fIPATH_TO_FILE\fR\ |\ \fB\-\-enable\-confidential\-nodes\fR\ |\ \fB\-\-enable\-gvnic\fR\ |\ \fB\-\-enable\-image\-streaming\fR\ |\ \fB\-\-enable\-insecure\-kubelet\-readonly\-port\fR\ |\ \fB\-\-enable\-private\-nodes\fR\ |\ \fB\-\-enable\-queued\-provisioning\fR\ |\ \fB\-\-flex\-start\fR\ |\ \fB\-\-labels\fR=[\fIKEY\fR=\fIVALUE\fR,...]\ |\ \fB\-\-logging\-variant\fR=\fILOGGING_VARIANT\fR\ |\ \fB\-\-max\-run\-duration\fR=\fIMAX_RUN_DURATION\fR\ |\ \fB\-\-network\-performance\-configs\fR=[\fIPROPERTY\fR=\fIVALUE\fR,...]\ |\ \fB\-\-node\-labels\fR=[\fINODE_LABEL\fR,...]\ |\ \fB\-\-node\-locations\fR=\fIZONE\fR,[\fIZONE\fR,...]\ |\ \fB\-\-node\-taints\fR=[\fINODE_TAINT\fR,...]\ |\ \fB\-\-resource\-manager\-tags\fR=[\fIKEY\fR=\fIVALUE\fR,...]\ |\ \fB\-\-storage\-pools\fR=\fISTORAGE_POOL\fR,[...]\ |\ \fB\-\-system\-config\-from\-file\fR=\fIPATH_TO_FILE\fR\ |\ \fB\-\-tags\fR=[\fITAG\fR,...]\ |\ \fB\-\-windows\-os\-version\fR=\fIWINDOWS_OS_VERSION\fR\ |\ \fB\-\-workload\-metadata\fR=\fIWORKLOAD_METADATA\fR\ |\ \fB\-\-autoscaled\-rollout\-policy\fR=[\fIwait\-for\-drain\-duration\fR=\fIWAIT\-FOR\-DRAIN\-DURATION\fR]\ \fB\-\-enable\-blue\-green\-upgrade\fR\ \fB\-\-enable\-surge\-upgrade\fR\ \fB\-\-max\-surge\-upgrade\fR=\fIMAX_SURGE_UPGRADE\fR\ \fB\-\-max\-unavailable\-upgrade\fR=\fIMAX_UNAVAILABLE_UPGRADE\fR\ \fB\-\-node\-pool\-soak\-duration\fR=\fINODE_POOL_SOAK_DURATION\fR\ \fB\-\-standard\-rollout\-policy\fR=[\fIbatch\-node\-count\fR=\fIBATCH_NODE_COUNT\fR,\fIbatch\-percent\fR=\fIBATCH_NODE_PERCENTAGE\fR,\fIbatch\-soak\-duration\fR=\fIBATCH_SOAK_DURATION\fR,...]\ |\ \fB\-\-boot\-disk\-provisioned\-iops\fR=\fIBOOT_DISK_PROVISIONED_IOPS\fR\ \fB\-\-boot\-disk\-provisioned\-throughput\fR=\fIBOOT_DISK_PROVISIONED_THROUGHPUT\fR\ \fB\-\-disk\-size\fR=\fIDISK_SIZE\fR\ \fB\-\-disk\-type\fR=\fIDISK_TYPE\fR\ \fB\-\-machine\-type\fR=\fIMACHINE_TYPE\fR\ |\ \fB\-\-enable\-autoprovisioning\fR\ \fB\-\-enable\-autoscaling\fR\ \fB\-\-location\-policy\fR=\fILOCATION_POLICY\fR\ \fB\-\-max\-nodes\fR=\fIMAX_NODES\fR\ \fB\-\-min\-nodes\fR=\fIMIN_NODES\fR\ \fB\-\-total\-max\-nodes\fR=\fITOTAL_MAX_NODES\fR\ \fB\-\-total\-min\-nodes\fR=\fITOTAL_MIN_NODES\fR\ |\ \fB\-\-enable\-autorepair\fR\ \fB\-\-enable\-autoupgrade\fR) [\fB\-\-async\fR] [\fB\-\-cluster\fR=\fICLUSTER\fR] [\fB\-\-location\fR=\fILOCATION\fR\ |\ \fB\-\-region\fR=\fIREGION\fR\ |\ \fB\-\-zone\fR=\fIZONE\fR,\ \fB\-z\fR\ \fIZONE\fR] [\fIGCLOUD_WIDE_FLAG\ ...\fR]



.SH "DESCRIPTION"

\fB(BETA)\fR \fBgcloud beta container node\-pools update\fR updates a node pool
in a Google Kubernetes Engine cluster.



.SH "EXAMPLES"

To turn on node autoupgrade in "node\-pool\-1" in the cluster "sample\-cluster",
run:

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=sample\-cluster \-\-enable\-autoupgrade
.RE



.SH "POSITIONAL ARGUMENTS"

.RS 2m
.TP 2m
\fINAME\fR

The name of the node pool.


.RE
.sp

.SH "REQUIRED FLAGS"

.RS 2m
.TP 2m

Exactly one of these must be specified:


.RS 2m
.TP 2m
\fB\-\-accelerator\fR=[\fItype\fR=\fITYPE\fR,[\fIcount\fR=\fICOUNT\fR,\fIgpu\-driver\-version\fR=\fIGPU_DRIVER_VERSION\fR,\fIgpu\-partition\-size\fR=\fIGPU_PARTITION_SIZE\fR,\fIgpu\-sharing\-strategy\fR=\fIGPU_SHARING_STRATEGY\fR,\fImax\-shared\-clients\-per\-gpu\fR=\fIMAX_SHARED_CLIENTS_PER_GPU\fR],...]

Attaches accelerators (e.g. GPUs) to all nodes.

.RS 2m
.TP 2m
\fBtype\fR
(Required) The specific type (e.g. nvidia\-tesla\-t4 for NVIDIA T4) of
accelerator to attach to the instances. Use \f5gcloud compute accelerator\-types
list\fR to learn about all available accelerator types.

.TP 2m
\fBcount\fR
(Optional) The number of accelerators to attach to the instances. The default
value is 1.

.TP 2m
\fBgpu\-driver\-version\fR
(Optional) The NVIDIA driver version to install. GPU_DRIVER_VERSION must be one
of:

.RS 2m
`default`: Install the default driver version for this GKE version. For GKE version 1.30.1\-gke.1156000 and later, this is the default option.
.RE

.RS 2m
`latest`: Install the latest driver version available for this GKE version.
Can only be used for nodes that use Container\-Optimized OS.
.RE

.RS 2m
`disabled`: Skip automatic driver installation. You must manually install a
driver after you create the cluster. For GKE versions earlier than 1.30.1\-gke.1156000, this is the default option.
To manually install the GPU driver, refer to https://cloud.google.com/kubernetes\-engine/docs/how\-to/gpus#installing_drivers.
.RE

.TP 2m
\fBgpu\-partition\-size\fR
(Optional) The GPU partition size used when running multi\-instance GPUs. For
information about multi\-instance GPUs, refer to:
https://cloud.google.com/kubernetes\-engine/docs/how\-to/gpus\-multi

.TP 2m
\fBgpu\-sharing\-strategy\fR
(Optional) The GPU sharing strategy (e.g. time\-sharing) to use. For information
about GPU sharing, refer to:
https://cloud.google.com/kubernetes\-engine/docs/concepts/timesharing\-gpus

.TP 2m
\fBmax\-shared\-clients\-per\-gpu\fR
(Optional) The max number of containers allowed to share each GPU on the node.
This field is used together with \f5gpu\-sharing\-strategy\fR.

.RE
.sp
.TP 2m
\fB\-\-confidential\-node\-type\fR=\fICONFIDENTIAL_NODE_TYPE\fR

Recreates all the nodes in the node pool as confidential VMs
(https://cloud.google.com/compute/confidential\-vm/docs/about\-cvm).
\fICONFIDENTIAL_NODE_TYPE\fR must be one of: \fBsev\fR, \fBsev_snp\fR,
\fBtdx\fR, \fBdisabled\fR.
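
For example, to switch the node pool to SEV confidential nodes (cluster and
node pool names are illustrative):

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-confidential\-node\-type=sev
.RE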

.TP 2m
\fB\-\-containerd\-config\-from\-file\fR=\fIPATH_TO_FILE\fR

Path of the YAML file that contains containerd configuration entries, such as
configuration for access to private image registries.

For detailed information on the configuration usage, please refer to
https://cloud.google.com/kubernetes\-engine/docs/how\-to/customize\-containerd\-configuration.

Note: Updating the containerd configuration of an existing cluster or node pool
requires recreation of the existing nodes, which might cause disruptions in
running workloads.

Use a full or relative path to a local file containing the value of
containerd_config.
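
For example, assuming the desired configuration entries are saved in a local
file named containerd\-config.yaml:

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \e
    \-\-containerd\-config\-from\-file=containerd\-config.yaml
.RE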

.TP 2m
\fB\-\-enable\-confidential\-nodes\fR

Recreates all the nodes in the node pool as confidential VMs
(https://cloud.google.com/compute/confidential\-vm/docs/about\-cvm).

.TP 2m
\fB\-\-enable\-gvnic\fR

Enables the use of GVNIC for this cluster. Requires re\-creation of nodes using
either a node\-pool upgrade or node\-pool creation.
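
For example (cluster and node pool names are illustrative):

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-enable\-gvnic
.RE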

.TP 2m
\fB\-\-enable\-image\-streaming\fR

Specifies whether to enable image streaming on the node pool.
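
For example (names are illustrative):

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-enable\-image\-streaming
.RE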

.TP 2m
\fB\-\-enable\-insecure\-kubelet\-readonly\-port\fR

Enables the Kubelet's insecure read\-only port.

To disable the read\-only port on a cluster or node pool, set the flag to
\f5\-\-no\-enable\-insecure\-kubelet\-readonly\-port\fR.
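
For example, to disable the read\-only port on an existing node pool (names
are illustrative):

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \e
    \-\-no\-enable\-insecure\-kubelet\-readonly\-port
.RE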

.TP 2m
\fB\-\-enable\-private\-nodes\fR

Enables provisioning nodes with private IP addresses only.

The control plane still communicates with all nodes through private IP addresses
only, regardless of whether private nodes are enabled or disabled.
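
For example (names are illustrative):

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-enable\-private\-nodes
.RE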

.TP 2m
\fB\-\-enable\-queued\-provisioning\fR

Marks the node pool as Queued only. This means that all new nodes can be
obtained only through queuing via the ProvisioningRequest API.

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-enable\-queued\-provisioning
... and other required parameters. For more details, see:
https://cloud.google.com/kubernetes\-engine/docs/how\-to/provisioningrequest
.RE

.TP 2m
\fB\-\-flex\-start\fR

Start the node pool with Flex Start provisioning model.

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
     \-\-cluster=example\-cluster \-\-flex\-start
... and other required parameters. For more details, see:
https://cloud.google.com/kubernetes\-engine/docs/how\-to/provisioningrequest
.RE

.TP 2m
\fB\-\-labels\fR=[\fIKEY\fR=\fIVALUE\fR,...]

Labels to apply to the Google Cloud resources of node pools in the Kubernetes
Engine cluster. These are unrelated to Kubernetes labels. Warning: Updating
these labels causes the node(s) to be recreated.

Examples:

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-labels=label1=value1,label2=value2
.RE

.TP 2m
\fB\-\-logging\-variant\fR=\fILOGGING_VARIANT\fR

Specifies the logging variant that will be deployed on all the nodes in the node
pool. If the node pool doesn't specify a logging variant, then the logging
variant specified for the cluster will be deployed on all the nodes in the node
pool. Valid logging variants are \f5MAX_THROUGHPUT\fR, \f5DEFAULT\fR.
\fILOGGING_VARIANT\fR must be one of:

.RS 2m
.TP 2m
\fBDEFAULT\fR
\'DEFAULT' variant requests minimal resources but may not guarantee high
throughput.
.TP 2m
\fBMAX_THROUGHPUT\fR
\'MAX_THROUGHPUT' variant requests more node resources and is able to achieve
logging throughput up to 10MB per sec.
.RE
.sp
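For example, to request the higher\-throughput logging agent (names are
illustrative):

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-logging\-variant=MAX_THROUGHPUT
.RE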


.TP 2m
\fB\-\-max\-run\-duration\fR=\fIMAX_RUN_DURATION\fR

Limit the runtime of each node in the node pool to the specified duration.

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-max\-run\-duration=3600s
.RE

.TP 2m
\fB\-\-network\-performance\-configs\fR=[\fIPROPERTY\fR=\fIVALUE\fR,...]

Configures network performance settings for the node pool. If this flag is not
specified, the pool will be created with its default network performance
configuration.

.RS 2m
.TP 2m
\fBtotal\-egress\-bandwidth\-tier\fR
Total egress bandwidth is the available outbound bandwidth from a VM, regardless
of whether the traffic is going to internal IP or external IP destinations. The
following tier values are allowed: [TIER_UNSPECIFIED,TIER_1]

.RE
.sp
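For example, to request Tier 1 total egress bandwidth (names are
illustrative):

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \e
    \-\-network\-performance\-configs=total\-egress\-bandwidth\-tier=TIER_1
.RE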
.TP 2m
\fB\-\-node\-labels\fR=[\fINODE_LABEL\fR,...]

Replaces all the user specified Kubernetes labels on all nodes in an existing
node pool with the given labels.

Examples:

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \e
    \-\-node\-labels=label1=value1,label2=value2
.RE

Updating the node pool's \-\-node\-labels flag applies the labels to the
Kubernetes Node objects for existing nodes in\-place; it does not re\-create or
replace nodes. New nodes, including ones created by resizing or re\-creating
nodes, will have these labels on the Kubernetes API Node object. The labels can
be used in the \f5nodeSelector\fR field. See
https://kubernetes.io/docs/concepts/scheduling\-eviction/assign\-pod\-node/ for
examples.

Note that Kubernetes labels, intended to associate cluster components and
resources with one another and manage resource lifecycles, are different from
Google Kubernetes Engine labels that are used for the purpose of tracking
billing and usage information.

.TP 2m
\fB\-\-node\-locations\fR=\fIZONE\fR,[\fIZONE\fR,...]

Set of zones in which the node pool's nodes should be located. Changing the
locations for a node pool will result in nodes being either created or removed
from the node pool, depending on whether locations are being added or removed.

Multiple locations can be specified, separated by commas. For example:

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=sample\-cluster \e
    \-\-node\-locations=us\-central1\-a,us\-central1\-b
.RE

.TP 2m
\fB\-\-node\-taints\fR=[\fINODE_TAINT\fR,...]

Replaces all the user specified Kubernetes taints on all nodes in an existing
node pool, which can be used with tolerations for pod scheduling.

Examples:

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \e
    \-\-node\-taints=key1=val1:NoSchedule,key2=val2:PreferNoSchedule
.RE

To read more about node\-taints, see
https://cloud.google.com/kubernetes\-engine/docs/node\-taints.

.TP 2m
\fB\-\-resource\-manager\-tags\fR=[\fIKEY\fR=\fIVALUE\fR,...]

Replaces all the user specified resource manager tags on all nodes in an
existing node pool in a Standard cluster with the given comma\-separated
resource manager tags that have the GCE_FIREWALL purpose.

Examples:

.RS 2m
$ gcloud beta container node\-pools update example\-node\-pool \e
    \-\-resource\-manager\-tags=tagKeys/1234=tagValues/2345
$ gcloud beta container node\-pools update example\-node\-pool \e
    \-\-resource\-manager\-tags=my\-project/key1=value1
$ gcloud beta container node\-pools update example\-node\-pool \e
    \-\-resource\-manager\-tags=12345/key1=value1,23456/key2=value2
$ gcloud beta container node\-pools update example\-node\-pool \e
    \-\-resource\-manager\-tags=
.RE

All nodes, including nodes that are resized or re\-created, will have the
specified tags on the corresponding Instance object in the Compute Engine API.
You can reference these tags in network firewall policy rules. For instructions,
see https://cloud.google.com/firewall/docs/use\-tags\-for\-firewalls.

.TP 2m
\fB\-\-storage\-pools\fR=\fISTORAGE_POOL\fR,[...]

A list of storage pools where the node pool's boot disks will be provisioned.
Replaces all the current storage pools of an existing node pool with the
specified storage pools.

STORAGE_POOL must be in the format
projects/project/zones/zone/storagePools/storagePool
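
For example, assuming a hypothetical storage pool my\-storage\-pool in zone
us\-central1\-a of project my\-project:

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \e
    \-\-storage\-pools=projects/my\-project/zones/us\-central1\-a/\e
storagePools/my\-storage\-pool
.RE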

.TP 2m
\fB\-\-system\-config\-from\-file\fR=\fIPATH_TO_FILE\fR

Path of the YAML/JSON file that contains the node configuration, including Linux
kernel parameters (sysctls) and kubelet configs.

Examples:

.RS 2m
kubeletConfig:
  cpuManagerPolicy: static
  memoryManager:
    policy: Static
  topologyManager:
    policy: BestEffort
    scope: pod
linuxConfig:
  sysctl:
    net.core.somaxconn: '2048'
    net.ipv4.tcp_rmem: '4096 87380 6291456'
  hugepageConfig:
    hugepage_size2m: '1024'
    hugepage_size1g: '2'
  swapConfig:
    enabled: true
    bootDiskProfile:
      swapSizeGib: 8
  cgroupMode: 'CGROUP_MODE_V2'
.RE

List of supported kubelet configs in 'kubeletConfig'.


.TS
tab(	);
l(36)B l(90)B
l(36) l(90).
KEY	VALUE
cpuManagerPolicy	either 'static' or 'none'
cpuCFSQuota	true or false (enabled by default)
cpuCFSQuotaPeriod	interval (e.g., '100ms'. The value must be between 1ms and 1 second, inclusive.)
memoryManager	specify memory manager policy
topologyManager	specify topology manager policy and scope
podPidsLimit	integer (The value must be greater than or equal to 1024 and less than 4194304.)
containerLogMaxSize	positive number plus unit suffix (e.g., '100Mi', '0.2Gi'. The value must be between 10Mi and 500Mi, inclusive.)
containerLogMaxFiles	integer (The value must be between [2, 10].)
imageGcLowThresholdPercent	integer (The value must be between [10, 85], and lower than imageGcHighThresholdPercent.)
imageGcHighThresholdPercent	integer (The value must be between [10, 85], and greater than imageGcLowThresholdPercent.)
imageMinimumGcAge	interval (e.g., '100s', '1m'. The value must be less than '2m'.)
imageMaximumGcAge	interval (e.g., '100s', '1m'. The value must be greater than imageMinimumGcAge.)
evictionSoft	specify eviction soft thresholds
evictionSoftGracePeriod	specify eviction soft grace period
evictionMinimumReclaim	specify eviction minimum reclaim thresholds
evictionMaxPodGracePeriodSeconds	integer (Max grace period for pod termination during eviction, in seconds. The value must be between [0, 300].)
allowedUnsafeSysctls	list of sysctls (Allowlisted groups: 'kernel.shm*', 'kernel.msg*', 'kernel.sem', 'fs.mqueue.*', and 'net.*', and sysctls under the groups.)
singleProcessOomKill	true or false
maxParallelImagePulls	integer (The value must be between [2, 5].)
.TE


List of supported keys in memoryManager in 'kubeletConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
policy	either 'Static' or 'None'
.TE

List of supported keys in topologyManager in 'kubeletConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
policy	either 'none' or 'best-effort' or 'single-numa-node' or 'restricted'
scope	either 'pod' or 'container'
.TE

List of supported keys in evictionSoft in 'kubeletConfig'.


.TS
tab(	);
l(25)B l(93)B
l(25) l(93).
KEY	VALUE
memoryAvailable	quantity (e.g., '100Mi', '1Gi'. Represents the amount of memory available before soft eviction. The value must be at least 100Mi and less than 50% of the node's memory.)
nodefsAvailable	percentage (e.g., '20%'. Represents the nodefs available before soft eviction. The value must be between 10% and 50%, inclusive.)
nodefsInodesFree	percentage (e.g., '20%'. Represents the nodefs inodes free before soft eviction. The value must be between 5% and 50%, inclusive.)
imagefsAvailable	percentage (e.g., '20%'. Represents the imagefs available before soft eviction. The value must be between 15% and 50%, inclusive.)
imagefsInodesFree	percentage (e.g., '20%'. Represents the imagefs inodes free before soft eviction. The value must be between 5% and 50%, inclusive.)
pidAvailable	percentage (e.g., '20%'. Represents the pid available before soft eviction. The value must be between 10% and 50%, inclusive.)
.TE

List of supported keys in evictionSoftGracePeriod in 'kubeletConfig'.


.TS
tab(	);
l(25)B l(93)B
l(25) l(93).
KEY	VALUE
memoryAvailable	duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
nodefsAvailable	duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
nodefsInodesFree	duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
imagefsAvailable	duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
imagefsInodesFree	duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
pidAvailable	duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
.TE

List of supported keys in evictionMinimumReclaim in 'kubeletConfig'.


.TS
tab(	);
l(25)B l(93)B
l(25) l(93).
KEY	VALUE
memoryAvailable	percentage (e.g., '5%'. Represents the minimum reclaim threshold for memory available. The value must be positive and no more than 10%.)
nodefsAvailable	percentage (e.g., '5%'. Represents the minimum reclaim threshold for nodefs available. The value must be positive and no more than 10%.)
nodefsInodesFree	percentage (e.g., '5%'. Represents the minimum reclaim threshold for nodefs inodes free. The value must be positive and no more than 10%.)
imagefsAvailable	percentage (e.g., '5%'. Represents the minimum reclaim threshold for imagefs available. The value must be positive and no more than 10%.)
imagefsInodesFree	percentage (e.g., '5%'. Represents the minimum reclaim threshold for imagefs inodes free. The value must be positive and no more than 10%.)
pidAvailable	percentage (e.g., '5%'. Represents the minimum reclaim threshold for pid available. The value must be positive and no more than 10%.)
.TE


List of supported sysctls in 'linuxConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
net.core.netdev_max_backlog	Any positive integer, less than 2147483647
net.core.rmem_default	Must be between [2304, 2147483647]
net.core.rmem_max	Must be between [2304, 2147483647]
net.core.wmem_default	Must be between [4608, 2147483647]
net.core.wmem_max	Must be between [4608, 2147483647]
net.core.optmem_max	Any positive integer, less than 2147483647
net.core.somaxconn	Must be between [128, 2147483647]
net.ipv4.tcp_rmem	Any positive integer tuple
net.ipv4.tcp_wmem	Any positive integer tuple
net.ipv4.tcp_tw_reuse	Must be {0, 1, 2}
net.ipv4.tcp_mtu_probing	Must be {0, 1, 2}
net.ipv4.tcp_max_orphans	Must be between [16384, 262144]
net.ipv4.tcp_max_tw_buckets	Must be between [4096, 2147483647]
net.ipv4.tcp_syn_retries	Must be between [1, 127]
net.ipv4.tcp_ecn	Must be {0, 1, 2}
net.ipv4.tcp_congestion_control	Must be string containing only letters and numbers
net.netfilter.nf_conntrack_max	Must be between [65536, 4194304]
net.netfilter.nf_conntrack_buckets	Must be between [65536, 524288]. Recommend setting: nf_conntrack_max = nf_conntrack_buckets * 4
net.netfilter.nf_conntrack_tcp_timeout_close_wait	Must be between [60, 3600]
net.netfilter.nf_conntrack_tcp_timeout_time_wait	Must be between [1, 600]
net.netfilter.nf_conntrack_tcp_timeout_established	Must be between [600, 86400]
net.netfilter.nf_conntrack_acct	Must be {0, 1}
kernel.shmmni	Must be between [4096, 32768]
kernel.shmmax	Must be between [0, 18446744073692774399]
kernel.shmall	Must be between [0, 18446744073692774399]
kernel.perf_event_paranoid	Must be {-1, 0, 1, 2, 3}
kernel.sched_rt_runtime_us	Must be [-1, 1000000]
kernel.softlockup_panic	Must be {0, 1}
kernel.yama.ptrace_scope	Must be {0, 1, 2, 3}
kernel.kptr_restrict	Must be {0, 1, 2}
kernel.dmesg_restrict	Must be {0, 1}
kernel.sysrq	Must be [0, 511]
fs.aio-max-nr	Must be between [65536, 4194304]
fs.file-max	Must be between [104857, 67108864]
fs.inotify.max_user_instances	Must be between [8192, 1048576]
fs.inotify.max_user_watches	Must be between [8192, 1048576]
fs.nr_open	Must be between [1048576, 2147483584]
vm.dirty_background_ratio	Must be between [1, 100]
vm.dirty_background_bytes	Must be between [0, 68719476736]
vm.dirty_expire_centisecs	Must be between [0, 6000]
vm.dirty_ratio	Must be between [1, 100]
vm.dirty_bytes	Must be between [0, 68719476736]
vm.dirty_writeback_centisecs	Must be between [0, 1000]
vm.max_map_count	Must be between [65536, 2147483647]
vm.overcommit_memory	Must be one of {0, 1, 2}
vm.overcommit_ratio	Must be between [0, 100]
vm.vfs_cache_pressure	Must be between [0, 100]
vm.swappiness	Must be between [0, 200]
vm.watermark_scale_factor	Must be between [10, 3000]
vm.min_free_kbytes	Must be between [67584, 1048576]
.TE

List of supported hugepage sizes in 'hugepageConfig'.


.TS
tab(	);
l(16)B l(45)B
l(16) l(45).
KEY	VALUE
hugepage_size2m	Number of 2M huge pages, any positive integer
hugepage_size1g	Number of 1G huge pages, any positive integer
.TE

List of supported keys in 'swapConfig' under 'linuxConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
enabled	boolean
encryptionConfig	specify encryption settings for the swap space
bootDiskProfile	specify swap on the node's boot disk
ephemeralLocalSsdProfile	specify swap on the local SSD shared with pod ephemeral storage
dedicatedLocalSsdProfile	specify swap on a new, separate local NVMe SSD exclusively for swap
.TE

List of supported keys in 'encryptionConfig' under 'swapConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
disabled	boolean
.TE

List of supported keys in 'bootDiskProfile' under 'swapConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
swapSizeGib	integer
swapSizePercent	integer
.TE

List of supported keys in 'ephemeralLocalSsdProfile' under 'swapConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
swapSizeGib	integer
swapSizePercent	integer
.TE

List of supported keys in 'dedicatedLocalSsdProfile' under 'swapConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
diskCount	integer
.TE


The total allocated hugepage size should not exceed 60% of the available
memory on the node. For example, c2d\-highcpu\-4 has 8GB of memory, so the
total memory allocated to 2M and 1G hugepages should not exceed 8GB * 0.6 =
4.8GB.

1G hugepages are only available in the following machine families: c3, m2,
c2d, c3d, h3, m3, a2, a3, g2.

Supported values for 'cgroupMode' under 'linuxConfig'.

.RS 2m
.IP "\(bu" 2m
\f5CGROUP_MODE_V1\fR: Use cgroupv1 on the node pool.
.IP "\(bu" 2m
\f5CGROUP_MODE_V2\fR: Use cgroupv2 on the node pool.
.IP "\(bu" 2m
\f5CGROUP_MODE_UNSPECIFIED\fR: Use the default GKE cgroup configuration.
.RE
.sp

Supported values for 'transparentHugepageEnabled' under 'linuxConfig', which
controls transparent hugepage support for anonymous memory.

.RS 2m
.IP "\(bu" 2m
\f5TRANSPARENT_HUGEPAGE_ENABLED_ALWAYS\fR: Transparent hugepage is enabled
system wide.
.IP "\(bu" 2m
\f5TRANSPARENT_HUGEPAGE_ENABLED_MADVISE\fR: Transparent hugepage is enabled
inside MADV_HUGEPAGE regions. This is the default kernel configuration.
.IP "\(bu" 2m
\f5TRANSPARENT_HUGEPAGE_ENABLED_NEVER\fR: Transparent hugepage is disabled.
.IP "\(bu" 2m
\f5TRANSPARENT_HUGEPAGE_ENABLED_UNSPECIFIED\fR: Default value. GKE will not
modify the kernel configuration.
.RE
.sp

Supported values for 'transparentHugepageDefrag' under 'linuxConfig', which
defines the transparent hugepage defrag configuration on the node.

.RS 2m
.IP "\(bu" 2m
\f5TRANSPARENT_HUGEPAGE_DEFRAG_ALWAYS\fR: It means that an application
requesting THP will stall on allocation failure and directly reclaim pages and
compact memory in an effort to allocate a THP immediately.
.IP "\(bu" 2m
\f5TRANSPARENT_HUGEPAGE_DEFRAG_DEFER\fR: It means that an application will wake
kswapd in the background to reclaim pages and wake kcompactd to compact memory
so that THP is available in the near future. It is the responsibility of
khugepaged to then install the THP pages later.
.IP "\(bu" 2m
\f5TRANSPARENT_HUGEPAGE_DEFRAG_DEFER_WITH_MADVISE\fR: It means that an
application will enter direct reclaim and compaction like always, but only for
regions that have used madvise(MADV_HUGEPAGE); all other regions will wake
kswapd in the background to reclaim pages and wake kcompactd to compact memory
so that THP is available in the near future.
.IP "\(bu" 2m
\f5TRANSPARENT_HUGEPAGE_DEFRAG_MADVISE\fR: It means that an application will
enter direct reclaim and compaction like always, but only for regions that have
used madvise(MADV_HUGEPAGE); all other regions will wake kswapd in the
background to reclaim pages and wake kcompactd to compact memory so that THP is
available in the near future.
.IP "\(bu" 2m
\f5TRANSPARENT_HUGEPAGE_DEFRAG_NEVER\fR: It means that an application will never
enter direct reclaim or compaction.
.IP "\(bu" 2m
\f5TRANSPARENT_HUGEPAGE_DEFRAG_UNSPECIFIED\fR: Default value. GKE will not
modify the kernel configuration.
.RE
.sp

Note: updating the system configuration of an existing node pool requires
recreation of the nodes, which might cause a disruption.

Use a full or relative path to a local file containing the value of
system_config.
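
For example, assuming the configuration above is saved in a local file named
node\-config.yaml:

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \e
    \-\-system\-config\-from\-file=node\-config.yaml
.RE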

.TP 2m
\fB\-\-tags\fR=[\fITAG\fR,...]

Replaces all the user specified Compute Engine tags on all nodes in an existing
node pool with the given tags (comma separated).

Examples:

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-tags=tag1,tag2
.RE

New nodes, including ones created by resize or recreate, will have these tags on
the Compute Engine API instance object and these tags can be used in firewall
rules. See
https://cloud.google.com/sdk/gcloud/reference/compute/firewall\-rules/create for
examples.

.TP 2m
\fB\-\-windows\-os\-version\fR=\fIWINDOWS_OS_VERSION\fR

Specifies the Windows Server image to use when creating a Windows node pool.
Valid variants are "ltsc2019" and "ltsc2022", which select the LTSC2019 or
LTSC2022 server image respectively. If the node pool doesn't specify a Windows
Server image version, ltsc2019 is used by default.
\fIWINDOWS_OS_VERSION\fR must be one of: \fBltsc2019\fR, \fBltsc2022\fR.
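
For example, to use the LTSC2022 image (names are illustrative):

.RS 2m
$ gcloud beta container node\-pools update windows\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-windows\-os\-version=ltsc2022
.RE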

.TP 2m
\fB\-\-workload\-metadata\fR=\fIWORKLOAD_METADATA\fR

Type of metadata server available to pods running in the node pool.
\fIWORKLOAD_METADATA\fR must be one of:

.RS 2m
.TP 2m
\fBEXPOSED\fR
[DEPRECATED] Pods running in this node pool have access to the node's underlying
Compute Engine Metadata Server.
.TP 2m
\fBGCE_METADATA\fR
Pods running in this node pool have access to the node's underlying Compute
Engine Metadata Server.
.TP 2m
\fBGKE_METADATA\fR
Run the Kubernetes Engine Metadata Server on this node. The Kubernetes Engine
Metadata Server exposes a metadata API to workloads that is compatible with the
V1 Compute Metadata APIs exposed by the Compute Engine and App Engine Metadata
Servers. This feature can only be enabled if Workload Identity is enabled at the
cluster level.
.TP 2m
\fBGKE_METADATA_SERVER\fR
[DEPRECATED] Run the Kubernetes Engine Metadata Server on this node. The
Kubernetes Engine Metadata Server exposes a metadata API to workloads that is
compatible with the V1 Compute Metadata APIs exposed by the Compute Engine and
App Engine Metadata Servers. This feature can only be enabled if Workload
Identity is enabled at the cluster level.
.TP 2m
\fBSECURE\fR
[DEPRECATED] Prevents pods not in hostNetwork from accessing certain VM
metadata, specifically kube\-env, which contains Kubelet credentials, and the
instance identity token. This is a temporary security solution available while
the bootstrapping process for cluster nodes is being redesigned with significant
security improvements. This feature is scheduled to be deprecated in the future
and later removed.
.RE
.sp
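For example, to run the GKE Metadata Server on the node pool (requires
Workload Identity at the cluster level; names are illustrative):

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-workload\-metadata=GKE_METADATA
.RE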


.TP 2m

Upgrade settings


.RS 2m
.TP 2m
\fB\-\-autoscaled\-rollout\-policy\fR=[\fIwait\-for\-drain\-duration\fR=\fIWAIT\-FOR\-DRAIN\-DURATION\fR]

Autoscaled rollout policy options for blue\-green upgrade.

.RS 2m
.TP 2m
\fBwait\-for\-drain\-duration\fR
(Optional) Time in seconds to wait after cordoning the blue pool before draining
the nodes.

Examples:

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster  \-\-enable\-blue\-green\-upgrade  \e
    \-\-autoscaled\-rollout\-policy=""
.RE

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster  \-\-enable\-blue\-green\-upgrade  \e
    \-\-autoscaled\-rollout\-policy=wait\-for\-drain\-duration=7200s
.RE

.RE
.sp
.TP 2m
\fB\-\-enable\-blue\-green\-upgrade\fR

Changes node pool upgrade strategy to blue\-green upgrade.

.TP 2m
\fB\-\-enable\-surge\-upgrade\fR

Changes node pool upgrade strategy to surge upgrade.

.TP 2m
\fB\-\-max\-surge\-upgrade\fR=\fIMAX_SURGE_UPGRADE\fR

Number of extra (surge) nodes to be created on each upgrade of the node pool.

Specifies the number of extra (surge) nodes to be created during this node
pool's upgrades. For example, running the following command will result in
creating an extra node each time the node pool is upgraded:

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-max\-surge\-upgrade=1   \e
    \-\-max\-unavailable\-upgrade=0
.RE

Must be used in conjunction with '\-\-max\-unavailable\-upgrade'.

.TP 2m
\fB\-\-max\-unavailable\-upgrade\fR=\fIMAX_UNAVAILABLE_UPGRADE\fR

Number of nodes that can be unavailable at the same time on each upgrade of the
node pool.

Specifies the number of nodes that can be unavailable at the same time during
this node pool's upgrades. For example, assume the node pool has 5 nodes;
running the following command will result in 3 nodes being upgraded in
parallel (1 surge + 2 unavailable), while always keeping at least 3 (5 \- 2)
nodes available each time the node pool is upgraded:

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-max\-surge\-upgrade=1   \e
    \-\-max\-unavailable\-upgrade=2
.RE

Must be used in conjunction with '\-\-max\-surge\-upgrade'.

.TP 2m
\fB\-\-node\-pool\-soak\-duration\fR=\fINODE_POOL_SOAK_DURATION\fR

Time in seconds to be spent waiting during blue\-green upgrade before deleting
the blue pool and completing the upgrade.

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster  \-\-node\-pool\-soak\-duration=600s
.RE

.TP 2m
\fB\-\-standard\-rollout\-policy\fR=[\fIbatch\-node\-count\fR=\fIBATCH_NODE_COUNT\fR,\fIbatch\-percent\fR=\fIBATCH_NODE_PERCENTAGE\fR,\fIbatch\-soak\-duration\fR=\fIBATCH_SOAK_DURATION\fR,...]

Standard rollout policy options for blue\-green upgrade.

Batch sizes are specified by either batch\-node\-count or batch\-percent. The
duration between batches is specified by batch\-soak\-duration.

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster  \e
    \-\-standard\-rollout\-policy=batch\-node\-count=3,\e
batch\-soak\-duration=60s
.RE

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster  \e
    \-\-standard\-rollout\-policy=batch\-percent=0.3,\e
batch\-soak\-duration=60s
.RE

.RE
.sp
.TP 2m

Node config


.RS 2m
.TP 2m
\fB\-\-boot\-disk\-provisioned\-iops\fR=\fIBOOT_DISK_PROVISIONED_IOPS\fR

Configure the Provisioned IOPS for the node pool boot disks. Only valid for
hyperdisk\-balanced boot disks.

.TP 2m
\fB\-\-boot\-disk\-provisioned\-throughput\fR=\fIBOOT_DISK_PROVISIONED_THROUGHPUT\fR

Configure the Provisioned Throughput for the node pool boot disks. Only valid
for hyperdisk\-balanced boot disks.
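
For example (values are illustrative; both flags apply only to
hyperdisk\-balanced boot disks):

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \e
    \-\-boot\-disk\-provisioned\-iops=3000 \e
    \-\-boot\-disk\-provisioned\-throughput=140
.RE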

.TP 2m
\fB\-\-disk\-size\fR=\fIDISK_SIZE\fR

Size for node VM boot disks in GB. Defaults to 100GB.

.TP 2m
\fB\-\-disk\-type\fR=\fIDISK_TYPE\fR

Type of the node VM boot disk. For version 1.24 and later, defaults to
pd\-balanced. For versions earlier than 1.24, defaults to pd\-standard.
\fIDISK_TYPE\fR must be one of: \fBpd\-standard\fR, \fBpd\-ssd\fR,
\fBpd\-balanced\fR, \fBhyperdisk\-balanced\fR, \fBhyperdisk\-extreme\fR,
\fBhyperdisk\-throughput\fR.
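
For example, to use 200GB pd\-ssd boot disks (values are illustrative):

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-disk\-type=pd\-ssd \-\-disk\-size=200GB
.RE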

.TP 2m
\fB\-\-machine\-type\fR=\fIMACHINE_TYPE\fR

The type of machine to use for nodes. Defaults to e2\-medium. The list of
predefined machine types is available using the following command:

.RS 2m
$ gcloud compute machine\-types list
.RE

You can also specify custom machine types by providing a string with the format
"custom\-CPUS\-RAM" where "CPUS" is the number of virtual CPUs and "RAM" is the
amount of RAM in MiB.

For example, to create a node pool using custom machines with 2 vCPUs and 12 GB
of RAM:

.RS 2m
$ gcloud beta container node\-pools update high\-mem\-pool \e
    \-\-machine\-type=custom\-2\-12288
.RE

.RE
.sp
.TP 2m

Cluster autoscaling


.RS 2m
.TP 2m
\fB\-\-enable\-autoprovisioning\fR

Enables Cluster Autoscaler to treat the node pool as if it were autoprovisioned.

Cluster Autoscaler will be able to delete the node pool if it's unneeded.
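
For example (names are illustrative):

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-enable\-autoprovisioning
.RE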

.TP 2m
\fB\-\-enable\-autoscaling\fR

Enables autoscaling for a node pool.

Enables autoscaling in the node pool specified by \-\-node\-pool or the default
node pool if \-\-node\-pool is not provided. If not already set, \-\-max\-nodes
or \-\-total\-max\-nodes must also be set.
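
For example, to enable autoscaling with per\-zone limits (values are
illustrative):

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-enable\-autoscaling \e
    \-\-min\-nodes=1 \-\-max\-nodes=5
.RE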

.TP 2m
\fB\-\-location\-policy\fR=\fILOCATION_POLICY\fR

Location policy specifies the algorithm used when scaling up the node pool.

.RS 2m
.IP "\(em" 2m
\f5BALANCED\fR \- Is a best effort policy that aims to balance the sizes of
available zones.
.IP "\(em" 2m
\f5ANY\fR \- Instructs the cluster autoscaler to prioritize utilization of
unused reservations, and reduces preemption risk for Spot VMs.
.RE
.sp

\fILOCATION_POLICY\fR must be one of: \fBBALANCED\fR, \fBANY\fR.
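
For example, to prioritize the use of unused reservations when scaling up
(names are illustrative):

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-location\-policy=ANY
.RE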

.TP 2m
\fB\-\-max\-nodes\fR=\fIMAX_NODES\fR

Maximum number of nodes per zone in the node pool.

Maximum number of nodes per zone to which the node pool specified by
\-\-node\-pool (or default node pool if unspecified) can scale. Ignored unless
\-\-enable\-autoscaling is also specified.

.TP 2m
\fB\-\-min\-nodes\fR=\fIMIN_NODES\fR

Minimum number of nodes per zone in the node pool.

Minimum number of nodes per zone to which the node pool specified by
\-\-node\-pool (or default node pool if unspecified) can scale. Ignored unless
\-\-enable\-autoscaling is also specified.

.TP 2m
\fB\-\-total\-max\-nodes\fR=\fITOTAL_MAX_NODES\fR

Maximum number of all nodes in the node pool.

Maximum number of all nodes to which the node pool specified by \-\-node\-pool
(or default node pool if unspecified) can scale. Ignored unless
\-\-enable\-autoscaling is also specified.

.TP 2m
\fB\-\-total\-min\-nodes\fR=\fITOTAL_MIN_NODES\fR

Minimum number of all nodes in the node pool.

Minimum number of all nodes to which the node pool specified by \-\-node\-pool
(or default node pool if unspecified) can scale. Ignored unless
\-\-enable\-autoscaling is also specified.
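
For example, to set total (all\-zones) autoscaling limits (values are
illustrative):

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-enable\-autoscaling \e
    \-\-total\-min\-nodes=2 \-\-total\-max\-nodes=10
.RE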

.RE
.sp
.TP 2m

Node management


.RS 2m
.TP 2m
\fB\-\-enable\-autorepair\fR

Enables the node autorepair feature for a node pool.

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-enable\-autorepair
.RE

See https://cloud.google.com/kubernetes\-engine/docs/how\-to/node\-auto\-repair
for more info.

.TP 2m
\fB\-\-enable\-autoupgrade\fR

Sets the autoupgrade feature for a node pool.

.RS 2m
$ gcloud beta container node\-pools update node\-pool\-1 \e
    \-\-cluster=example\-cluster \-\-enable\-autoupgrade
.RE

See https://cloud.google.com/kubernetes\-engine/docs/node\-auto\-upgrades for
more info.


.RE
.RE
.RE
.sp

.SH "OPTIONAL FLAGS"

.RS 2m
.TP 2m
\fB\-\-async\fR

Return immediately, without waiting for the operation in progress to complete.

.TP 2m
\fB\-\-cluster\fR=\fICLUSTER\fR

The name of the cluster. Overrides the default \fBcontainer/cluster\fR property
value for this command invocation.

.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-location\fR=\fILOCATION\fR

Compute zone or region (e.g. us\-central1\-a or us\-central1) for the cluster.
Overrides the default compute/region or compute/zone value for this command
invocation. Prefer using this flag over the \-\-region or \-\-zone flags.

.TP 2m
\fB\-\-region\fR=\fIREGION\fR

Compute region (e.g. us\-central1) for a regional cluster. Overrides the default
compute/region property value for this command invocation.

.TP 2m
\fB\-\-zone\fR=\fIZONE\fR, \fB\-z\fR \fIZONE\fR

Compute zone (e.g. us\-central1\-a) for a zonal cluster. Overrides the default
compute/zone property value for this command invocation.


.RE
.RE
.sp

.SH "GCLOUD WIDE FLAGS"

These flags are available to all commands: \-\-access\-token\-file, \-\-account,
\-\-billing\-project, \-\-configuration, \-\-flags\-file, \-\-flatten,
\-\-format, \-\-help, \-\-impersonate\-service\-account, \-\-log\-http,
\-\-project, \-\-quiet, \-\-trace\-token, \-\-user\-output\-enabled,
\-\-verbosity.

Run \fB$ gcloud help\fR for details.



.SH "NOTES"

This command is currently in beta and might change without notice. These
variants are also available:

.RS 2m
$ gcloud container node\-pools update
.RE

.RS 2m
$ gcloud alpha container node\-pools update
.RE