File: //snap/google-cloud-cli/current/help/man/man1/gcloud_alpha_container_clusters_create.1
.TH "GCLOUD_ALPHA_CONTAINER_CLUSTERS_CREATE" 1



.SH "NAME"
.HP
gcloud alpha container clusters create \- create a cluster for running containers



.SH "SYNOPSIS"
.HP
\f5gcloud alpha container clusters create\fR \fINAME\fR [\fB\-\-accelerator\fR=[\fItype\fR=\fITYPE\fR,[\fIcount\fR=\fICOUNT\fR,\fIgpu\-driver\-version\fR=\fIGPU_DRIVER_VERSION\fR,\fIgpu\-partition\-size\fR=\fIGPU_PARTITION_SIZE\fR,\fIgpu\-sharing\-strategy\fR=\fIGPU_SHARING_STRATEGY\fR,\fImax\-shared\-clients\-per\-gpu\fR=\fIMAX_SHARED_CLIENTS_PER_GPU\fR],...]] [\fB\-\-addons\fR=[\fIADDON\fR[=\fIENABLED\fR|\fIDISABLED\fR],...]] [\fB\-\-allow\-route\-overlap\fR] [\fB\-\-alpha\-cluster\-feature\-gates\fR=[\fIFEATURE\fR=\fItrue\fR|\fIfalse\fR,...]] [\fB\-\-anonymous\-authentication\-config\fR=\fIANONYMOUS_AUTHENTICATION_CONFIG\fR] [\fB\-\-async\fR] [\fB\-\-auto\-monitoring\-scope\fR=\fIAUTO_MONITORING_SCOPE\fR] [\fB\-\-autopilot\-workload\-policies\fR=\fIWORKLOAD_POLICIES\fR] [\fB\-\-autoprovisioning\-enable\-insecure\-kubelet\-readonly\-port\fR] [\fB\-\-autoprovisioning\-network\-tags\fR=\fITAGS\fR,[\fITAGS\fR,...]] [\fB\-\-autoprovisioning\-resource\-manager\-tags\fR=[\fIKEY\fR=\fIVALUE\fR,...]] [\fB\-\-autoscaling\-profile\fR=\fIAUTOSCALING_PROFILE\fR] [\fB\-\-boot\-disk\-kms\-key\fR=\fIBOOT_DISK_KMS_KEY\fR] [\fB\-\-cloud\-run\-config\fR=[\fIload\-balancer\-type\fR=\fIEXTERNAL\fR,...]] [\fB\-\-cluster\-ipv4\-cidr\fR=\fICLUSTER_IPV4_CIDR\fR] [\fB\-\-cluster\-secondary\-range\-name\fR=\fINAME\fR] [\fB\-\-cluster\-version\fR=\fICLUSTER_VERSION\fR] [\fB\-\-confidential\-node\-type\fR=\fICONFIDENTIAL_NODE_TYPE\fR] [\fB\-\-containerd\-config\-from\-file\fR=\fIPATH_TO_FILE\fR] [\fB\-\-create\-subnetwork\fR=[\fIKEY\fR=\fIVALUE\fR,...]] [\fB\-\-data\-cache\-count\fR=\fIDATA_CACHE_COUNT\fR] [\fB\-\-database\-encryption\-key\fR=\fIDATABASE_ENCRYPTION_KEY\fR] [\fB\-\-default\-max\-pods\-per\-node\fR=\fIDEFAULT_MAX_PODS_PER_NODE\fR] [\fB\-\-disable\-default\-snat\fR] [\fB\-\-disable\-l4\-lb\-firewall\-reconciliation\fR] [\fB\-\-disable\-pod\-cidr\-overprovision\fR] [\fB\-\-disk\-size\fR=\fIDISK_SIZE\fR] [\fB\-\-disk\-type\fR=\fIDISK_TYPE\fR] 
[\fB\-\-enable\-authorized\-networks\-on\-private\-endpoint\fR] [\fB\-\-enable\-auto\-ipam\fR] [\fB\-\-enable\-autorepair\fR] [\fB\-\-no\-enable\-autoupgrade\fR] [\fB\-\-enable\-cilium\-clusterwide\-network\-policy\fR] [\fB\-\-enable\-cloud\-logging\fR] [\fB\-\-enable\-cloud\-monitoring\fR] [\fB\-\-enable\-cloud\-run\-alpha\fR] [\fB\-\-enable\-confidential\-nodes\fR] [\fB\-\-enable\-confidential\-storage\fR] [\fB\-\-enable\-cost\-allocation\fR] [\fB\-\-enable\-dataplane\-v2\fR] [\fB\-\-enable\-default\-compute\-class\fR] [\fB\-\-enable\-dns\-access\fR] [\fB\-\-enable\-fleet\fR] [\fB\-\-enable\-fqdn\-network\-policy\fR] [\fB\-\-enable\-gke\-oidc\fR] [\fB\-\-enable\-google\-cloud\-access\fR] [\fB\-\-enable\-gvnic\fR] [\fB\-\-enable\-identity\-service\fR] [\fB\-\-enable\-image\-streaming\fR] [\fB\-\-enable\-insecure\-kubelet\-readonly\-port\fR] [\fB\-\-enable\-intra\-node\-visibility\fR] [\fB\-\-enable\-ip\-access\fR] [\fB\-\-enable\-ip\-alias\fR] [\fB\-\-enable\-k8s\-certs\-via\-dns\fR] [\fB\-\-enable\-k8s\-tokens\-via\-dns\fR] [\fB\-\-enable\-kubernetes\-alpha\fR] [\fB\-\-enable\-kubernetes\-unstable\-apis\fR=\fIAPI\fR,[\fIAPI\fR,...]] [\fB\-\-enable\-l4\-ilb\-subsetting\fR] [\fB\-\-enable\-legacy\-authorization\fR] [\fB\-\-enable\-legacy\-lustre\-port\fR] [\fB\-\-enable\-logging\-monitoring\-system\-only\fR] [\fB\-\-enable\-managed\-prometheus\fR] [\fB\-\-enable\-master\-global\-access\fR] [\fB\-\-enable\-multi\-networking\fR] [\fB\-\-enable\-nested\-virtualization\fR] [\fB\-\-enable\-network\-policy\fR] [\fB\-\-enable\-pod\-security\-policy\fR] [\fB\-\-enable\-ray\-cluster\-logging\fR] [\fB\-\-enable\-ray\-cluster\-monitoring\fR] [\fB\-\-enable\-service\-externalips\fR] [\fB\-\-enable\-shielded\-nodes\fR] [\fB\-\-enable\-stackdriver\-kubernetes\fR] [\fB\-\-enable\-vertical\-pod\-autoscaling\fR] [\fB\-\-fleet\-project\fR=\fIPROJECT_ID_OR_NUMBER\fR] [\fB\-\-gateway\-api\fR=\fIGATEWAY_API\fR] [\fB\-\-hpa\-profile\fR=\fIHPA_PROFILE\fR] 
[\fB\-\-identity\-provider\fR=\fIIDENTITY_PROVIDER\fR] [\fB\-\-image\-type\fR=\fIIMAGE_TYPE\fR] [\fB\-\-in\-transit\-encryption\fR=\fIIN_TRANSIT_ENCRYPTION\fR] [\fB\-\-ipv6\-access\-type\fR=\fIIPV6_ACCESS_TYPE\fR] [\fB\-\-issue\-client\-certificate\fR] [\fB\-\-istio\-config\fR=[\fIauth\fR=\fIMTLS_PERMISSIVE\fR,...]] [\fB\-\-labels\fR=[\fIKEY\fR=\fIVALUE\fR,...]] [\fB\-\-linux\-sysctls\fR=\fIKEY\fR=\fIVALUE\fR,[\fIKEY\fR=\fIVALUE\fR,...]] [\fB\-\-local\-ssd\-encryption\-mode\fR=\fILOCAL_SSD_ENCRYPTION_MODE\fR] [\fB\-\-logging\fR=[\fICOMPONENT\fR,...]] [\fB\-\-logging\-variant\fR=\fILOGGING_VARIANT\fR] [\fB\-\-machine\-type\fR=\fIMACHINE_TYPE\fR,\ \fB\-m\fR\ \fIMACHINE_TYPE\fR] [\fB\-\-max\-nodes\-per\-pool\fR=\fIMAX_NODES_PER_POOL\fR] [\fB\-\-max\-pods\-per\-node\fR=\fIMAX_PODS_PER_NODE\fR] [\fB\-\-max\-surge\-upgrade\fR=\fIMAX_SURGE_UPGRADE\fR;\ default=1] [\fB\-\-max\-unavailable\-upgrade\fR=\fIMAX_UNAVAILABLE_UPGRADE\fR] [\fB\-\-membership\-type\fR=\fIMEMBERSHIP_TYPE\fR] [\fB\-\-metadata\fR=\fIKEY\fR=\fIVALUE\fR,[\fIKEY\fR=\fIVALUE\fR,...]] [\fB\-\-metadata\-from\-file\fR=\fIKEY\fR=\fILOCAL_FILE_PATH\fR,[...]] [\fB\-\-min\-cpu\-platform\fR=\fIPLATFORM\fR] [\fB\-\-monitoring\fR=[\fICOMPONENT\fR,...]] [\fB\-\-network\fR=\fINETWORK\fR] [\fB\-\-network\-performance\-configs\fR=[\fIPROPERTY1\fR=\fIVALUE1\fR,...]] [\fB\-\-node\-labels\fR=[\fINODE_LABEL\fR,...]] [\fB\-\-node\-pool\-name\fR=\fINODE_POOL_NAME\fR] [\fB\-\-node\-taints\fR=[\fINODE_TAINT\fR,...]] [\fB\-\-node\-version\fR=\fINODE_VERSION\fR] [\fB\-\-notification\-config\fR=[\fIpubsub\fR=\fIENABLED\fR|\fIDISABLED\fR,\fIpubsub\-topic\fR=\fITOPIC\fR,...]] [\fB\-\-num\-nodes\fR=\fINUM_NODES\fR;\ default=3] [\fB\-\-patch\-update\fR=[\fIPATCH_UPDATE\fR]] [\fB\-\-performance\-monitoring\-unit\fR=\fIPERFORMANCE_MONITORING_UNIT\fR] [\fB\-\-placement\-policy\fR=\fIPLACEMENT_POLICY\fR] [\fB\-\-placement\-type\fR=\fIPLACEMENT_TYPE\fR] [\fB\-\-preemptible\fR] [\fB\-\-private\-endpoint\-subnetwork\fR=\fINAME\fR] 
[\fB\-\-private\-ipv6\-google\-access\-type\fR=\fIPRIVATE_IPV6_GOOGLE_ACCESS_TYPE\fR] [\fB\-\-release\-channel\fR=\fICHANNEL\fR] [\fB\-\-resource\-manager\-tags\fR=[\fIKEY\fR=\fIVALUE\fR,...]] [\fB\-\-security\-group\fR=\fISECURITY_GROUP\fR] [\fB\-\-security\-posture\fR=\fISECURITY_POSTURE\fR] [\fB\-\-services\-ipv4\-cidr\fR=\fICIDR\fR] [\fB\-\-services\-secondary\-range\-name\fR=\fINAME\fR] [\fB\-\-shielded\-integrity\-monitoring\fR] [\fB\-\-shielded\-secure\-boot\fR] [\fB\-\-spot\fR] [\fB\-\-stack\-type\fR=\fISTACK_TYPE\fR] [\fB\-\-storage\-pools\fR=\fISTORAGE_POOL\fR,[...]] [\fB\-\-subnetwork\fR=\fISUBNETWORK\fR] [\fB\-\-system\-config\-from\-file\fR=\fIPATH_TO_FILE\fR] [\fB\-\-tags\fR=\fITAG\fR,[\fITAG\fR,...]] [\fB\-\-threads\-per\-core\fR=\fITHREADS_PER_CORE\fR] [\fB\-\-tier\fR=\fITIER\fR] [\fB\-\-workload\-metadata\fR=\fIWORKLOAD_METADATA\fR] [\fB\-\-workload\-pool\fR=\fIWORKLOAD_POOL\fR] [\fB\-\-workload\-vulnerability\-scanning\fR=\fIWORKLOAD_VULNERABILITY_SCANNING\fR] [\fB\-\-additional\-zones\fR=\fIZONE\fR,[\fIZONE\fR,...]\ |\ \fB\-\-node\-locations\fR=\fIZONE\fR,[\fIZONE\fR,...]] [\fB\-\-aggregation\-ca\fR=\fICA_POOL_PATH\fR\ \fB\-\-cluster\-ca\fR=\fICA_POOL_PATH\fR\ \fB\-\-control\-plane\-disk\-encryption\-key\fR=\fIKEY\fR\ \fB\-\-etcd\-api\-ca\fR=\fICA_POOL_PATH\fR\ \fB\-\-etcd\-peer\-ca\fR=\fICA_POOL_PATH\fR\ \fB\-\-gkeops\-etcd\-backup\-encryption\-key\fR=\fIKEY\fR\ \fB\-\-service\-account\-signing\-keys\fR=\fIKEY_VERSION\fR,[\fIKEY_VERSION\fR,...]\ \fB\-\-service\-account\-verification\-keys\fR=\fIKEY_VERSION\fR,[\fIKEY_VERSION\fR,...]] [\fB\-\-binauthz\-policy\-bindings\fR=[\fIname\fR=\fIBINAUTHZ_POLICY\fR,\fIenforcement\-mode\fR=\fIENFORCEMENT_MODE\fR,...]\ \fB\-\-binauthz\-evaluation\-mode\fR=\fIBINAUTHZ_EVALUATION_MODE\fR\ |\ \fB\-\-enable\-binauthz\fR] [\fB\-\-boot\-disk\-provisioned\-iops\fR=\fIBOOT_DISK_PROVISIONED_IOPS\fR\ \fB\-\-boot\-disk\-provisioned\-throughput\fR=\fIBOOT_DISK_PROVISIONED_THROUGHPUT\fR] 
[\fB\-\-cluster\-dns\fR=\fICLUSTER_DNS\fR\ \fB\-\-cluster\-dns\-domain\fR=\fICLUSTER_DNS_DOMAIN\fR\ \fB\-\-cluster\-dns\-scope\fR=\fICLUSTER_DNS_SCOPE\fR\ \fB\-\-additive\-vpc\-scope\-dns\-domain\fR=\fIADDITIVE_VPC_SCOPE_DNS_DOMAIN\fR\ |\ \fB\-\-disable\-additive\-vpc\-scope\fR] [\fB\-\-dataplane\-v2\-observability\-mode\fR=\fIDATAPLANE_V2_OBSERVABILITY_MODE\fR\ |\ \fB\-\-disable\-dataplane\-v2\-flow\-observability\fR\ |\ \fB\-\-enable\-dataplane\-v2\-flow\-observability\fR] [\fB\-\-disable\-dataplane\-v2\-metrics\fR\ |\ \fB\-\-enable\-dataplane\-v2\-metrics\fR] [\fB\-\-enable\-autoprovisioning\fR\ \fB\-\-autoprovisioning\-config\-file\fR=\fIPATH_TO_FILE\fR\ |\ \fB\-\-autoprovisioning\-image\-type\fR=\fIAUTOPROVISIONING_IMAGE_TYPE\fR\ \fB\-\-autoprovisioning\-locations\fR=\fIZONE\fR,[\fIZONE\fR,...]\ \fB\-\-autoprovisioning\-min\-cpu\-platform\fR=\fIPLATFORM\fR\ \fB\-\-max\-cpu\fR=\fIMAX_CPU\fR\ \fB\-\-max\-memory\fR=\fIMAX_MEMORY\fR\ \fB\-\-min\-cpu\fR=\fIMIN_CPU\fR\ \fB\-\-min\-memory\fR=\fIMIN_MEMORY\fR\ \fB\-\-autoprovisioning\-max\-surge\-upgrade\fR=\fIAUTOPROVISIONING_MAX_SURGE_UPGRADE\fR\ \fB\-\-autoprovisioning\-max\-unavailable\-upgrade\fR=\fIAUTOPROVISIONING_MAX_UNAVAILABLE_UPGRADE\fR\ \fB\-\-autoprovisioning\-node\-pool\-soak\-duration\fR=\fIAUTOPROVISIONING_NODE_POOL_SOAK_DURATION\fR\ \fB\-\-autoprovisioning\-standard\-rollout\-policy\fR=[\fIbatch\-node\-count\fR=\fIBATCH_NODE_COUNT\fR,\fIbatch\-percent\fR=\fIBATCH_NODE_PERCENTAGE\fR,\fIbatch\-soak\-duration\fR=\fIBATCH_SOAK_DURATION\fR,...]\ \fB\-\-enable\-autoprovisioning\-blue\-green\-upgrade\fR\ |\ \fB\-\-enable\-autoprovisioning\-surge\-upgrade\fR\ \fB\-\-autoprovisioning\-scopes\fR=[\fISCOPE\fR,...]\ \fB\-\-autoprovisioning\-service\-account\fR=\fIAUTOPROVISIONING_SERVICE_ACCOUNT\fR\ \fB\-\-enable\-autoprovisioning\-autorepair\fR\ \fB\-\-enable\-autoprovisioning\-autoupgrade\fR\ [\fB\-\-max\-accelerator\fR=[\fItype\fR=\fITYPE\fR,\fIcount\fR=\fICOUNT\fR,...]\ :\ 
\fB\-\-min\-accelerator\fR=[\fItype\fR=\fITYPE\fR,\fIcount\fR=\fICOUNT\fR,...]]] [\fB\-\-enable\-autoscaling\fR\ \fB\-\-location\-policy\fR=\fILOCATION_POLICY\fR\ \fB\-\-max\-nodes\fR=\fIMAX_NODES\fR\ \fB\-\-min\-nodes\fR=\fIMIN_NODES\fR\ \fB\-\-total\-max\-nodes\fR=\fITOTAL_MAX_NODES\fR\ \fB\-\-total\-min\-nodes\fR=\fITOTAL_MIN_NODES\fR] [\fB\-\-enable\-insecure\-binding\-system\-authenticated\fR\ \fB\-\-enable\-insecure\-binding\-system\-unauthenticated\fR] [\fB\-\-enable\-master\-authorized\-networks\fR\ \fB\-\-master\-authorized\-networks\fR=\fINETWORK\fR,[\fINETWORK\fR,...]] [\fB\-\-enable\-network\-egress\-metering\fR\ \fB\-\-enable\-resource\-consumption\-metering\fR\ \fB\-\-resource\-usage\-bigquery\-dataset\fR=\fIRESOURCE_USAGE_BIGQUERY_DATASET\fR] [\fB\-\-enable\-private\-endpoint\fR\ \fB\-\-enable\-private\-nodes\fR\ \fB\-\-master\-ipv4\-cidr\fR=\fIMASTER_IPV4_CIDR\fR\ \fB\-\-private\-cluster\fR] [\fB\-\-enable\-secret\-manager\fR\ \fB\-\-enable\-secret\-manager\-rotation\fR\ \fB\-\-secret\-manager\-rotation\-interval\fR=\fISECRET_MANAGER_ROTATION_INTERVAL\fR] [\fB\-\-enable\-secret\-sync\fR\ \fB\-\-enable\-secret\-sync\-rotation\fR\ \fB\-\-secret\-sync\-rotation\-interval\fR=\fISECRET_SYNC_ROTATION_INTERVAL\fR] [\fB\-\-ephemeral\-storage\fR[=[\fIlocal\-ssd\-count\fR=\fILOCAL\-SSD\-COUNT\fR]]\ |\ \fB\-\-ephemeral\-storage\-local\-ssd\fR[=[\fIcount\fR=\fICOUNT\fR]]\ |\ \fB\-\-local\-nvme\-ssd\-block\fR[=[\fIcount\fR=\fICOUNT\fR]]\ |\ \fB\-\-local\-ssd\-count\fR=\fILOCAL_SSD_COUNT\fR\ |\ \fB\-\-local\-ssd\-volumes\fR=[[\fIcount\fR=\fICOUNT\fR],[\fItype\fR=\fITYPE\fR],[\fIformat\fR=\fIFORMAT\fR],...]] [\fB\-\-location\fR=\fILOCATION\fR\ |\ \fB\-\-region\fR=\fIREGION\fR\ |\ \fB\-\-zone\fR=\fIZONE\fR,\ \fB\-z\fR\ \fIZONE\fR] [\fB\-\-maintenance\-window\fR=\fISTART_TIME\fR\ |\ \fB\-\-maintenance\-window\-end\fR=\fITIME_STAMP\fR\ \fB\-\-maintenance\-window\-recurrence\fR=\fIRRULE\fR\ \fB\-\-maintenance\-window\-start\fR=\fITIME_STAMP\fR] 
[\fB\-\-password\fR=\fIPASSWORD\fR\ \fB\-\-enable\-basic\-auth\fR\ |\ \fB\-\-username\fR=\fIUSERNAME\fR,\ \fB\-u\fR\ \fIUSERNAME\fR] [\fB\-\-reservation\fR=\fIRESERVATION\fR\ \fB\-\-reservation\-affinity\fR=\fIRESERVATION_AFFINITY\fR] [\fB\-\-scopes\fR=[\fISCOPE\fR,...];\ default="gke\-default"\ \fB\-\-service\-account\fR=\fISERVICE_ACCOUNT\fR] [\fB\-\-security\-profile\fR=\fISECURITY_PROFILE\fR\ \fB\-\-no\-security\-profile\-runtime\-rules\fR] [\fIGCLOUD_WIDE_FLAG\ ...\fR]



.SH "DESCRIPTION"

\fB(ALPHA)\fR Create a cluster for running containers.



.SH "EXAMPLES"

To create a cluster with the default configuration, run:

.RS 2m
$ gcloud alpha container clusters create sample\-cluster
.RE



.SH "POSITIONAL ARGUMENTS"

.RS 2m
.TP 2m
\fINAME\fR

The name of the cluster to create.

The name may contain only lowercase alphanumerics and '\-', must start with a
letter and end with an alphanumeric, and must be no longer than 40 characters.


.RE
.sp

.SH "FLAGS"

.RS 2m
.TP 2m
\fB\-\-accelerator\fR=[\fItype\fR=\fITYPE\fR,[\fIcount\fR=\fICOUNT\fR,\fIgpu\-driver\-version\fR=\fIGPU_DRIVER_VERSION\fR,\fIgpu\-partition\-size\fR=\fIGPU_PARTITION_SIZE\fR,\fIgpu\-sharing\-strategy\fR=\fIGPU_SHARING_STRATEGY\fR,\fImax\-shared\-clients\-per\-gpu\fR=\fIMAX_SHARED_CLIENTS_PER_GPU\fR],...]

Attaches accelerators (e.g. GPUs) to all nodes.

.RS 2m
.TP 2m
\fBtype\fR
(Required) The specific type (e.g. nvidia\-tesla\-t4 for NVIDIA T4) of
accelerator to attach to the instances. Use \f5gcloud compute accelerator\-types
list\fR to learn about all available accelerator types.

.TP 2m
\fBcount\fR
(Optional) The number of accelerators to attach to the instances. The default
value is 1.

.TP 2m
\fBgpu\-driver\-version\fR
(Optional) The NVIDIA driver version to install. GPU_DRIVER_VERSION must be one
of:

.RS 2m
`default`: Install the default driver version for this GKE version. For GKE version 1.30.1\-gke.1156000 and later, this is the default option.
.RE

.RS 2m
`latest`: Install the latest driver version available for this GKE version.
Can only be used for nodes that use Container\-Optimized OS.
.RE

.RS 2m
`disabled`: Skip automatic driver installation. You must manually install a
driver after you create the cluster. For GKE versions earlier than 1.30.1\-gke.1156000, this is the default option.
To manually install the GPU driver, refer to https://cloud.google.com/kubernetes\-engine/docs/how\-to/gpus#installing_drivers.
.RE

.TP 2m
\fBgpu\-partition\-size\fR
(Optional) The GPU partition size used when running multi\-instance GPUs. For
information about multi\-instance GPUs, refer to:
https://cloud.google.com/kubernetes\-engine/docs/how\-to/gpus\-multi

.TP 2m
\fBgpu\-sharing\-strategy\fR
(Optional) The GPU sharing strategy (e.g. time\-sharing) to use. For information
about GPU sharing, refer to:
https://cloud.google.com/kubernetes\-engine/docs/concepts/timesharing\-gpus

.TP 2m
\fBmax\-shared\-clients\-per\-gpu\fR
(Optional) The max number of containers allowed to share each GPU on the node.
This field is used together with \f5gpu\-sharing\-strategy\fR.

.RE
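For example, to attach one NVIDIA T4 GPU to each node (an illustrative
configuration), run:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-accelerator=type=nvidia\-tesla\-t4,count=1
.RE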
.sp
.TP 2m
\fB\-\-addons\fR=[\fIADDON\fR[=\fIENABLED\fR|\fIDISABLED\fR],...]

Addons
(https://cloud.google.com/kubernetes\-engine/docs/reference/rest/v1/projects.locations.clusters#Cluster.AddonsConfig)
are additional Kubernetes cluster components. Addons specified by this flag will
be enabled. The others will be disabled. Default addons: HttpLoadBalancing,
HorizontalPodAutoscaling. The Istio addon is deprecated and removed. For more
information and migration, see
https://cloud.google.com/istio/docs/istio\-on\-gke/migrate\-to\-anthos\-service\-mesh.
ADDON must be one of: HttpLoadBalancing, HorizontalPodAutoscaling,
KubernetesDashboard, NetworkPolicy, NodeLocalDNS, ConfigConnector,
GcePersistentDiskCsiDriver, GcpFilestoreCsiDriver, BackupRestore,
GcsFuseCsiDriver, ParallelstoreCsiDriver, HighScaleCheckpointing,
LustreCsiDriver, RayOperator, Istio, CloudBuild, CloudRun.
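For example, to enable the NetworkPolicy and NodeLocalDNS addons (an
illustrative selection), run:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-addons=NetworkPolicy,NodeLocalDNS
.RE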

.TP 2m
\fB\-\-allow\-route\-overlap\fR

Allows the provided cluster CIDRs to overlap with existing routes that are less
specific and do not terminate at a VM.

When enabled, \f5\-\-cluster\-ipv4\-cidr\fR must be fully specified (e.g.
\f510.96.0.0/14\fR , but not \f5/14\fR). If \f5\-\-enable\-ip\-alias\fR is also
specified, both \f5\-\-cluster\-ipv4\-cidr\fR and \f5\-\-services\-ipv4\-cidr\fR
must be fully specified.

Must be used in conjunction with '\-\-enable\-ip\-alias' or
\'\-\-no\-enable\-ip\-alias'.
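For example, with fully specified cluster and services CIDRs (the CIDR values
below are illustrative):

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-allow\-route\-overlap \-\-enable\-ip\-alias \e
    \-\-cluster\-ipv4\-cidr=10.96.0.0/14 \e
    \-\-services\-ipv4\-cidr=10.100.0.0/20
.RE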

.TP 2m
\fB\-\-alpha\-cluster\-feature\-gates\fR=[\fIFEATURE\fR=\fItrue\fR|\fIfalse\fR,...]

Selectively enable or disable Kubernetes alpha and beta feature gates on an
alpha GKE cluster. Alpha clusters are not covered by the Kubernetes Engine SLA
and should not be used for production workloads.

.TP 2m
\fB\-\-anonymous\-authentication\-config\fR=\fIANONYMOUS_AUTHENTICATION_CONFIG\fR

Enable or restrict anonymous access to the cluster. When enabled, anonymous
users will be authenticated as system:anonymous with the group
system:unauthenticated. Limiting access restricts anonymous access to only the
health check endpoints /readyz, /livez, and /healthz.

\fIANONYMOUS_AUTHENTICATION_CONFIG\fR must be one of:

.RS 2m
.TP 2m
\fBENABLED\fR
\'ENABLED' enables anonymous calls.
.TP 2m
\fBLIMITED\fR
\'LIMITED' restricts anonymous access to the cluster. Only calls to the health
check endpoints are allowed anonymously, all other calls will be rejected.
.RE
.sp
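For example, to restrict anonymous access to the health check endpoints only,
run:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-anonymous\-authentication\-config=LIMITED
.RE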


.TP 2m
\fB\-\-async\fR

Return immediately, without waiting for the operation in progress to complete.

.TP 2m
\fB\-\-auto\-monitoring\-scope\fR=\fIAUTO_MONITORING_SCOPE\fR

Enables Auto\-Monitoring for a specific scope within the cluster. ALL: Enables
Auto\-Monitoring for all supported workloads within the cluster. NONE: Disables
Auto\-Monitoring. \fIAUTO_MONITORING_SCOPE\fR must be one of: \fBALL\fR,
\fBNONE\fR.
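For example, to enable Auto\-Monitoring for all supported workloads in the
cluster, run:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-auto\-monitoring\-scope=ALL
.RE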

.TP 2m
\fB\-\-autopilot\-workload\-policies\fR=\fIWORKLOAD_POLICIES\fR

Add Autopilot workload policies to the cluster.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-autopilot\-workload\-policies=allow\-net\-admin
.RE

The only supported workload policy is 'allow\-net\-admin'.

.TP 2m
\fB\-\-autoprovisioning\-enable\-insecure\-kubelet\-readonly\-port\fR

Enables the Kubelet's insecure read\-only port for auto\-provisioned node
pools.

If not set, the value from nodePoolDefaults.nodeConfigDefaults will be used.

To disable the read\-only port, use
\f5\-\-no\-autoprovisioning\-enable\-insecure\-kubelet\-readonly\-port\fR.

.TP 2m
\fB\-\-autoprovisioning\-network\-tags\fR=\fITAGS\fR,[\fITAGS\fR,...]

Applies the given Compute Engine tags (comma separated) on all nodes in the
auto\-provisioned node pools of the new Standard cluster or the new Autopilot
cluster.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-autoprovisioning\-network\-tags=tag1,tag2
.RE

New nodes in auto\-provisioned node pools, including ones created by resize or
recreate, will have these tags on the Compute Engine API instance object and can
be used in firewall rules. See
https://cloud.google.com/sdk/gcloud/reference/compute/firewall\-rules/create for
examples.

.TP 2m
\fB\-\-autoprovisioning\-resource\-manager\-tags\fR=[\fIKEY\fR=\fIVALUE\fR,...]

Applies the specified comma\-separated resource manager tags that have the
GCE_FIREWALL purpose to all nodes in the new Autopilot cluster or all
auto\-provisioned nodes in the new Standard cluster.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-autoprovisioning\-resource\-manager\-tags=tagKeys/\e
1234=tagValues/2345
$ gcloud alpha container clusters create example\-cluster \e
    \-\-autoprovisioning\-resource\-manager\-tags=my\-project/key1=value1
$ gcloud alpha container clusters create example\-cluster \e
    \-\-autoprovisioning\-resource\-manager\-tags=12345/key1=value1,\e
23456/key2=value2
$ gcloud alpha container clusters create example\-cluster \e
    \-\-autoprovisioning\-resource\-manager\-tags=
.RE

All nodes in an Autopilot cluster or all auto\-provisioned nodes in a Standard
cluster, including nodes that are resized or re\-created, will have the
specified tags on the corresponding Instance object in the Compute Engine API.
You can reference these tags in network firewall policy rules. For instructions,
see https://cloud.google.com/firewall/docs/use\-tags\-for\-firewalls.

.TP 2m
\fB\-\-autoscaling\-profile\fR=\fIAUTOSCALING_PROFILE\fR

Set the autoscaling behaviour; choices are 'optimize\-utilization' and
\'balanced'. The default is 'balanced'.
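For example, to select the 'optimize\-utilization' profile, run:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-autoscaling\-profile=optimize\-utilization
.RE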

.TP 2m
\fB\-\-boot\-disk\-kms\-key\fR=\fIBOOT_DISK_KMS_KEY\fR

The Customer Managed Encryption Key used to encrypt the boot disk attached to
each node in the node pool. This should be of the form
projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME].
For more information about protecting resources with Cloud KMS Keys please see:
https://cloud.google.com/compute/docs/disks/customer\-managed\-encryption
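For example (the project, location, key ring, and key names below are
illustrative):

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-boot\-disk\-kms\-key=projects/my\-project/locations/\e
us\-central1/keyRings/my\-ring/cryptoKeys/my\-key
.RE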

.TP 2m
\fB\-\-cloud\-run\-config\fR=[\fIload\-balancer\-type\fR=\fIEXTERNAL\fR,...]

Configurations for Cloud Run addon, requires \f5\-\-addons=CloudRun\fR for
create and \f5\-\-update\-addons=CloudRun=ENABLED\fR for update.

.RS 2m
.TP 2m
\fBload\-balancer\-type\fR
(Optional) The type of load balancer to use: EXTERNAL or INTERNAL.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-cloud\-run\-config=load\-balancer\-type=INTERNAL
.RE

.RE
.sp
.TP 2m
\fB\-\-cluster\-ipv4\-cidr\fR=\fICLUSTER_IPV4_CIDR\fR

The IP address range for the pods in this cluster in CIDR notation (e.g.
10.0.0.0/14). Prior to Kubernetes version 1.7.0 this must be a subset of
10.0.0.0/8; starting with version 1.7.0, it can be any RFC 1918 IP range.

If you omit this option, a range is chosen automatically. The automatically
chosen range is randomly selected from 10.0.0.0/8 and will not include IP
address ranges allocated to VMs, existing routes, or ranges allocated to other
clusters. The automatically chosen range might conflict with reserved IP
addresses, dynamic routes, or routes within VPCs that peer with this cluster.
You should specify \f5\-\-cluster\-ipv4\-cidr\fR to prevent conflicts.

This field is not applicable in a Shared VPC setup, where the IP address range
for the pods must be specified with \f5\-\-cluster\-secondary\-range\-name\fR.
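For example, to set the pod range explicitly rather than relying on an
automatically chosen range, run:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-cluster\-ipv4\-cidr=10.0.0.0/14
.RE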

.TP 2m
\fB\-\-cluster\-secondary\-range\-name\fR=\fINAME\fR

Set the secondary range to be used as the source for pod IPs. Alias ranges will
be allocated from this secondary range. NAME must be the name of an existing
secondary range in the cluster subnetwork.


Cannot be specified unless '\-\-enable\-ip\-alias' option is also specified.
Cannot be used with '\-\-create\-subnetwork' option.

.TP 2m
\fB\-\-cluster\-version\fR=\fICLUSTER_VERSION\fR

The Kubernetes version to use for the master and nodes. Defaults to
server\-specified.

The default Kubernetes version is available using the following command.

.RS 2m
$ gcloud container get\-server\-config
.RE

.TP 2m
\fB\-\-confidential\-node\-type\fR=\fICONFIDENTIAL_NODE_TYPE\fR

Enable confidential nodes for the cluster. Enabling Confidential Nodes will
create nodes using Confidential VM
https://cloud.google.com/compute/confidential\-vm/docs/about\-cvm.
\fICONFIDENTIAL_NODE_TYPE\fR must be one of: \fBsev\fR, \fBsev_snp\fR,
\fBtdx\fR.
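For example, to create nodes on AMD SEV Confidential VMs, run:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-confidential\-node\-type=sev
.RE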

.TP 2m
\fB\-\-containerd\-config\-from\-file\fR=\fIPATH_TO_FILE\fR

Path of the YAML file that contains containerd configuration entries like
configuring access to private image registries.

For detailed information on the configuration usage, please refer to
https://cloud.google.com/kubernetes\-engine/docs/how\-to/customize\-containerd\-configuration.

Note: Updating the containerd configuration of an existing cluster or node pool
requires recreation of the existing nodes, which might cause disruptions in
running workloads.

Use a full or relative path to a local file containing the value of
containerd_config.

.TP 2m
\fB\-\-create\-subnetwork\fR=[\fIKEY\fR=\fIVALUE\fR,...]

Create a new subnetwork for the cluster. The name and range of the subnetwork
can be customized via optional 'name' and 'range' key\-value pairs.

\'name' specifies the name of the subnetwork to be created.

\'range' specifies the IP range for the new subnetwork. This can either be a
netmask size (e.g. '/20') or a CIDR range (e.g. '10.0.0.0/20'). If a netmask
size is specified, the IP is automatically taken from the free space in the
cluster's network.

Examples:

Create a new subnetwork with a default name and size.

.RS 2m
$ gcloud alpha container clusters create \-\-create\-subnetwork ""
.RE

Create a new subnetwork named "my\-subnet" with netmask of size 21.

.RS 2m
$ gcloud alpha container clusters create \e
    \-\-create\-subnetwork name=my\-subnet,range=/21
.RE

Create a new subnetwork with a default name with the primary range of
10.100.0.0/16.

.RS 2m
$ gcloud alpha container clusters create \e
    \-\-create\-subnetwork range=10.100.0.0/16
.RE

Create a new subnetwork with the name "my\-subnet" with a default range.

.RS 2m
$ gcloud alpha container clusters create \e
    \-\-create\-subnetwork name=my\-subnet
.RE


Cannot be specified unless '\-\-enable\-ip\-alias' option is also specified.
Cannot be used in conjunction with '\-\-subnetwork' option.

.TP 2m
\fB\-\-data\-cache\-count\fR=\fIDATA_CACHE_COUNT\fR

Specifies the number of local SSDs to be utilized for GKE Data Cache in the
cluster.

.TP 2m
\fB\-\-database\-encryption\-key\fR=\fIDATABASE_ENCRYPTION_KEY\fR

Enable Database Encryption.

Enable database encryption that will be used to encrypt Kubernetes Secrets at
the application layer. The key provided should be the resource ID in the format
of
\f5projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]\fR.
For more information, see
https://cloud.google.com/kubernetes\-engine/docs/how\-to/encrypting\-secrets.
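For example (the key resource ID below is illustrative):

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-database\-encryption\-key=projects/my\-project/locations/\e
us\-central1/keyRings/my\-ring/cryptoKeys/my\-key
.RE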

.TP 2m
\fB\-\-default\-max\-pods\-per\-node\fR=\fIDEFAULT_MAX_PODS_PER_NODE\fR

The default max number of pods per node for node pools in the cluster.

This flag sets the default max\-pods\-per\-node for node pools in the cluster.
If \-\-max\-pods\-per\-node is not specified explicitly for a node pool, this
flag value will be used.

Must be used in conjunction with '\-\-enable\-ip\-alias'.
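For example, to default node pools to at most 64 pods per node (an illustrative
value), run:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-enable\-ip\-alias \-\-default\-max\-pods\-per\-node=64
.RE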

.TP 2m
\fB\-\-disable\-default\-snat\fR

Disable default source NAT rules applied in cluster nodes.

By default, cluster nodes perform source network address translation (SNAT) for
packets sent from Pod IP address sources to destination IP addresses that are
not in the non\-masquerade CIDRs list. For more details about SNAT and IP
masquerading, see:
https://cloud.google.com/kubernetes\-engine/docs/how\-to/ip\-masquerade\-agent#how_ipmasq_works
SNAT changes the packet's source IP address to the node's internal IP address.

When this flag is set, GKE does not perform SNAT for packets sent to any
destination. You must set this flag if the cluster uses privately reused public
IPs.

The \-\-disable\-default\-snat flag is only applicable to private GKE clusters,
which are inherently VPC\-native. Thus, \-\-disable\-default\-snat requires that
you also set \-\-enable\-ip\-alias and \-\-enable\-private\-nodes.
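For example, to disable default SNAT on a private VPC\-native cluster
(additional private\-cluster flags may be required for your setup), run:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-enable\-ip\-alias \-\-enable\-private\-nodes \e
    \-\-disable\-default\-snat
.RE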

.TP 2m
\fB\-\-disable\-l4\-lb\-firewall\-reconciliation\fR

Disable reconciliation on the cluster for L4 Load Balancer VPC firewalls
targeting ingress traffic.

.TP 2m
\fB\-\-disable\-pod\-cidr\-overprovision\fR

Disables pod cidr overprovision on nodes. Pod cidr overprovisioning is enabled
by default.

.TP 2m
\fB\-\-disk\-size\fR=\fIDISK_SIZE\fR

Size for node VM boot disks in GB. Defaults to 100GB.

.TP 2m
\fB\-\-disk\-type\fR=\fIDISK_TYPE\fR

Type of the node VM boot disk. For version 1.24 and later, defaults to
pd\-balanced. For versions earlier than 1.24, defaults to pd\-standard.
\fIDISK_TYPE\fR must be one of: \fBpd\-standard\fR, \fBpd\-ssd\fR,
\fBpd\-balanced\fR, \fBhyperdisk\-balanced\fR, \fBhyperdisk\-extreme\fR,
\fBhyperdisk\-throughput\fR.
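For example, to use 200 GB pd\-ssd boot disks (illustrative values), run:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-disk\-type=pd\-ssd \-\-disk\-size=200
.RE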

.TP 2m
\fB\-\-enable\-authorized\-networks\-on\-private\-endpoint\fR

Enable enforcement of \-\-master\-authorized\-networks CIDR ranges for traffic
reaching cluster's control plane via private IP.

.TP 2m
\fB\-\-enable\-auto\-ipam\fR

Enable the Auto IP Address Management (Auto IPAM) feature for the cluster.

.TP 2m
\fB\-\-enable\-autorepair\fR

Enable node autorepair feature for a cluster's default node pool(s).

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-enable\-autorepair
.RE

Node autorepair is enabled by default for clusters using COS, COS_CONTAINERD,
UBUNTU or UBUNTU_CONTAINERD as a base image; use \-\-no\-enable\-autorepair to
disable it.

See https://cloud.google.com/kubernetes\-engine/docs/how\-to/node\-auto\-repair
for more info.

.TP 2m
\fB\-\-enable\-autoupgrade\fR

Enables the node autoupgrade feature for a cluster's default node pool(s).

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-enable\-autoupgrade
.RE

See https://cloud.google.com/kubernetes\-engine/docs/node\-auto\-upgrades for
more info.

Enabled by default, use \fB\-\-no\-enable\-autoupgrade\fR to disable.

.TP 2m
\fB\-\-enable\-cilium\-clusterwide\-network\-policy\fR

Enable Cilium Clusterwide Network Policies on the cluster. Disabled by default.

.TP 2m
\fB\-\-enable\-cloud\-logging\fR

(DEPRECATED) Automatically send logs from the cluster to the Google Cloud
Logging API.

Legacy Logging and Monitoring is deprecated. Thus, flag
\f5\-\-enable\-cloud\-logging\fR is also deprecated and will be removed in an
upcoming release. Please use \f5\-\-logging\fR (optionally with
\f5\-\-monitoring\fR). For more details, please read:
https://cloud.google.com/kubernetes\-engine/docs/concepts/about\-logs and
https://cloud.google.com/kubernetes\-engine/docs/how\-to/configure\-metrics.

.TP 2m
\fB\-\-enable\-cloud\-monitoring\fR

(DEPRECATED) Automatically send metrics from pods in the cluster to the Google
Cloud Monitoring API. VM metrics will be collected by Google Compute Engine
regardless of this setting.

Legacy Logging and Monitoring is deprecated. Thus, flag
\f5\-\-enable\-cloud\-monitoring\fR is also deprecated. Please use
\f5\-\-monitoring\fR (optionally with \f5\-\-logging\fR). For more details,
please read:
https://cloud.google.com/kubernetes\-engine/docs/how\-to/configure\-metrics and
https://cloud.google.com/kubernetes\-engine/docs/concepts/about\-logs.

.TP 2m
\fB\-\-enable\-cloud\-run\-alpha\fR

Enable Cloud Run alpha features on this cluster. Selecting this option will
result in the cluster having all Cloud Run alpha API groups and features turned
on.

Cloud Run alpha clusters are not covered by the Cloud Run SLA and should not be
used for production workloads.

.TP 2m
\fB\-\-enable\-confidential\-nodes\fR

Enable confidential nodes for the cluster. Enabling Confidential Nodes will
create nodes using Confidential VM
https://cloud.google.com/compute/confidential\-vm/docs/about\-cvm.

.TP 2m
\fB\-\-enable\-confidential\-storage\fR

Enable confidential storage for the cluster. Enabling Confidential Storage will
create boot disks in confidential mode.

.TP 2m
\fB\-\-enable\-cost\-allocation\fR

Enable the cost management feature.

When enabled, you can get informational GKE cost breakdowns by cluster,
namespace and label in your billing data exported to BigQuery
(https://cloud.google.com/billing/docs/how\-to/export\-data\-bigquery).

.TP 2m
\fB\-\-enable\-dataplane\-v2\fR

Enables the eBPF\-based dataplane (Dataplane V2) for GKE clusters, which is
required for network security, scalability, and visibility features.

.TP 2m
\fB\-\-enable\-default\-compute\-class\fR

Enable the default compute class for the cluster.

To disable Default Compute Class in an existing cluster, explicitly set flag
\f5\-\-no\-enable\-default\-compute\-class\fR.

.TP 2m
\fB\-\-enable\-dns\-access\fR

Enable access to the cluster's control plane over the DNS\-based endpoint.

DNS\-based control plane access is recommended.

.TP 2m
\fB\-\-enable\-fleet\fR

Set the cluster's project as the fleet host project. This registers the cluster
to a fleet in the same project. To register the cluster to a fleet in a
different project, use \f5\-\-fleet\-project=FLEET_HOST_PROJECT\fR.

Example:

.RS 2m
$ gcloud alpha container clusters create \-\-enable\-fleet
.RE

.TP 2m
\fB\-\-enable\-fqdn\-network\-policy\fR

Enable FQDN Network Policies on the cluster. FQDN Network Policies are disabled
by default.

.TP 2m
\fB\-\-enable\-gke\-oidc\fR

(DEPRECATED) Enable GKE OIDC authentication on the cluster.

When enabled, users can authenticate to the Kubernetes cluster after properly
setting up the OIDC config.

GKE OIDC is by default disabled when creating a new cluster. To disable GKE OIDC
in an existing cluster, explicitly set flag \f5\-\-no\-enable\-gke\-oidc\fR.

GKE OIDC is being replaced by Identity Service across Anthos and GKE. Thus, flag
\f5\-\-enable\-gke\-oidc\fR is also deprecated. Please use
\f5\-\-enable\-identity\-service\fR to enable the Identity Service component.

.TP 2m
\fB\-\-enable\-google\-cloud\-access\fR

When you enable Google Cloud Access, any public IP addresses owned by Google
Cloud can reach the public control plane endpoint of your cluster.

.TP 2m
\fB\-\-enable\-gvnic\fR

Enable the use of GVNIC for this cluster. Requires re\-creation of nodes using
either a node\-pool upgrade or node\-pool creation.

.TP 2m
\fB\-\-enable\-identity\-service\fR

Enable Identity Service component on the cluster.

When enabled, users can authenticate to Kubernetes cluster with external
identity providers.

Identity Service is by default disabled when creating a new cluster. To disable
Identity Service in an existing cluster, explicitly set flag
\f5\-\-no\-enable\-identity\-service\fR.

.TP 2m
\fB\-\-enable\-image\-streaming\fR

Specifies whether to enable image streaming on the cluster.

.TP 2m
\fB\-\-enable\-insecure\-kubelet\-readonly\-port\fR

Enables the Kubelet's insecure read\-only port.

To disable the read\-only port on a cluster or node pool, set the flag to
\f5\-\-no\-enable\-insecure\-kubelet\-readonly\-port\fR.

.TP 2m
\fB\-\-enable\-intra\-node\-visibility\fR

Enable Intra\-node visibility for this cluster.

Enabling intra\-node visibility makes your intra\-node pod\-to\-pod traffic
visible to the networking fabric. With this feature, you can use VPC flow
logging or other VPC features for intra\-node traffic.

Enabling it on an existing cluster causes the cluster master and the cluster
nodes to restart, which might cause a disruption.

.TP 2m
\fB\-\-enable\-ip\-access\fR

Enable access to the cluster's control plane over private IP, and over public
IP if \-\-enable\-private\-endpoint is not enabled.

.TP 2m
\fB\-\-enable\-ip\-alias\fR

\-\-enable\-ip\-alias creates a VPC\-native cluster. If you set this option, you
can optionally specify the IP address ranges to use for Pods and Services. For
instructions, see
https://cloud.google.com/kubernetes\-engine/docs/how\-to/alias\-ips.

\-\-no\-enable\-ip\-alias creates a routes\-based cluster. This type of cluster
routes traffic between Pods using Google Cloud Routes. This option is not
recommended; use the default VPC\-native cluster type instead. For instructions,
see
https://cloud.google.com/kubernetes\-engine/docs/how\-to/routes\-based\-cluster

Note: For IPv6\-only clusters, these flags are a no\-op as IP Aliases do not
apply, and any specified IP address ranges for Pods and Services will be
ignored.

You can't specify both \-\-enable\-ip\-alias and \-\-no\-enable\-ip\-alias. If
you omit both \-\-enable\-ip\-alias and \-\-no\-enable\-ip\-alias, the default
is a VPC\-native cluster.
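
For example, to create a VPC\-native cluster and let GKE choose the Pod and
Service address ranges automatically:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-enable\-ip\-alias
.RE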

.TP 2m
\fB\-\-enable\-k8s\-certs\-via\-dns\fR

Enable Kubernetes client certificate authentication to the cluster's control
plane over the DNS\-based endpoint.

.TP 2m
\fB\-\-enable\-k8s\-tokens\-via\-dns\fR

Enable Kubernetes Service Account token authentication to the cluster's control
plane over the DNS\-based endpoint.

.TP 2m
\fB\-\-enable\-kubernetes\-alpha\fR

Enable Kubernetes alpha features on this cluster. Selecting this option will
result in the cluster having all Kubernetes alpha API groups and features turned
on. Cluster upgrades (both manual and automatic) will be disabled and the
cluster will be automatically deleted after 30 days.

Alpha clusters are not covered by the Kubernetes Engine SLA and should not be
used for production workloads.

.TP 2m
\fB\-\-enable\-kubernetes\-unstable\-apis\fR=\fIAPI\fR,[\fIAPI\fR,...]

Enable Kubernetes beta API features on this cluster. Beta APIs are not expected
to be production ready and should be avoided in production\-grade environments.

.TP 2m
\fB\-\-enable\-l4\-ilb\-subsetting\fR

Enable Subsetting for L4 ILB services created on this cluster.

.TP 2m
\fB\-\-enable\-legacy\-authorization\fR

Enables legacy ABAC authorization for the cluster. User rights are granted
through the use of policies which combine attributes together. For a detailed
look at these properties and related formats, see
https://kubernetes.io/docs/admin/authorization/abac/. To use RBAC permissions
instead, create or update your cluster with the option
\f5\-\-no\-enable\-legacy\-authorization\fR.

.TP 2m
\fB\-\-enable\-legacy\-lustre\-port\fR

Allow the Lustre CSI driver to initialize LNet (the virtual network layer for
Lustre kernel module) using port 6988. This flag is required to work around a
port conflict with the gke\-metadata\-server on GKE nodes.

.TP 2m
\fB\-\-enable\-logging\-monitoring\-system\-only\fR

(DEPRECATED) Enable Cloud Operations system\-only monitoring and logging.

The \f5\-\-enable\-logging\-monitoring\-system\-only\fR flag is deprecated and
will be removed in an upcoming release. Please use \f5\-\-logging\fR and
\f5\-\-monitoring\fR instead. For more information, please read:
https://cloud.google.com/kubernetes\-engine/docs/concepts/about\-logs and
https://cloud.google.com/kubernetes\-engine/docs/how\-to/configure\-metrics.

.TP 2m
\fB\-\-enable\-managed\-prometheus\fR

Enables managed collection for Managed Service for Prometheus in the cluster.

See
https://cloud.google.com/stackdriver/docs/managed\-prometheus/setup\-managed#enable\-mgdcoll\-gke
for more info.

Enabled by default for cluster versions 1.27 or greater; use
\-\-no\-enable\-managed\-prometheus to disable.

.TP 2m
\fB\-\-enable\-master\-global\-access\fR

Use with private clusters to allow access to the master's private endpoint from
any Google Cloud region or on\-premises environment regardless of the private
cluster's region.

.TP 2m
\fB\-\-enable\-multi\-networking\fR

Enables multi\-networking on the cluster. Multi\-networking is disabled by
default.

.TP 2m
\fB\-\-enable\-nested\-virtualization\fR

Enables the use of nested virtualization on the default initial node pool.
Defaults to \f5false\fR. Can only be enabled on UBUNTU_CONTAINERD base image or
COS_CONTAINERD base image with version 1.28.4\-gke.1083000 and above.

.TP 2m
\fB\-\-enable\-network\-policy\fR

Enable network policy enforcement for this cluster. If you are enabling network
policy on an existing cluster, the network policy addon must first be enabled
on the master by using the \-\-update\-addons=NetworkPolicy=ENABLED flag.

.TP 2m
\fB\-\-enable\-pod\-security\-policy\fR

Enables the pod security policy admission controller for the cluster. The pod
security policy admission controller adds fine\-grained pod create and update
authorization controls through the PodSecurityPolicy API objects. For more
information, see
https://cloud.google.com/kubernetes\-engine/docs/how\-to/pod\-security\-policies.

.TP 2m
\fB\-\-enable\-ray\-cluster\-logging\fR

Enable automatic log processing sidecar for Ray clusters.

.TP 2m
\fB\-\-enable\-ray\-cluster\-monitoring\fR

Enable automatic metrics collection for Ray clusters.

.TP 2m
\fB\-\-enable\-service\-externalips\fR

Enables use of services with the externalIPs field.

.TP 2m
\fB\-\-enable\-shielded\-nodes\fR

Enable Shielded Nodes for this cluster. Enabling Shielded Nodes will enable a
more secure Node credential bootstrapping implementation. Starting with version
1.18, clusters will have Shielded GKE nodes by default.

.TP 2m
\fB\-\-enable\-stackdriver\-kubernetes\fR

(DEPRECATED) Enable Cloud Operations for GKE.

The \f5\-\-enable\-stackdriver\-kubernetes\fR flag is deprecated and will be
removed in an upcoming release. Please use \f5\-\-logging\fR and
\f5\-\-monitoring\fR instead. For more information, please read:
https://cloud.google.com/kubernetes\-engine/docs/concepts/about\-logs and
https://cloud.google.com/kubernetes\-engine/docs/how\-to/configure\-metrics.

.TP 2m

Flags for vertical pod autoscaling:

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-enable\-vertical\-pod\-autoscaling\fR

Enable vertical pod autoscaling for a cluster.

.RE
.sp
.TP 2m
\fB\-\-fleet\-project\fR=\fIPROJECT_ID_OR_NUMBER\fR

Sets fleet host project for the cluster. If specified, the current cluster will
be registered as a fleet membership under the fleet host project.

Example:

.RS 2m
$ gcloud alpha container clusters create \-\-fleet\-project=my\-project
.RE

.TP 2m
\fB\-\-gateway\-api\fR=\fIGATEWAY_API\fR

Enables the GKE Gateway controller in this cluster. The value of the flag
specifies
which Open Source Gateway API release channel will be used to define Gateway
resources. \fIGATEWAY_API\fR must be one of:

.RS 2m
.TP 2m
\fBdisabled\fR
Gateway controller will be disabled in the cluster.

.TP 2m
\fBstandard\fR
Gateway controller will be enabled in the cluster. Resource definitions from the
\f5standard\fR OSS Gateway API release channel will be installed.
.RE
.sp
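
For example, to enable the Gateway controller with the \f5standard\fR release
channel:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-gateway\-api=standard
.RE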


.TP 2m
\fB\-\-hpa\-profile\fR=\fIHPA_PROFILE\fR

Set Horizontal Pod Autoscaler behavior. Accepted values are: none, performance.
For more information, see
https://cloud.google.com/kubernetes\-engine/docs/how\-to/horizontal\-pod\-autoscaling#hpa\-profile.
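
For example, to select the performance profile:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-hpa\-profile=performance
.RE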

.TP 2m
\fB\-\-identity\-provider\fR=\fIIDENTITY_PROVIDER\fR

Enable a third\-party identity provider on the cluster.

.TP 2m
\fB\-\-image\-type\fR=\fIIMAGE_TYPE\fR

The image type to use for the cluster. Defaults to server\-specified.

Image Type specifies the base OS that the nodes in the cluster will run on. If
an image type is specified, it will be assigned to the cluster and all future
upgrades will use the specified image type. If it is not specified, the server
will pick the default image type.

The default image type and the list of valid image types are available using the
following command.

.RS 2m
$ gcloud container get\-server\-config
.RE
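
For example, to create a cluster whose nodes use the COS_CONTAINERD image:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-image\-type=COS_CONTAINERD
.RE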

.TP 2m
\fB\-\-in\-transit\-encryption\fR=\fIIN_TRANSIT_ENCRYPTION\fR

Enable Dataplane V2 in\-transit encryption. Dataplane v2 in\-transit encryption
is disabled by default. \fIIN_TRANSIT_ENCRYPTION\fR must be one of:
\fBinter\-node\-transparent\fR, \fBnone\fR.
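
For example, to enable transparent inter\-node in\-transit encryption:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-in\-transit\-encryption=inter\-node\-transparent
.RE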

.TP 2m
\fB\-\-ipv6\-access\-type\fR=\fIIPV6_ACCESS_TYPE\fR

IPv6 access type of the subnetwork. Defaults to 'external'.
\fIIPV6_ACCESS_TYPE\fR must be one of: \fBexternal\fR, \fBinternal\fR.

.TP 2m
\fB\-\-issue\-client\-certificate\fR

Issue a TLS client certificate with admin permissions.

When enabled, the certificate and private key pair will be present in MasterAuth
field of the Cluster object. For cluster versions before 1.12, a client
certificate will be issued by default. As of 1.12, client certificates are
disabled by default.

.TP 2m
\fB\-\-istio\-config\fR=[\fIauth\fR=\fIMTLS_PERMISSIVE\fR,...]

(REMOVED) Configurations for Istio addon, requires \-\-addons contains Istio for
create, or \-\-update\-addons Istio=ENABLED for update.

.RS 2m
.TP 2m
\fBauth\fR
(Optional) Type of auth MTLS_PERMISSIVE or MTLS_STRICT.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-istio\-config=auth=MTLS_PERMISSIVE
.RE

The \f5\-\-istio\-config\fR flag is no longer supported. For more information
and migration, see
https://cloud.google.com/istio/docs/istio\-on\-gke/migrate\-to\-anthos\-service\-mesh.

.RE
.sp
.TP 2m
\fB\-\-labels\fR=[\fIKEY\fR=\fIVALUE\fR,...]

Labels to apply to the Google Cloud resources in use by the Kubernetes Engine
cluster. These are unrelated to Kubernetes labels.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-labels=label_a=value1,label_b=,label_c=value3
.RE

.TP 2m
\fB\-\-linux\-sysctls\fR=\fIKEY\fR=\fIVALUE\fR,[\fIKEY\fR=\fIVALUE\fR,...]

(DEPRECATED) Linux kernel parameters to be applied to all nodes in the new
cluster's default node pool as well as the pods running on the nodes.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-linux\-sysctls="net.core.somaxconn=1024,net.ipv4.tcp_rmem=4096 \e
87380 6291456"
.RE

The \f5\-\-linux\-sysctls\fR flag is deprecated. Please use
\f5\-\-system\-config\-from\-file\fR instead.

.TP 2m
\fB\-\-local\-ssd\-encryption\-mode\fR=\fILOCAL_SSD_ENCRYPTION_MODE\fR

Encryption mode for Local SSDs on the cluster. \fILOCAL_SSD_ENCRYPTION_MODE\fR
must be one of: \fBSTANDARD_ENCRYPTION\fR, \fBEPHEMERAL_KEY_ENCRYPTION\fR.
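
For example, to use ephemeral key encryption for Local SSDs:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-local\-ssd\-encryption\-mode=EPHEMERAL_KEY_ENCRYPTION
.RE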

.TP 2m
\fB\-\-logging\fR=[\fICOMPONENT\fR,...]

Set the components that have logging enabled. Valid component values are:
\f5SYSTEM\fR, \f5WORKLOAD\fR, \f5API_SERVER\fR, \f5CONTROLLER_MANAGER\fR,
\f5SCHEDULER\fR, \f5NONE\fR

For more information, see
https://cloud.google.com/kubernetes\-engine/docs/concepts/about\-logs#available\-logs

Examples:

.RS 2m
$ gcloud alpha container clusters create \-\-logging=SYSTEM
$ gcloud alpha container clusters create \e
    \-\-logging=SYSTEM,API_SERVER,WORKLOAD
$ gcloud alpha container clusters create \-\-logging=NONE
.RE

.TP 2m
\fB\-\-logging\-variant\fR=\fILOGGING_VARIANT\fR

Specifies the logging variant that will be deployed on all the nodes in the
cluster. Valid logging variants are \f5MAX_THROUGHPUT\fR, \f5DEFAULT\fR. If no
value is specified, DEFAULT is used. \fILOGGING_VARIANT\fR must be one of:

.RS 2m
.TP 2m
\fBDEFAULT\fR
\'DEFAULT' variant requests minimal resources but may not guarantee high
throughput.
.TP 2m
\fBMAX_THROUGHPUT\fR
\'MAX_THROUGHPUT' variant requests more node resources and is able to achieve
logging throughput up to 10MB per sec.
.RE
.sp
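
For example, to deploy the high\-throughput logging variant on all nodes:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-logging\-variant=MAX_THROUGHPUT
.RE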


.TP 2m
\fB\-\-machine\-type\fR=\fIMACHINE_TYPE\fR, \fB\-m\fR \fIMACHINE_TYPE\fR

The type of machine to use for nodes. Defaults to e2\-medium. The list of
predefined machine types is available using the following command:

.RS 2m
$ gcloud compute machine\-types list
.RE

You can also specify custom machine types by providing a string with the format
"custom\-CPUS\-RAM" where "CPUS" is the number of virtual CPUs and "RAM" is the
amount of RAM in MiB.

For example, to create a node pool using custom machines with 2 vCPUs and 12 GB
of RAM:

.RS 2m
$ gcloud alpha container clusters create high\-mem\-pool \e
    \-\-machine\-type=custom\-2\-12288
.RE

.TP 2m
\fB\-\-max\-nodes\-per\-pool\fR=\fIMAX_NODES_PER_POOL\fR

The maximum number of nodes to allocate per default initial node pool.
Kubernetes Engine will automatically create enough node pools such that each
node pool contains fewer than \f5\-\-max\-nodes\-per\-pool\fR nodes. Defaults to
1000 nodes, but can be set as low as 100 nodes per pool on initial create.

.TP 2m
\fB\-\-max\-pods\-per\-node\fR=\fIMAX_PODS_PER_NODE\fR

The max number of pods per node for this node pool.

This flag sets the maximum number of pods that can be run at the same time on a
node. This will override the value given with \-\-default\-max\-pods\-per\-node
flag set at the cluster level.

Must be used in conjunction with '\-\-enable\-ip\-alias'.
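
For example, to cap each node in the default node pool at 64 pods (the value
shown is illustrative):

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-enable\-ip\-alias \-\-max\-pods\-per\-node=64
.RE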

.TP 2m
\fB\-\-max\-surge\-upgrade\fR=\fIMAX_SURGE_UPGRADE\fR; default=1

Number of extra (surge) nodes to be created on each upgrade of a node pool.

Specifies the number of extra (surge) nodes to be created during this node
pool's upgrades. For example, running the following command will result in
creating an extra node each time the node pool is upgraded:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-max\-surge\-upgrade=1 \-\-max\-unavailable\-upgrade=0
.RE

Must be used in conjunction with '\-\-max\-unavailable\-upgrade'.

.TP 2m
\fB\-\-max\-unavailable\-upgrade\fR=\fIMAX_UNAVAILABLE_UPGRADE\fR

Number of nodes that can be unavailable at the same time on each upgrade of a
node pool.

Specifies the number of nodes that can be unavailable at the same time while
this node pool is being upgraded. For example, running the following command
will result in having 3 nodes being upgraded in parallel (1 + 2), but keeping
always at least 3 (5 \- 2) available each time the node pool is upgraded:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
   \-\-num\-nodes=5 \-\-max\-surge\-upgrade=1      \e
   \-\-max\-unavailable\-upgrade=2
.RE

Must be used in conjunction with '\-\-max\-surge\-upgrade'.

.TP 2m
\fB\-\-membership\-type\fR=\fIMEMBERSHIP_TYPE\fR

Specify a membership type for the cluster's fleet membership.

Example:

.RS 2m
$ gcloud alpha container clusters create \e
    \-\-membership\-type=LIGHTWEIGHT
.RE

\fIMEMBERSHIP_TYPE\fR must be (only one value is supported):

.RS 2m
.TP 2m
\fBLIGHTWEIGHT\fR
Fleet membership representing this cluster will be lightweight.

.RE
.sp


.TP 2m
\fB\-\-metadata\fR=\fIKEY\fR=\fIVALUE\fR,[\fIKEY\fR=\fIVALUE\fR,...]

Compute Engine metadata to be made available to the guest operating system
running on nodes within the node pool.

Each metadata entry is a key/value pair separated by an equals sign. Metadata
keys must be unique and less than 128 bytes in length. Values must be less than
or equal to 32,768 bytes in length. The total size of all keys and values must
be less than 512 KB. Multiple arguments can be passed to this flag. For example:

\f5\fI\-\-metadata key\-1=value\-1,key\-2=value\-2,key\-3=value\-3\fR\fR

Additionally, the following keys are reserved for use by Kubernetes Engine:

.RS 2m
.IP "\(em" 2m
\f5\fIcluster\-location\fR\fR
.IP "\(em" 2m
\f5\fIcluster\-name\fR\fR
.IP "\(em" 2m
\f5\fIcluster\-uid\fR\fR
.IP "\(em" 2m
\f5\fIconfigure\-sh\fR\fR
.IP "\(em" 2m
\f5\fIenable\-os\-login\fR\fR
.IP "\(em" 2m
\f5\fIgci\-update\-strategy\fR\fR
.IP "\(em" 2m
\f5\fIgci\-ensure\-gke\-docker\fR\fR
.IP "\(em" 2m
\f5\fIinstance\-template\fR\fR
.IP "\(em" 2m
\f5\fIkube\-env\fR\fR
.IP "\(em" 2m
\f5\fIstartup\-script\fR\fR
.IP "\(em" 2m
\f5\fIuser\-data\fR\fR
.RE
.sp

Google Kubernetes Engine sets the following keys by default:

.RS 2m
.IP "\(em" 2m
\f5\fIserial\-port\-logging\-enable\fR\fR
.RE
.sp

See also Compute Engine's documentation
(https://cloud.google.com/compute/docs/storing\-retrieving\-metadata) on storing
and retrieving instance metadata.

.TP 2m
\fB\-\-metadata\-from\-file\fR=\fIKEY\fR=\fILOCAL_FILE_PATH\fR,[...]

Same as \f5\fI\-\-metadata\fR\fR except that the value for the entry will be
read from a local file.

.TP 2m
\fB\-\-min\-cpu\-platform\fR=\fIPLATFORM\fR

When specified, the nodes for the new cluster's default node pool will be
scheduled on hosts with the specified CPU platform or a newer one.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-min\-cpu\-platform=PLATFORM
.RE

To list available CPU platforms in a given zone, run:

.RS 2m
$ gcloud beta compute zones describe ZONE \e
    \-\-format="value(availableCpuPlatforms)"
.RE

CPU platform selection is available only in selected zones.

.TP 2m
\fB\-\-monitoring\fR=[\fICOMPONENT\fR,...]

Set the components that have monitoring enabled. Valid component values are:
\f5SYSTEM\fR, \f5WORKLOAD\fR (Deprecated), \f5NONE\fR, \f5API_SERVER\fR,
\f5CONTROLLER_MANAGER\fR, \f5SCHEDULER\fR, \f5DAEMONSET\fR, \f5DEPLOYMENT\fR,
\f5HPA\fR, \f5POD\fR, \f5STATEFULSET\fR, \f5STORAGE\fR, \f5CADVISOR\fR,
\f5KUBELET\fR, \f5DCGM\fR, \f5JOBSET\fR

For more information, see
https://cloud.google.com/kubernetes\-engine/docs/how\-to/configure\-metrics#available\-metrics

Examples:

.RS 2m
$ gcloud alpha container clusters create \e
    \-\-monitoring=SYSTEM,API_SERVER,POD
$ gcloud alpha container clusters create \-\-monitoring=NONE
.RE

.TP 2m
\fB\-\-network\fR=\fINETWORK\fR

The Compute Engine Network that the cluster will connect to. Google Kubernetes
Engine will use this network when creating routes and firewalls for the
clusters. Defaults to the 'default' network.
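
For example, to connect the cluster to a network named 'my\-network' (the name
is a placeholder):

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-network=my\-network
.RE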

.TP 2m
\fB\-\-network\-performance\-configs\fR=[\fIPROPERTY1\fR=\fIVALUE1\fR,...]

Configures network performance settings for the cluster. Node pools can override
with their own settings.

.RS 2m
.TP 2m
\fBtotal\-egress\-bandwidth\-tier\fR
Total egress bandwidth is the available outbound bandwidth from a VM, regardless
of whether the traffic is going to internal IP or external IP destinations. The
following tier values are allowed: [TIER_UNSPECIFIED,TIER_1].

See
https://cloud.google.com/compute/docs/networking/configure\-vm\-with\-high\-bandwidth\-configuration
for more information.

.RE
.sp
.TP 2m
\fB\-\-node\-labels\fR=[\fINODE_LABEL\fR,...]

Applies the given Kubernetes labels on all nodes in the new node pool.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-node\-labels=label\-a=value1,label\-2=value2
.RE

Updating the node pool's \-\-node\-labels flag applies the labels to the
Kubernetes Node objects for existing nodes in\-place; it does not re\-create or
replace nodes. New nodes, including ones created by resizing or re\-creating
nodes, will have these labels on the Kubernetes API Node object. The labels can
be used in the \f5nodeSelector\fR field. See
https://kubernetes.io/docs/concepts/scheduling\-eviction/assign\-pod\-node/ for
examples.

Note that Kubernetes labels, intended to associate cluster components and
resources with one another and manage resource lifecycles, are different from
Google Kubernetes Engine labels that are used for the purpose of tracking
billing and usage information.

.TP 2m
\fB\-\-node\-pool\-name\fR=\fINODE_POOL_NAME\fR

Name of the initial node pool that will be created for the cluster.

Specifies the name to use for the initial node pool that will be created with
the cluster. If the settings specified require multiple node pools to be
created, the name for each pool will be prefixed by this name. For example,
running the following will result in three node pools being created,
example\-node\-pool\-0, example\-node\-pool\-1 and example\-node\-pool\-2:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-num\-nodes 9 \-\-max\-nodes\-per\-pool 3     \e
    \-\-node\-pool\-name example\-node\-pool
.RE

.TP 2m
\fB\-\-node\-taints\fR=[\fINODE_TAINT\fR,...]

Applies the given Kubernetes taints to all nodes in the default node pool(s) of
the new cluster; taints can be used with tolerations for pod scheduling.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-node\-taints=key1=val1:NoSchedule,key2=val2:PreferNoSchedule
.RE

To read more about node\-taints, see
https://cloud.google.com/kubernetes\-engine/docs/node\-taints.

.TP 2m
\fB\-\-node\-version\fR=\fINODE_VERSION\fR

The Kubernetes version to use for nodes. Defaults to server\-specified.

The default Kubernetes version is available using the following command.

.RS 2m
$ gcloud container get\-server\-config
.RE

.TP 2m
\fB\-\-notification\-config\fR=[\fIpubsub\fR=\fIENABLED\fR|\fIDISABLED\fR,\fIpubsub\-topic\fR=\fITOPIC\fR,...]

The notification configuration of the cluster. GKE supports publishing cluster
upgrade notifications to any Pub/Sub topic you created in the same project.
Create a subscription for the topic specified to receive notification messages.
See https://cloud.google.com/pubsub/docs/admin on how to manage Pub/Sub topics
and subscriptions. You can also use the filter option to specify which event
types you'd like to receive from the following options: SecurityBulletinEvent,
UpgradeEvent, UpgradeInfoEvent, UpgradeAvailableEvent.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-notification\-config=pubsub=ENABLED,pubsub\-topic=projects/\e
{project}/topics/{topic\-name}
$ gcloud alpha container clusters create example\-cluster \e
    \-\-notification\-config=pubsub=ENABLED,pubsub\-topic=projects/\e
{project}/topics/{topic\-name},\e
filter="SecurityBulletinEvent|UpgradeEvent"
.RE

The project of the Pub/Sub topic must be the same one as the cluster. It can be
either the project ID or the project number.

.TP 2m
\fB\-\-num\-nodes\fR=\fINUM_NODES\fR; default=3

The number of nodes to be created in each of the cluster's zones.
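
For example, with \f5\-\-num\-nodes=2\fR each of the cluster's zones receives 2
nodes:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-num\-nodes=2
.RE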

.TP 2m
\fB\-\-patch\-update\fR=[\fIPATCH_UPDATE\fR]

The patch update to use for the cluster.

Setting to 'accelerated' automatically upgrades the cluster to the latest patch
available within the cluster's current minor version and release channel.
Setting to 'default' automatically upgrades the cluster to the default patch
upgrade target version available within the cluster's current minor version and
release channel.

\fIPATCH_UPDATE\fR must be one of: \fBaccelerated\fR, \fBdefault\fR.

.TP 2m
\fB\-\-performance\-monitoring\-unit\fR=\fIPERFORMANCE_MONITORING_UNIT\fR

Sets the Performance Monitoring Unit level. Valid values are
\f5architectural\fR, \f5standard\fR and \f5enhanced\fR.
\fIPERFORMANCE_MONITORING_UNIT\fR must be one of:

.RS 2m
.TP 2m
\fBarchitectural\fR
Enables architectural PMU events tied to non last level cache (LLC) events.
.TP 2m
\fBenhanced\fR
Enables most documented core/L2 and LLC PMU events.
.TP 2m
\fBstandard\fR
Enables most documented core/L2 PMU events.
.RE
.sp


.TP 2m
\fB\-\-placement\-policy\fR=\fIPLACEMENT_POLICY\fR

Indicates the desired resource policy to use.

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-placement\-policy my\-placement
.RE

.TP 2m
\fB\-\-placement\-type\fR=\fIPLACEMENT_TYPE\fR

Placement type allows you to define the type of node placement within the
default node pool of this cluster.

\f5UNSPECIFIED\fR \- No requirements on the placement of nodes. This is the
default option.

\f5COMPACT\fR \- GKE will attempt to place the nodes in close proximity to each
other. This helps to reduce the communication latency between the nodes, but
imposes additional limitations on the node pool size.

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-placement\-type=COMPACT
.RE

\fIPLACEMENT_TYPE\fR must be one of: \fBUNSPECIFIED\fR, \fBCOMPACT\fR.

.TP 2m
\fB\-\-preemptible\fR

Create nodes using preemptible VM instances in the new cluster.

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-preemptible
.RE

New nodes, including ones created by resize or recreate, will use preemptible VM
instances. See https://cloud.google.com/kubernetes\-engine/docs/preemptible\-vm
for more information on how to use Preemptible VMs with Kubernetes Engine.

.TP 2m
\fB\-\-private\-endpoint\-subnetwork\fR=\fINAME\fR

Sets the subnetwork GKE uses to provision the control plane's private endpoint.

.TP 2m
\fB\-\-private\-ipv6\-google\-access\-type\fR=\fIPRIVATE_IPV6_GOOGLE_ACCESS_TYPE\fR

Sets the type of private access to Google services over IPv6.

PRIVATE_IPV6_GOOGLE_ACCESS_TYPE must be one of:

.RS 2m
bidirectional
  Allows Google services to initiate connections to GKE pods in this
  cluster. This is not intended for common use, and requires previous
  integration with Google services.
.RE

.RS 2m
disabled
  Default value. Disables private access to Google services over IPv6.
.RE

.RS 2m
outbound\-only
  Allows GKE pods to make fast, secure requests to Google services
  over IPv6. This is the most common use of private IPv6 access.
.RE

.RS 2m
$ gcloud alpha container clusters create       \e
    \-\-private\-ipv6\-google\-access\-type=disabled
$ gcloud alpha container clusters create       \e
    \-\-private\-ipv6\-google\-access\-type=outbound\-only
$ gcloud alpha container clusters create       \e
    \-\-private\-ipv6\-google\-access\-type=bidirectional
.RE

\fIPRIVATE_IPV6_GOOGLE_ACCESS_TYPE\fR must be one of: \fBbidirectional\fR,
\fBdisabled\fR, \fBoutbound\-only\fR.

.TP 2m
\fB\-\-release\-channel\fR=\fICHANNEL\fR

Release channel a cluster is subscribed to.

If left unspecified and a version is specified, the cluster is enrolled in the
most mature release channel where the version is available (first checking
STABLE, then REGULAR, and finally RAPID). Otherwise, if no release channel and
no version is specified, the cluster is enrolled in the REGULAR channel with its
default version. When a cluster is subscribed to a release channel, Google
maintains both the master version and the node version. Node auto\-upgrade is
enabled by default for release channel clusters and can be controlled via
upgrade\-scope exclusions
(https://cloud.google.com/kubernetes\-engine/docs/concepts/maintenance\-windows\-and\-exclusions#scope_of_maintenance_to_exclude).

\fICHANNEL\fR must be one of:

.RS 2m
.TP 2m
\fBNone\fR
Use 'None' to opt out of any release channel.

.TP 2m
\fBextended\fR
Clusters subscribed to 'extended' can remain on a minor version for 24 months
from when the minor version is made available in the Regular channel.

.TP 2m
\fBrapid\fR
\'rapid' channel is offered on an early access basis for customers who want to
test new releases.

WARNING: Versions available in the 'rapid' channel may be subject to unresolved
issues with no known workaround and are not subject to any SLAs.

.TP 2m
\fBregular\fR
Clusters subscribed to 'regular' receive versions that are considered GA
quality. 'regular' is intended for production users who want to take advantage
of new features.

.TP 2m
\fBstable\fR
Clusters subscribed to 'stable' receive versions that are known to be stable and
reliable in production.

.RE
.sp
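
For example, to subscribe a new cluster to the 'regular' channel:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-release\-channel=regular
.RE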


.TP 2m
\fB\-\-resource\-manager\-tags\fR=[\fIKEY\fR=\fIVALUE\fR,...]

Applies the specified comma\-separated resource manager tags that have the
GCE_FIREWALL purpose to all nodes in the new default node pool(s) of a new
cluster.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-resource\-manager\-tags=tagKeys/1234=tagValues/2345
$ gcloud alpha container clusters create example\-cluster \e
    \-\-resource\-manager\-tags=my\-project/key1=value1
$ gcloud alpha container clusters create example\-cluster \e
    \-\-resource\-manager\-tags=12345/key1=value1,23456/key2=value2
$ gcloud alpha container clusters create example\-cluster \e
    \-\-resource\-manager\-tags=
.RE

All nodes, including nodes that are resized or re\-created, will have the
specified tags on the corresponding Instance object in the Compute Engine API.
You can reference these tags in network firewall policy rules. For instructions,
see https://cloud.google.com/firewall/docs/use\-tags\-for\-firewalls.

.TP 2m
\fB\-\-security\-group\fR=\fISECURITY_GROUP\fR

The name of the RBAC security group for use with Google security groups in
Kubernetes RBAC
(https://kubernetes.io/docs/reference/access\-authn\-authz/rbac/).

To include group membership as part of the claims issued by Google during
authentication, a group must be designated as a security group by including it
as a direct member of this group.

If unspecified, no groups will be returned for use with RBAC.

.TP 2m
\fB\-\-security\-posture\fR=\fISECURITY_POSTURE\fR

Sets the mode of the Kubernetes security posture API's off\-cluster features.

To enable advanced mode, explicitly set the flag to
\f5\-\-security\-posture=enterprise\fR.

To enable standard mode, explicitly set the flag to
\f5\-\-security\-posture=standard\fR.

To disable in an existing cluster, explicitly set the flag to
\f5\-\-security\-posture=disabled\fR.

For more information on enablement, see
https://cloud.google.com/kubernetes\-engine/docs/concepts/about\-security\-posture\-dashboard#feature\-enablement.

\fISECURITY_POSTURE\fR must be one of: \fBdisabled\fR, \fBstandard\fR,
\fBenterprise\fR.

.TP 2m
\fB\-\-services\-ipv4\-cidr\fR=\fICIDR\fR

Set the IP range for the services IPs.

Can be specified as a netmask size (e.g. '/20') or in CIDR notation (e.g.
\'10.100.0.0/20'). If given as a netmask size, the IP range will be chosen
automatically from the available space in the network.

If unspecified, the services CIDR range will be chosen with a default mask size.

Cannot be specified unless '\-\-enable\-ip\-alias' option is also specified.
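
For example:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-enable\-ip\-alias \-\-services\-ipv4\-cidr=10.100.0.0/20
.RE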

.TP 2m
\fB\-\-services\-secondary\-range\-name\fR=\fINAME\fR

Set the secondary range to be used for services (e.g. ClusterIPs). NAME must be
the name of an existing secondary range in the cluster subnetwork.


Cannot be specified unless '\-\-enable\-ip\-alias' option is also specified.
Cannot be used with '\-\-create\-subnetwork' option.

.TP 2m
\fB\-\-shielded\-integrity\-monitoring\fR

Enables monitoring and attestation of the boot integrity of the instance. The
attestation is performed against the integrity policy baseline. This baseline is
initially derived from the implicitly trusted boot image when the instance is
created.

.TP 2m
\fB\-\-shielded\-secure\-boot\fR

The instance will boot with secure boot enabled.
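
For example, to enable both Shielded instance options on the default node pool:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-shielded\-secure\-boot \-\-shielded\-integrity\-monitoring
.RE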

.TP 2m
\fB\-\-spot\fR

Create nodes using spot VM instances in the new cluster.

.RS 2m
$ gcloud alpha container clusters create example\-cluster \-\-spot
.RE

New nodes, including ones created by resize or recreate, will use spot VM
instances.

.TP 2m
\fB\-\-stack\-type\fR=\fISTACK_TYPE\fR

IP stack type of the cluster nodes. \fISTACK_TYPE\fR must be one of: \fBipv4\fR,
\fBipv4\-ipv6\fR, \fBipv6\fR.

.TP 2m
\fB\-\-storage\-pools\fR=\fISTORAGE_POOL\fR,[...]

A list of storage pools where the cluster's boot disks will be provisioned.

STORAGE_POOL must be in the format
projects/project/zones/zone/storagePools/storagePool.

.TP 2m
\fB\-\-subnetwork\fR=\fISUBNETWORK\fR

The Google Compute Engine subnetwork
(https://cloud.google.com/compute/docs/subnetworks) to which the cluster is
connected. The subnetwork must belong to the network specified by \-\-network.

Cannot be used with the "\-\-create\-subnetwork" option.

.TP 2m
\fB\-\-system\-config\-from\-file\fR=\fIPATH_TO_FILE\fR

Path of the YAML/JSON file that contains the node configuration, including Linux
kernel parameters (sysctls) and kubelet configs.

Examples:

.RS 2m
kubeletConfig:
  cpuManagerPolicy: static
  memoryManager:
    policy: Static
  topologyManager:
    policy: BestEffort
    scope: pod
linuxConfig:
  sysctl:
    net.core.somaxconn: '2048'
    net.ipv4.tcp_rmem: '4096 87380 6291456'
  hugepageConfig:
    hugepage_size2m: '1024'
    hugepage_size1g: '2'
  swapConfig:
    enabled: true
    bootDiskProfile:
      swapSizeGib: 8
  cgroupMode: 'CGROUP_MODE_V2'
.RE

List of supported kubelet configs in 'kubeletConfig'.


.TS
tab(	);
l(36)B l(90)B
l(36) l(90).
KEY	VALUE
cpuManagerPolicy	either 'static' or 'none'
cpuCFSQuota	true or false (enabled by default)
cpuCFSQuotaPeriod	interval (e.g., '100ms'. The value must be between 1ms and 1 second, inclusive.)
memoryManager	specify memory manager policy
topologyManager	specify topology manager policy and scope
podPidsLimit	integer (The value must be greater than or equal to 1024 and less than 4194304.)
containerLogMaxSize	positive number plus unit suffix (e.g., '100Mi', '0.2Gi'. The value must be between 10Mi and 500Mi, inclusive.)
containerLogMaxFiles	integer (The value must be between [2, 10].)
imageGcLowThresholdPercent	integer (The value must be between [10, 85], and lower than imageGcHighThresholdPercent.)
imageGcHighThresholdPercent	integer (The value must be between [10, 85], and greater than imageGcLowThresholdPercent.)
imageMinimumGcAge	interval (e.g., '100s', '1m'. The value must be less than '2m'.)
imageMaximumGcAge	interval (e.g., '100s', '1m'. The value must be greater than imageMinimumGcAge.)
evictionSoft	specify eviction soft thresholds
evictionSoftGracePeriod	specify eviction soft grace period
evictionMinimumReclaim	specify eviction minimum reclaim thresholds
evictionMaxPodGracePeriodSeconds	integer (Max grace period for pod termination during eviction, in seconds. The value must be between [0, 300].)
allowedUnsafeSysctls	list of sysctls (Allowlisted groups: 'kernel.shm*', 'kernel.msg*', 'kernel.sem', 'fs.mqueue.*', and 'net.*', and sysctls under the groups.)
singleProcessOomKill	true or false
maxParallelImagePulls	integer (The value must be between [2, 5].)
.TE


List of supported keys in memoryManager in 'kubeletConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
policy	either 'Static' or 'None'
.TE

List of supported keys in topologyManager in 'kubeletConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
policy	either 'none' or 'best-effort' or 'single-numa-node' or 'restricted'
scope	either 'pod' or 'container'
.TE

List of supported keys in evictionSoft in 'kubeletConfig'.


.TS
tab(	);
l(25)B l(93)B
l(25) l(93).
KEY	VALUE
memoryAvailable	quantity (e.g., '100Mi', '1Gi'. Represents the amount of memory available before soft eviction. The value must be at least 100Mi and less than 50% of the node's memory.)
nodefsAvailable	percentage (e.g., '20%'. Represents the nodefs available before soft eviction. The value must be between 10% and 50%, inclusive.)
nodefsInodesFree	percentage (e.g., '20%'. Represents the nodefs inodes free before soft eviction. The value must be between 5% and 50%, inclusive.)
imagefsAvailable	percentage (e.g., '20%'. Represents the imagefs available before soft eviction. The value must be between 15% and 50%, inclusive.)
imagefsInodesFree	percentage (e.g., '20%'. Represents the imagefs inodes free before soft eviction. The value must be between 5% and 50%, inclusive.)
pidAvailable	percentage (e.g., '20%'. Represents the pid available before soft eviction. The value must be between 10% and 50%, inclusive.)
.TE

List of supported keys in evictionSoftGracePeriod in 'kubeletConfig'.


.TS
tab(	);
l(25)B l(93)B
l(25) l(93).
KEY	VALUE
memoryAvailable	duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
nodefsAvailable	duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
nodefsInodesFree	duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
imagefsAvailable	duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
imagefsInodesFree	duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
pidAvailable	duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
.TE

List of supported keys in evictionMinimumReclaim in 'kubeletConfig'.


.TS
tab(	);
l(25)B l(93)B
l(25) l(93).
KEY	VALUE
memoryAvailable	percentage (e.g., '5%'. Represents the minimum reclaim threshold for memory available. The value must be positive and no more than 10%.)
nodefsAvailable	percentage (e.g., '5%'. Represents the minimum reclaim threshold for nodefs available. The value must be positive and no more than 10%.)
nodefsInodesFree	percentage (e.g., '5%'. Represents the minimum reclaim threshold for nodefs inodes free. The value must be positive and no more than 10%.)
imagefsAvailable	percentage (e.g., '5%'. Represents the minimum reclaim threshold for imagefs available. The value must be positive and no more than 10%.)
imagefsInodesFree	percentage (e.g., '5%'. Represents the minimum reclaim threshold for imagefs inodes free. The value must be positive and no more than 10%.)
pidAvailable	percentage (e.g., '5%'. Represents the minimum reclaim threshold for pid available. The value must be positive and no more than 10%.)
.TE


List of supported sysctls in 'linuxConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
net.core.netdev_max_backlog	Any positive integer, less than 2147483647
net.core.rmem_default	Must be between [2304, 2147483647]
net.core.rmem_max	Must be between [2304, 2147483647]
net.core.wmem_default	Must be between [4608, 2147483647]
net.core.wmem_max	Must be between [4608, 2147483647]
net.core.optmem_max	Any positive integer, less than 2147483647
net.core.somaxconn	Must be between [128, 2147483647]
net.ipv4.tcp_rmem	Any positive integer tuple
net.ipv4.tcp_wmem	Any positive integer tuple
net.ipv4.tcp_tw_reuse	Must be {0, 1, 2}
net.ipv4.tcp_mtu_probing	Must be {0, 1, 2}
net.ipv4.tcp_max_orphans	Must be between [16384, 262144]
net.ipv4.tcp_max_tw_buckets	Must be between [4096, 2147483647]
net.ipv4.tcp_syn_retries	Must be between [1, 127]
net.ipv4.tcp_ecn	Must be {0, 1, 2}
net.ipv4.tcp_congestion_control	Must be string containing only letters and numbers
net.netfilter.nf_conntrack_max	Must be between [65536, 4194304]
net.netfilter.nf_conntrack_buckets	Must be between [65536, 524288]. Recommend setting: nf_conntrack_max = nf_conntrack_buckets * 4
net.netfilter.nf_conntrack_tcp_timeout_close_wait	Must be between [60, 3600]
net.netfilter.nf_conntrack_tcp_timeout_time_wait	Must be between [1, 600]
net.netfilter.nf_conntrack_tcp_timeout_established	Must be between [600, 86400]
net.netfilter.nf_conntrack_acct	Must be {0, 1}
kernel.shmmni	Must be between [4096, 32768]
kernel.shmmax	Must be between [0, 18446744073692774399]
kernel.shmall	Must be between [0, 18446744073692774399]
kernel.perf_event_paranoid	Must be {-1, 0, 1, 2, 3}
kernel.sched_rt_runtime_us	Must be [-1, 1000000]
kernel.softlockup_panic	Must be {0, 1}
kernel.yama.ptrace_scope	Must be {0, 1, 2, 3}
kernel.kptr_restrict	Must be {0, 1, 2}
kernel.dmesg_restrict	Must be {0, 1}
kernel.sysrq	Must be [0, 511]
fs.aio-max-nr	Must be between [65536, 4194304]
fs.file-max	Must be between [104857, 67108864]
fs.inotify.max_user_instances	Must be between [8192, 1048576]
fs.inotify.max_user_watches	Must be between [8192, 1048576]
fs.nr_open	Must be between [1048576, 2147483584]
vm.dirty_background_ratio	Must be between [1, 100]
vm.dirty_background_bytes	Must be between [0, 68719476736]
vm.dirty_expire_centisecs	Must be between [0, 6000]
vm.dirty_ratio	Must be between [1, 100]
vm.dirty_bytes	Must be between [0, 68719476736]
vm.dirty_writeback_centisecs	Must be between [0, 1000]
vm.max_map_count	Must be between [65536, 2147483647]
vm.overcommit_memory	Must be one of {0, 1, 2}
vm.overcommit_ratio	Must be between [0, 100]
vm.vfs_cache_pressure	Must be between [0, 100]
vm.swappiness	Must be between [0, 200]
vm.watermark_scale_factor	Must be between [10, 3000]
vm.min_free_kbytes	Must be between [67584, 1048576]
.TE

List of supported hugepage size in 'hugepageConfig'.


.TS
tab(	);
l(16)B l(45)B
l(16) l(45).
KEY	VALUE
hugepage_size2m	Number of 2M huge pages, any positive integer
hugepage_size1g	Number of 1G huge pages, any positive integer
.TE

List of supported keys in 'swapConfig' under 'linuxConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
enabled	boolean
encryptionConfig	specify encryption settings for the swap space
bootDiskProfile	specify swap on the node's boot disk
ephemeralLocalSsdProfile	specify swap on the local SSD shared with pod ephemeral storage
dedicatedLocalSsdProfile	specify swap on a new, separate local NVMe SSD exclusively for swap
.TE

List of supported keys in 'encryptionConfig' under 'swapConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
disabled	boolean
.TE

List of supported keys in 'bootDiskProfile' under 'swapConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
swapSizeGib	integer
swapSizePercent	integer
.TE

List of supported keys in 'ephemeralLocalSsdProfile' under 'swapConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
swapSizeGib	integer
swapSizePercent	integer
.TE

List of supported keys in 'dedicatedLocalSsdProfile' under 'swapConfig'.


.TS
tab(	);
l(42)B l(42)B
l(42) l(42).
KEY	VALUE
diskCount	integer
.TE


Allocated hugepage size should not exceed 60% of available memory on the node.
For example, c2d\-highcpu\-4 has 8GB memory, so the total memory allocated to 2m
and 1g hugepages should not exceed 8GB * 0.6 = 4.8GB.

1G hugepages are only available in the following machine families: c3, m2, c2d,
c3d, h3, m3, a2, a3, g2.

Supported values for 'cgroupMode' under 'linuxConfig'.

.RS 2m
.IP "\(em" 2m
\f5CGROUP_MODE_V1\fR: Use cgroupv1 on the node pool.
.IP "\(em" 2m
\f5CGROUP_MODE_V2\fR: Use cgroupv2 on the node pool.
.IP "\(em" 2m
\f5CGROUP_MODE_UNSPECIFIED\fR: Use the default GKE cgroup configuration.
.RE
.sp

Supported values for 'transparentHugepageEnabled' under 'linuxConfig', which
controls transparent hugepage support for anonymous memory.

.RS 2m
.IP "\(em" 2m
\f5TRANSPARENT_HUGEPAGE_ENABLED_ALWAYS\fR: Transparent hugepage is enabled
system wide.
.IP "\(em" 2m
\f5TRANSPARENT_HUGEPAGE_ENABLED_MADVISE\fR: Transparent hugepage is enabled
inside MADV_HUGEPAGE regions. This is the default kernel configuration.
.IP "\(em" 2m
\f5TRANSPARENT_HUGEPAGE_ENABLED_NEVER\fR: Transparent hugepage is disabled.
.IP "\(em" 2m
\f5TRANSPARENT_HUGEPAGE_ENABLED_UNSPECIFIED\fR: Default value. GKE will not
modify the kernel configuration.
.RE
.sp

Supported values for 'transparentHugepageDefrag' under 'linuxConfig', which
defines the transparent hugepage defrag configuration on the node.

.RS 2m
.IP "\(em" 2m
\f5TRANSPARENT_HUGEPAGE_DEFRAG_ALWAYS\fR: It means that an application
requesting THP will stall on allocation failure and directly reclaim pages and
compact memory in an effort to allocate a THP immediately.
.IP "\(em" 2m
\f5TRANSPARENT_HUGEPAGE_DEFRAG_DEFER\fR: It means that an application will wake
kswapd in the background to reclaim pages and wake kcompactd to compact memory
so that THP is available in the near future. It is the responsibility of
khugepaged to then install the THP pages later.
.IP "\(em" 2m
\f5TRANSPARENT_HUGEPAGE_DEFRAG_DEFER_WITH_MADVISE\fR: It means that an
application will enter direct reclaim and compaction like always, but only for
regions that have used madvise(MADV_HUGEPAGE); all other regions will wake
kswapd in the background to reclaim pages and wake kcompactd to compact memory
so that THP is available in the near future.
.IP "\(em" 2m
\f5TRANSPARENT_HUGEPAGE_DEFRAG_MADVISE\fR: It means that an application will
enter direct reclaim and compaction like always, but only for regions that have
used madvise(MADV_HUGEPAGE); all other regions will wake kswapd in the
background to reclaim pages and wake kcompactd to compact memory so that THP is
available in the near future.
.IP "\(em" 2m
\f5TRANSPARENT_HUGEPAGE_DEFRAG_NEVER\fR: It means that an application will never
enter direct reclaim or compaction.
.IP "\(em" 2m
\f5TRANSPARENT_HUGEPAGE_DEFRAG_UNSPECIFIED\fR: Default value. GKE will not
modify the kernel configuration.
.RE
.sp

Note: updating the system configuration of an existing node pool requires
recreation of the nodes, which might cause a disruption.

Use a full or relative path to a local file containing the value of
system_config.

.TP 2m
\fB\-\-tags\fR=\fITAG\fR,[\fITAG\fR,...]

Applies the given Compute Engine tags (comma separated) on all nodes in the new
node\-pool.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-tags=tag1,tag2
.RE

New nodes, including ones created by resize or recreate, will have these tags on
the Compute Engine API instance object and can be used in firewall rules. See
https://cloud.google.com/sdk/gcloud/reference/compute/firewall\-rules/create for
examples.

.TP 2m
\fB\-\-threads\-per\-core\fR=\fITHREADS_PER_CORE\fR

The number of visible threads per physical core for each node. To disable
simultaneous multithreading (SMT) set this to 1.
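
For example, to disable SMT on the default node pool:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-threads\-per\-core=1
.RE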

.TP 2m
\fB\-\-tier\fR=\fITIER\fR

(DEPRECATED) Set the desired tier for the cluster.

The \f5\-\-tier\fR flag is deprecated. More info:
https://cloud.google.com/kubernetes\-engine/docs/release\-notes#September_02_2025.
\fITIER\fR must be one of: \fBstandard\fR, \fBenterprise\fR.

.TP 2m
\fB\-\-workload\-metadata\fR=\fIWORKLOAD_METADATA\fR

Type of metadata server available to pods running in the node pool.
\fIWORKLOAD_METADATA\fR must be one of:

.RS 2m
.TP 2m
\fBEXPOSED\fR
[DEPRECATED] Pods running in this node pool have access to the node's underlying
Compute Engine Metadata Server.
.TP 2m
\fBGCE_METADATA\fR
Pods running in this node pool have access to the node's underlying Compute
Engine Metadata Server.
.TP 2m
\fBGKE_METADATA\fR
Run the Kubernetes Engine Metadata Server on this node. The Kubernetes Engine
Metadata Server exposes a metadata API to workloads that is compatible with the
V1 Compute Metadata APIs exposed by the Compute Engine and App Engine Metadata
Servers. This feature can only be enabled if Workload Identity is enabled at the
cluster level.
.TP 2m
\fBGKE_METADATA_SERVER\fR
[DEPRECATED] Run the Kubernetes Engine Metadata Server on this node. The
Kubernetes Engine Metadata Server exposes a metadata API to workloads that is
compatible with the V1 Compute Metadata APIs exposed by the Compute Engine and
App Engine Metadata Servers. This feature can only be enabled if Workload
Identity is enabled at the cluster level.
.TP 2m
\fBSECURE\fR
[DEPRECATED] Prevents pods not in hostNetwork from accessing certain VM
metadata, specifically kube\-env, which contains Kubelet credentials, and the
instance identity token. This is a temporary security solution available while
the bootstrapping process for cluster nodes is being redesigned with significant
security improvements. This feature is scheduled to be deprecated in the future
and later removed.
.RE
.sp


.TP 2m
\fB\-\-workload\-pool\fR=\fIWORKLOAD_POOL\fR

Enable Workload Identity on the cluster.

When enabled, Kubernetes service accounts will be able to act as Cloud IAM
Service Accounts, through the provided workload pool.

Currently, the only accepted workload pool is the workload pool of the Cloud
project containing the cluster, \f5PROJECT_ID.svc.id.goog\fR.

For more information on Workload Identity, see

.RS 2m
https://cloud.google.com/kubernetes\-engine/docs/how\-to/workload\-identity
.RE
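
For example, with a cluster in the project \f5my\-project\fR:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-workload\-pool=my\-project.svc.id.goog
.RE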

.TP 2m
\fB\-\-workload\-vulnerability\-scanning\fR=\fIWORKLOAD_VULNERABILITY_SCANNING\fR

Sets the mode of the Kubernetes security posture API's workload vulnerability
scanning.

To enable Advanced vulnerability insights mode explicitly set the flag to
\f5\-\-workload\-vulnerability\-scanning=enterprise\fR.

To enable in standard mode explicitly set the flag to
\f5\-\-workload\-vulnerability\-scanning=standard\fR.

To disable in an existing cluster, explicitly set the flag to
\f5\-\-workload\-vulnerability\-scanning=disabled\fR.

For more information on enablement, see
https://cloud.google.com/kubernetes\-engine/docs/concepts/about\-security\-posture\-dashboard#feature\-enablement.

\fIWORKLOAD_VULNERABILITY_SCANNING\fR must be one of: \fBdisabled\fR,
\fBstandard\fR, \fBenterprise\fR.

.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-additional\-zones\fR=\fIZONE\fR,[\fIZONE\fR,...]

(DEPRECATED) The set of additional zones in which the specified node footprint
should be replicated. All zones must be in the same region as the cluster's
primary zone. If additional\-zones is not specified, all nodes will be in the
cluster's primary zone.

Note that \f5NUM_NODES\fR nodes will be created in each zone, such that if you
specify \f5\-\-num\-nodes=4\fR and choose one additional zone, 8 nodes will be
created.

Multiple locations can be specified, separated by commas. For example:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-zone us\-central1\-a \e
    \-\-additional\-zones us\-central1\-b,us\-central1\-c
.RE

This flag is deprecated. Use \-\-node\-locations=PRIMARY_ZONE,[ZONE,...]
instead.

.TP 2m
\fB\-\-node\-locations\fR=\fIZONE\fR,[\fIZONE\fR,...]

The set of zones in which the specified node footprint should be replicated. All
zones must be in the same region as the cluster's master(s), specified by the
\f5\-\-location\fR, \f5\-\-zone\fR, or \f5\-\-region\fR flag. Additionally, for
zonal clusters, \f5\-\-node\-locations\fR must contain the cluster's primary
zone. If not specified, all nodes will be in the cluster's primary zone (for
zonal clusters) or spread across three randomly chosen zones within the
cluster's region (for regional clusters).

Note that \f5NUM_NODES\fR nodes will be created in each zone, such that if you
specify \f5\-\-num\-nodes=4\fR and choose two locations, 8 nodes will be
created.

Multiple locations can be specified, separated by commas. For example:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-location us\-central1\-a \e
    \-\-node\-locations us\-central1\-a,us\-central1\-b
.RE

.RE
.sp
.TP 2m

Control Plane Keys


.RS 2m
.TP 2m
\fB\-\-aggregation\-ca\fR=\fICA_POOL_PATH\fR

The Certificate Authority Service caPool that will back the aggregation CA

.TP 2m
\fB\-\-cluster\-ca\fR=\fICA_POOL_PATH\fR

The Certificate Authority Service caPool that will back the cluster CA

.TP 2m
\fB\-\-control\-plane\-disk\-encryption\-key\fR=\fIKEY\fR

The Cloud KMS symmetric encryption cryptoKey that will be used to encrypt the
control plane disks

.TP 2m
\fB\-\-etcd\-api\-ca\fR=\fICA_POOL_PATH\fR

The Certificate Authority Service caPool that will back the etcd API CA

.TP 2m
\fB\-\-etcd\-peer\-ca\fR=\fICA_POOL_PATH\fR

The Certificate Authority Service caPool that will back the etcd peer CA

.TP 2m
\fB\-\-gkeops\-etcd\-backup\-encryption\-key\fR=\fIKEY\fR

The Cloud KMS symmetric encryption cryptoKey that will be used to encrypt the
disaster recovery etcd backups for the cluster

.TP 2m
\fB\-\-service\-account\-signing\-keys\fR=\fIKEY_VERSION\fR,[\fIKEY_VERSION\fR,...]

A Cloud KMS asymmetric signing cryptoKeyVersion that will be used to sign
service account tokens

.TP 2m
\fB\-\-service\-account\-verification\-keys\fR=\fIKEY_VERSION\fR,[\fIKEY_VERSION\fR,...]

A Cloud KMS asymmetric signing cryptoKeyVersion that will be used to verify
service account tokens. May be specified multiple times.

.RE
.sp
.TP 2m

Flags for Binary Authorization:


.RS 2m
.TP 2m
\fB\-\-binauthz\-policy\-bindings\fR=[\fIname\fR=\fIBINAUTHZ_POLICY\fR,\fIenforcement\-mode\fR=\fIENFORCEMENT_MODE\fR,...]

Binds a Binary Authorization policy to the cluster.
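
For example, to bind a policy in audit mode (the project number and policy ID
shown are illustrative):

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-binauthz\-policy\-bindings=name=projects/1234/platforms/gke/policies/my\-policy,enforcement\-mode=audit
.RE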

.RS 2m
.TP 2m
\fBname\fR
(Required) The relative resource name of the Binary Authorization policy to
audit and/or enforce. GKE policies have the following format:
\f5projects/{project_number}/platforms/gke/policies/{policy_id}\fR.

.TP 2m
\fBenforcement\-mode\fR
(Optional) The mode of enforcement for the policy. Must be one of: \fBaudit\fR,
\fBaudit\-and\-enforce\fR, \fBaudit\-and\-dryrun\fR. Defaults to \fBaudit\fR, if
unset.

.RE
.sp
.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-binauthz\-evaluation\-mode\fR=\fIBINAUTHZ_EVALUATION_MODE\fR

Enable Binary Authorization for this cluster. \fIBINAUTHZ_EVALUATION_MODE\fR
must be one of: \fBdisabled\fR, \fBpolicy\-bindings\fR,
\fBpolicy\-bindings\-and\-project\-singleton\-policy\-enforce\fR,
\fBproject\-singleton\-policy\-enforce\fR.

.TP 2m
\fB\-\-enable\-binauthz\fR

(DEPRECATED) Enable Binary Authorization for this cluster.

The \f5\-\-enable\-binauthz\fR flag is deprecated. Please use
\f5\-\-binauthz\-evaluation\-mode\fR instead.

.RE
.RE
.sp
.TP 2m

Configure boot disk options.


.RS 2m
.TP 2m
\fB\-\-boot\-disk\-provisioned\-iops\fR=\fIBOOT_DISK_PROVISIONED_IOPS\fR

Configure the Provisioned IOPS for the node pool boot disks. Only valid for
hyperdisk\-balanced boot disks.

.TP 2m
\fB\-\-boot\-disk\-provisioned\-throughput\fR=\fIBOOT_DISK_PROVISIONED_THROUGHPUT\fR

Configure the Provisioned Throughput for the node pool boot disks. Only valid
for hyperdisk\-balanced boot disks.

.RE
.sp
.TP 2m

ClusterDNS


.RS 2m
.TP 2m
\fB\-\-cluster\-dns\fR=\fICLUSTER_DNS\fR

DNS provider to use for this cluster. \fICLUSTER_DNS\fR must be one of:

.RS 2m
.TP 2m
\fBclouddns\fR
Selects Cloud DNS as the DNS provider for the cluster.
.TP 2m
\fBdefault\fR
Selects the default DNS provider (kube\-dns) for the cluster.
.TP 2m
\fBkubedns\fR
Selects Kube DNS as the DNS provider for the cluster.
.RE
.sp


.TP 2m
\fB\-\-cluster\-dns\-domain\fR=\fICLUSTER_DNS_DOMAIN\fR

DNS domain for this cluster. The default value is \f5cluster.local\fR. This is
configurable when \f5\-\-cluster\-dns=clouddns\fR and
\f5\-\-cluster\-dns\-scope=vpc\fR are set. The value must be a valid DNS
subdomain as defined in RFC 1123.
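
For example:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-cluster\-dns=clouddns \-\-cluster\-dns\-scope=vpc \e
    \-\-cluster\-dns\-domain=custom.example
.RE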

.TP 2m
\fB\-\-cluster\-dns\-scope\fR=\fICLUSTER_DNS_SCOPE\fR

DNS scope for the Cloud DNS zone created \- valid only with
\f5\-\-cluster\-dns=clouddns\fR. Defaults to cluster.

\fICLUSTER_DNS_SCOPE\fR must be one of:

.RS 2m
.TP 2m
\fBcluster\fR
Configures the Cloud DNS zone to be private to the cluster.
.TP 2m
\fBvpc\fR
Configures the Cloud DNS zone to be private to the VPC Network.
.RE
.sp


.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-additive\-vpc\-scope\-dns\-domain\fR=\fIADDITIVE_VPC_SCOPE_DNS_DOMAIN\fR

The domain used in Additive VPC scope. Only works with Cluster Scope.

.TP 2m
\fB\-\-disable\-additive\-vpc\-scope\fR

Disables Additive VPC Scope.

.RE
.RE
.sp
.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-dataplane\-v2\-observability\-mode\fR=\fIDATAPLANE_V2_OBSERVABILITY_MODE\fR

(REMOVED) Select Advanced Datapath Observability mode for the cluster. Defaults
to \f5DISABLED\fR.

Advanced Datapath Observability allows for a real\-time view into pod\-to\-pod
traffic within your cluster.

Examples:

.RS 2m
$ gcloud alpha container clusters create \e
    \-\-dataplane\-v2\-observability\-mode=DISABLED
.RE

.RS 2m
$ gcloud alpha container clusters create \e
    \-\-dataplane\-v2\-observability\-mode=INTERNAL_VPC_LB
.RE

.RS 2m
$ gcloud alpha container clusters create \e
    \-\-dataplane\-v2\-observability\-mode=EXTERNAL_LB
.RE

Flag \-\-dataplane\-v2\-observability\-mode has been removed.

\fIDATAPLANE_V2_OBSERVABILITY_MODE\fR must be one of:

.RS 2m
.TP 2m
\fBDISABLED\fR
Disables Advanced Datapath Observability.
.TP 2m
\fBEXTERNAL_LB\fR
Makes Advanced Datapath Observability available to the external network.
.TP 2m
\fBINTERNAL_VPC_LB\fR
Makes Advanced Datapath Observability available from the VPC network.
.RE
.sp


.TP 2m
\fB\-\-disable\-dataplane\-v2\-flow\-observability\fR

Disables Advanced Datapath Observability.

.TP 2m
\fB\-\-enable\-dataplane\-v2\-flow\-observability\fR

Enables Advanced Datapath Observability which allows for a real\-time view into
pod\-to\-pod traffic within your cluster.

.RE
.sp
.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-disable\-dataplane\-v2\-metrics\fR

Stops exposing advanced datapath flow metrics on node port.

.TP 2m
\fB\-\-enable\-dataplane\-v2\-metrics\fR

Exposes advanced datapath flow metrics on node port.

.RE
.sp
.TP 2m

Node autoprovisioning


.RS 2m
.TP 2m
\fB\-\-enable\-autoprovisioning\fR

Enables node autoprovisioning for a cluster.

Cluster Autoscaler will be able to create new node pools. Requires maximum CPU
and memory limits to be specified.
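
For example (the limits shown are illustrative):

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-enable\-autoprovisioning \-\-max\-cpu=32 \-\-max\-memory=128
.RE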

.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-autoprovisioning\-config\-file\fR=\fIPATH_TO_FILE\fR

Path of the JSON/YAML file which contains information about the cluster's node
autoprovisioning configuration. Currently it contains a list of resource limits,
identity defaults for autoprovisioning, node upgrade settings, node management
settings, minimum cpu platform, image type, node locations for autoprovisioning,
disk type and size configuration, Shielded instance settings, and
customer\-managed encryption keys settings.

Resource limits are specified in the field 'resourceLimits'. Each resource
limits definition contains three fields: resourceType, maximum and minimum.
Resource type can be "cpu", "memory" or an accelerator (e.g. "nvidia\-tesla\-t4"
for NVIDIA T4). Use gcloud compute accelerator\-types list to learn about
available accelerator types. Maximum is the maximum allowed amount with the unit
of the resource. Minimum is the minimum allowed amount with the unit of the
resource.

Identity defaults contain at most one of the following fields: serviceAccount: The
Google Cloud Platform Service Account to be used by node VMs in autoprovisioned
node pools. If not specified, the project's default service account is used.
scopes: A list of scopes to be used by node instances in autoprovisioned node
pools. Multiple scopes can be specified, separated by commas. For information on
defaults, look at:
https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#\-\-scopes

Node Upgrade settings are specified under the field 'upgradeSettings', which has
the following fields: maxSurgeUpgrade: Number of extra (surge) nodes to be
created on each upgrade of an autoprovisioned node pool. maxUnavailableUpgrade:
Number of nodes that can be unavailable at the same time on each upgrade of an
autoprovisioned node pool.

Node Management settings are specified under the field 'management', which has
the following fields: autoUpgrade: A boolean field that indicates if node
autoupgrade is enabled for autoprovisioned node pools. autoRepair: A boolean
field that indicates if node autorepair is enabled for autoprovisioned node
pools.

minCpuPlatform (deprecated): If specified, new autoprovisioned nodes will be
scheduled on host with specified CPU architecture or a newer one. Note: Min CPU
platform can only be specified in Beta and Alpha.

Autoprovisioned node image is specified under the 'imageType' field. If not
specified the default value will be applied.

Autoprovisioning locations is a set of zones where new node pools can be created
by Autoprovisioning. Autoprovisioning locations are specified in the field
\'autoprovisioningLocations'. All zones must be in the same region as the
cluster's master(s).

Disk type and size are specified under the 'diskType' and 'diskSizeGb' fields,
respectively. If specified, new autoprovisioned nodes will be created with
custom boot disks configured by these settings.

Shielded instance settings are specified under the 'shieldedInstanceConfig'
field, which has the following fields: enableSecureBoot: A boolean field that
indicates if secure boot is enabled for autoprovisioned nodes.
enableIntegrityMonitoring: A boolean field that indicates if integrity
monitoring is enabled for autoprovisioned nodes.

Customer Managed Encryption Keys (CMEK) used by new auto\-provisioned node pools
can be specified in the 'bootDiskKmsKey' field.

Use a full or relative path to a local file containing the value of
autoprovisioning_config_file.
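
A minimal configuration file combining the fields described above might look as
follows (the values are illustrative):

.RS 2m
resourceLimits:
\- resourceType: cpu
  minimum: 4
  maximum: 64
\- resourceType: memory
  maximum: 256
serviceAccount: default
upgradeSettings:
  maxSurgeUpgrade: 1
  maxUnavailableUpgrade: 0
management:
  autoUpgrade: true
  autoRepair: true
shieldedInstanceConfig:
  enableSecureBoot: true
  enableIntegrityMonitoring: true
.RE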

.TP 2m

Flags to configure autoprovisioned nodes


.RS 2m
.TP 2m
\fB\-\-autoprovisioning\-image\-type\fR=\fIAUTOPROVISIONING_IMAGE_TYPE\fR

Node Autoprovisioning will create new nodes with the specified image type

.TP 2m
\fB\-\-autoprovisioning\-locations\fR=\fIZONE\fR,[\fIZONE\fR,...]

Set of zones where new node pools can be created by autoprovisioning. All zones
must be in the same region as the cluster's master(s). Multiple locations can be
specified, separated by commas.

.TP 2m
\fB\-\-autoprovisioning\-min\-cpu\-platform\fR=\fIPLATFORM\fR

(DEPRECATED) If specified, new autoprovisioned nodes will be scheduled on host
with specified CPU architecture or a newer one.

The \f5\-\-autoprovisioning\-min\-cpu\-platform\fR flag is deprecated and will
be removed in an upcoming release. More info:
https://cloud.google.com/kubernetes\-engine/docs/release\-notes#March_08_2022

.TP 2m
\fB\-\-max\-cpu\fR=\fIMAX_CPU\fR

Maximum number of cores in the cluster.

Maximum number of cores to which the cluster can scale.

.TP 2m
\fB\-\-max\-memory\fR=\fIMAX_MEMORY\fR

Maximum memory in the cluster.

Maximum number of gigabytes of memory to which the cluster can scale.

.TP 2m
\fB\-\-min\-cpu\fR=\fIMIN_CPU\fR

Minimum number of cores in the cluster.

Minimum number of cores to which the cluster can scale.

.TP 2m
\fB\-\-min\-memory\fR=\fIMIN_MEMORY\fR

Minimum memory in the cluster.

Minimum number of gigabytes of memory to which the cluster can scale.
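
For example, to let autoprovisioning scale the cluster between 1 and 32 cores
and between 4 and 128 GB of memory (values are illustrative):

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-enable\-autoprovisioning \-\-min\-cpu=1 \-\-max\-cpu=32 \e
    \-\-min\-memory=4 \-\-max\-memory=128
.RE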

.TP 2m

Flags to specify upgrade settings for autoprovisioned nodes:


.RS 2m
.TP 2m
\fB\-\-autoprovisioning\-max\-surge\-upgrade\fR=\fIAUTOPROVISIONING_MAX_SURGE_UPGRADE\fR

Number of extra (surge) nodes to be created on each upgrade of an
autoprovisioned node pool.

.TP 2m
\fB\-\-autoprovisioning\-max\-unavailable\-upgrade\fR=\fIAUTOPROVISIONING_MAX_UNAVAILABLE_UPGRADE\fR

Number of nodes that can be unavailable at the same time on each upgrade of an
autoprovisioned node pool.
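
For example, to upgrade autoprovisioned node pools one surge node at a time
with no unavailable nodes (illustrative values; autoprovisioning resource
limits such as \-\-max\-cpu and \-\-max\-memory are typically also required):

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-enable\-autoprovisioning \-\-max\-cpu=32 \-\-max\-memory=128 \e
    \-\-enable\-autoprovisioning\-surge\-upgrade \e
    \-\-autoprovisioning\-max\-surge\-upgrade=1 \e
    \-\-autoprovisioning\-max\-unavailable\-upgrade=0
.RE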

.TP 2m
\fB\-\-autoprovisioning\-node\-pool\-soak\-duration\fR=\fIAUTOPROVISIONING_NODE_POOL_SOAK_DURATION\fR

Time in seconds to be spent waiting during blue\-green upgrade before deleting
the blue pool and completing the update. This argument should be used in
conjunction with \f5\-\-enable\-autoprovisioning\-blue\-green\-upgrade\fR to
take effect.

.TP 2m
\fB\-\-autoprovisioning\-standard\-rollout\-policy\fR=[\fIbatch\-node\-count\fR=\fIBATCH_NODE_COUNT\fR,\fIbatch\-percent\fR=\fIBATCH_NODE_PERCENTAGE\fR,\fIbatch\-soak\-duration\fR=\fIBATCH_SOAK_DURATION\fR,...]

Standard rollout policy options for blue\-green upgrade. This argument should be
used in conjunction with
\f5\-\-enable\-autoprovisioning\-blue\-green\-upgrade\fR to take effect.

Batch sizes are specified by one of batch\-node\-count or batch\-percent. The
duration between batches is specified by batch\-soak\-duration.

Examples:
\f5\-\-autoprovisioning\-standard\-rollout\-policy=batch\-node\-count=3,batch\-soak\-duration=60s\fR
\f5\-\-autoprovisioning\-standard\-rollout\-policy=batch\-percent=0.05,batch\-soak\-duration=180s\fR

.TP 2m

Flag group to choose the top level upgrade option:

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-enable\-autoprovisioning\-blue\-green\-upgrade\fR

Whether to use blue\-green upgrade for the autoprovisioned node pool.

.TP 2m
\fB\-\-enable\-autoprovisioning\-surge\-upgrade\fR

Whether to use surge upgrade for the autoprovisioned node pool.

.RE
.RE
.sp
.TP 2m

Flags to specify identity for autoprovisioned nodes:


.RS 2m
.TP 2m
\fB\-\-autoprovisioning\-scopes\fR=[\fISCOPE\fR,...]

The scopes to be used by node instances in autoprovisioned node pools. Multiple
scopes can be specified, separated by commas. For information on defaults, look
at:
https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#\-\-scopes

.TP 2m
\fB\-\-autoprovisioning\-service\-account\fR=\fIAUTOPROVISIONING_SERVICE_ACCOUNT\fR

The Google Cloud Platform Service Account to be used by node VMs in
autoprovisioned node pools. If not specified, the project default service
account is used.

.RE
.sp
.TP 2m

Flags to specify node management settings for autoprovisioned nodes:


.RS 2m
.TP 2m
\fB\-\-enable\-autoprovisioning\-autorepair\fR

Enable node autorepair for autoprovisioned node pools. Use
\-\-no\-enable\-autoprovisioning\-autorepair to disable.

This flag argument must be specified if any of the other arguments in this group
are specified.

.TP 2m
\fB\-\-enable\-autoprovisioning\-autoupgrade\fR

Enable node autoupgrade for autoprovisioned node pools. Use
\-\-no\-enable\-autoprovisioning\-autoupgrade to disable.

This flag argument must be specified if any of the other arguments in this group
are specified.

.RE
.sp
.TP 2m

Arguments to set limits on accelerators:


.RS 2m
.TP 2m
\fB\-\-max\-accelerator\fR=[\fItype\fR=\fITYPE\fR,\fIcount\fR=\fICOUNT\fR,...]

Sets the maximum limit for a single type of accelerator (e.g. GPUs) in the
cluster.

.RS 2m
.TP 2m
\fBtype\fR
(Required) The specific type (e.g. nvidia\-tesla\-t4 for NVIDIA T4) of
accelerator for which the limit is set. Use \f5gcloud compute accelerator\-types
list\fR to learn about all available accelerator types.

.TP 2m
\fBcount\fR
(Required) The maximum number of accelerators to which the cluster can be
scaled.

This flag argument must be specified if any of the other arguments in this group
are specified.
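
For example, to allow the cluster to scale up to four NVIDIA T4 GPUs
(illustrative; combine with \-\-enable\-autoprovisioning and the resource limit
flags above):

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-enable\-autoprovisioning \-\-max\-cpu=32 \-\-max\-memory=128 \e
    \-\-max\-accelerator=type=nvidia\-tesla\-t4,count=4
.RE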

.RE
.sp
.TP 2m
\fB\-\-min\-accelerator\fR=[\fItype\fR=\fITYPE\fR,\fIcount\fR=\fICOUNT\fR,...]

Sets the minimum limit for a single type of accelerator (e.g. GPUs) in the
cluster. Defaults to 0 for all accelerator types if not set.

.RS 2m
.TP 2m
\fBtype\fR
(Required) The specific type (e.g. nvidia\-tesla\-t4 for NVIDIA T4) of
accelerator for which the limit is set. Use \f5gcloud compute accelerator\-types
list\fR to learn about all available accelerator types.

.TP 2m
\fBcount\fR
(Required) The minimum number of accelerators to which the cluster can be
scaled.

.RE
.RE
.RE
.RE
.RE
.sp
.TP 2m

Cluster autoscaling


.RS 2m
.TP 2m
\fB\-\-enable\-autoscaling\fR

Enables autoscaling for a node pool.

Enables autoscaling in the node pool specified by \-\-node\-pool or the default
node pool if \-\-node\-pool is not provided. If not already set, \-\-max\-nodes
or \-\-total\-max\-nodes must also be specified.

.TP 2m
\fB\-\-location\-policy\fR=\fILOCATION_POLICY\fR

Location policy specifies the algorithm used when scaling up the node pool.

.RS 2m
.IP "\(bu" 2m
\f5BALANCED\fR \- Is a best effort policy that aims to balance the sizes of
available zones.
.IP "\(bu" 2m
\f5ANY\fR \- Instructs the cluster autoscaler to prioritize utilization of
unused reservations, and reduces preemption risk for Spot VMs.
.RE
.sp

\fILOCATION_POLICY\fR must be one of: \fBBALANCED\fR, \fBANY\fR.

.TP 2m
\fB\-\-max\-nodes\fR=\fIMAX_NODES\fR

Maximum number of nodes per zone in the node pool.

Maximum number of nodes per zone to which the node pool specified by
\-\-node\-pool (or default node pool if unspecified) can scale. Ignored unless
\-\-enable\-autoscaling is also specified.

.TP 2m
\fB\-\-min\-nodes\fR=\fIMIN_NODES\fR

Minimum number of nodes per zone in the node pool.

Minimum number of nodes per zone to which the node pool specified by
\-\-node\-pool (or default node pool if unspecified) can scale. Ignored unless
\-\-enable\-autoscaling is also specified.

.TP 2m
\fB\-\-total\-max\-nodes\fR=\fITOTAL_MAX_NODES\fR

Maximum number of all nodes in the node pool.

Maximum number of all nodes to which the node pool specified by \-\-node\-pool
(or default node pool if unspecified) can scale. Ignored unless
\-\-enable\-autoscaling is also specified.

.TP 2m
\fB\-\-total\-min\-nodes\fR=\fITOTAL_MIN_NODES\fR

Minimum number of all nodes in the node pool.

Minimum number of all nodes to which the node pool specified by \-\-node\-pool
(or default node pool if unspecified) can scale. Ignored unless
\-\-enable\-autoscaling is also specified.
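
For example, to enable autoscaling of the default node pool between one and
five nodes per zone (illustrative values):

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-enable\-autoscaling \-\-min\-nodes=1 \-\-max\-nodes=5
.RE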

.RE
.sp
.TP 2m
\fB\-\-enable\-insecure\-binding\-system\-authenticated\fR

Allow using \f5system:authenticated\fR as a subject in ClusterRoleBindings and
RoleBindings. Allowing bindings that reference \f5system:authenticated\fR is a
security risk and is not recommended.

To disallow binding \f5system:authenticated\fR in a cluster, explicitly set the
\f5\-\-no\-enable\-insecure\-binding\-system\-authenticated\fR flag instead.

.TP 2m
\fB\-\-enable\-insecure\-binding\-system\-unauthenticated\fR

Allow using \f5system:unauthenticated\fR and \f5system:anonymous\fR as subjects
in ClusterRoleBindings and RoleBindings. Allowing bindings that reference
\f5system:unauthenticated\fR and \f5system:anonymous\fR is a security risk and
is not recommended.

To disallow binding \f5system:unauthenticated\fR and \f5system:anonymous\fR in
a cluster, explicitly set the
\f5\-\-no\-enable\-insecure\-binding\-system\-unauthenticated\fR flag instead.

.TP 2m

Master Authorized Networks


.RS 2m
.TP 2m
\fB\-\-enable\-master\-authorized\-networks\fR

Allow only specified set of CIDR blocks (specified by the
\f5\-\-master\-authorized\-networks\fR flag) to connect to Kubernetes master
through HTTPS. Besides these blocks, the following have access as well:

.RS 2m
1) The private network the cluster connects to if
`\-\-enable\-private\-nodes` is specified.
2) Google Compute Engine Public IPs if `\-\-enable\-private\-nodes` is not
specified.
.RE

Use \f5\-\-no\-enable\-master\-authorized\-networks\fR to disable. When
disabled, public internet (0.0.0.0/0) is allowed to connect to Kubernetes master
through HTTPS.

.TP 2m
\fB\-\-master\-authorized\-networks\fR=\fINETWORK\fR,[\fINETWORK\fR,...]

The list of CIDR blocks (up to 100 for private cluster, 50 for public cluster)
that are allowed to connect to Kubernetes master through HTTPS. Specified in
CIDR notation (e.g. 1.2.3.4/30). Cannot be specified unless
\f5\-\-enable\-master\-authorized\-networks\fR is also specified.
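
For example, to restrict master access to two illustrative CIDR blocks:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-enable\-master\-authorized\-networks \e
    \-\-master\-authorized\-networks=10.0.0.0/8,203.0.113.0/28
.RE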

.RE
.sp
.TP 2m

Exports cluster's usage of cloud resources


.RS 2m
.TP 2m
\fB\-\-enable\-network\-egress\-metering\fR

Enable network egress metering on this cluster.

When enabled, a DaemonSet is deployed into the cluster. Each DaemonSet pod
meters network egress traffic by collecting data from the conntrack table, and
exports the metered metrics to the specified destination.

Network egress metering is disabled if this flag is omitted, or when
\f5\-\-no\-enable\-network\-egress\-metering\fR is set.

.TP 2m
\fB\-\-enable\-resource\-consumption\-metering\fR

Enable resource consumption metering on this cluster.

When enabled, a table will be created in the specified BigQuery dataset to store
resource consumption data. The resulting table can be joined with the resource
usage table or with BigQuery billing export.

Resource consumption metering is enabled unless
\f5\-\-no\-enable\-resource\-consumption\-metering\fR is set.

.TP 2m
\fB\-\-resource\-usage\-bigquery\-dataset\fR=\fIRESOURCE_USAGE_BIGQUERY_DATASET\fR

The name of the BigQuery dataset to which the cluster's usage of cloud resources
is exported. A table will be created in the specified dataset to store cluster
resource usage. The resulting table can be joined with BigQuery Billing Export
to produce a fine\-grained cost breakdown.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-resource\-usage\-bigquery\-dataset=example_bigquery_dataset_name
.RE

.RE
.sp
.TP 2m

Private Clusters


.RS 2m
.TP 2m
\fB\-\-enable\-private\-endpoint\fR

Cluster is managed using the private IP address of the master API endpoint.

.TP 2m
\fB\-\-enable\-private\-nodes\fR

Cluster is created with no public IP addresses on the cluster nodes.

.TP 2m
\fB\-\-master\-ipv4\-cidr\fR=\fIMASTER_IPV4_CIDR\fR

IPv4 CIDR range to use for the master network. This should have a netmask of
size /28 and should be used in conjunction with the \-\-enable\-private\-nodes
flag.
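
For example, to create a private cluster with an illustrative /28 master range:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-enable\-private\-nodes \e
    \-\-master\-ipv4\-cidr=172.16.0.32/28
.RE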

.TP 2m
\fB\-\-private\-cluster\fR

(DEPRECATED) Cluster is created with no public IP addresses on the cluster
nodes.

The \-\-private\-cluster flag is deprecated and will be removed in a future
release. Use \-\-enable\-private\-nodes instead.

.RE
.sp
.TP 2m

Flags for Secret Manager configuration:


.RS 2m
.TP 2m
\fB\-\-enable\-secret\-manager\fR

Enables the Secret Manager CSI driver provider component. See
https://secrets\-store\-csi\-driver.sigs.k8s.io/introduction and
https://github.com/GoogleCloudPlatform/secrets\-store\-csi\-driver\-provider\-gcp

.TP 2m
\fB\-\-enable\-secret\-manager\-rotation\fR

Enables the rotation of secrets in the Secret Manager CSI driver provider
component.

.TP 2m
\fB\-\-secret\-manager\-rotation\-interval\fR=\fISECRET_MANAGER_ROTATION_INTERVAL\fR

Set the rotation period for secrets in the Secret Manager CSI driver provider
component. If you don't specify a time interval for the rotation, it will
default to a rotation period of two minutes.
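
For example, to enable the provider with a five\-minute rotation period (the
interval value is illustrative; see $ gcloud topic datetimes for duration
formats):

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-enable\-secret\-manager \-\-enable\-secret\-manager\-rotation \e
    \-\-secret\-manager\-rotation\-interval=5m
.RE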

.RE
.sp
.TP 2m

Flags for Secret Sync configuration:


.RS 2m
.TP 2m
\fB\-\-enable\-secret\-sync\fR

Enables the Secret Sync component. See
https://cloud.google.com/secret\-manager/docs/sync\-k8\-secrets

.TP 2m
\fB\-\-enable\-secret\-sync\-rotation\fR

Enables the rotation of secrets in the Secret Sync component.

.TP 2m
\fB\-\-secret\-sync\-rotation\-interval\fR=\fISECRET_SYNC_ROTATION_INTERVAL\fR

Set the rotation period for secrets in the Secret Sync component.

.RE
.sp
.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-ephemeral\-storage\fR[=[\fIlocal\-ssd\-count\fR=\fILOCAL\-SSD\-COUNT\fR]]

Parameters for the ephemeral storage filesystem. If unspecified, ephemeral
storage is backed by the boot disk.

Examples:

.RS 2m
$ gcloud alpha container clusters create example_cluster \e
    \-\-ephemeral\-storage local\-ssd\-count=2
.RE

\'local\-ssd\-count' specifies the number of local SSDs to use to back ephemeral
storage. Local SSDs use NVMe interfaces. For first\- and second\-generation
machine types, a nonzero count field is required for local SSDs to be
configured. For third\-generation machine types, the count field is optional
because the count is inferred from the machine type.

See https://cloud.google.com/compute/docs/disks/local\-ssd for more information.

.TP 2m
\fB\-\-ephemeral\-storage\-local\-ssd\fR[=[\fIcount\fR=\fICOUNT\fR]]

Parameters for the ephemeral storage filesystem. If unspecified, ephemeral
storage is backed by the boot disk.

Examples:

.RS 2m
$ gcloud alpha container clusters create example_cluster \e
    \-\-ephemeral\-storage\-local\-ssd count=2
.RE

\'count' specifies the number of local SSDs to use to back ephemeral storage.
Local SSDs use NVMe interfaces. For first\- and second\-generation machine
types, a nonzero count field is required for local SSDs to be configured. For
third\-generation machine types, the count field is optional because the count
is inferred from the machine type.

See https://cloud.google.com/compute/docs/disks/local\-ssd for more information.

.TP 2m
\fB\-\-local\-nvme\-ssd\-block\fR[=[\fIcount\fR=\fICOUNT\fR]]

Adds the requested local SSDs on all nodes in default node pool(s) in the new
cluster.

Examples:

.RS 2m
$ gcloud alpha container clusters create example_cluster \e
    \-\-local\-nvme\-ssd\-block count=2
.RE

\'count' must be between 1 and 8.

New nodes, including ones created by resize or recreate, will have these local
SSDs.

For first\- and second\-generation machine types, a nonzero count field is
required for local SSDs to be configured. For third\-generation machine types,
the count field is optional because the count is inferred from the machine type.

See https://cloud.google.com/compute/docs/disks/local\-ssd for more information.

.TP 2m
\fB\-\-local\-ssd\-count\fR=\fILOCAL_SSD_COUNT\fR

\-\-local\-ssd\-count is the equivalent of using \-\-local\-ssd\-volumes with
type=scsi,format=fs.

The number of local SSD disks to provision on each node, formatted and mounted
in the filesystem.

Local SSDs have a fixed 375 GB capacity per device. The number of disks that can
be attached to an instance is limited by the maximum number of disks available
on a machine, which differs by compute zone. See
https://cloud.google.com/compute/docs/disks/local\-ssd for more information.
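
For example, to provision two formatted\-and\-mounted local SSDs on each node:

.RS 2m
$ gcloud alpha container clusters create example_cluster \e
    \-\-local\-ssd\-count=2
.RE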

.TP 2m
\fB\-\-local\-ssd\-volumes\fR=[[\fIcount\fR=\fICOUNT\fR],[\fItype\fR=\fITYPE\fR],[\fIformat\fR=\fIFORMAT\fR],...]

Adds the requested local SSDs on all nodes in default node pool(s) in the new
cluster.

Examples:

.RS 2m
$ gcloud alpha container clusters create example_cluster \e
    \-\-local\-ssd\-volumes count=2,type=nvme,format=fs
.RE

\'count' must be between 1 and 8.

\'type' must be either scsi or nvme.

\'format' must be either fs or block.

New nodes, including ones created by resize or recreate, will have these local
SSDs.

Local SSDs have a fixed 375 GB capacity per device. The number of disks that can
be attached to an instance is limited by the maximum number of disks available
on a machine, which differs by compute zone. See
https://cloud.google.com/compute/docs/disks/local\-ssd for more information.

.RE
.sp
.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-location\fR=\fILOCATION\fR

Compute zone or region (e.g. us\-central1\-a or us\-central1) for the cluster.
Overrides the default compute/region or compute/zone value for this command
invocation. Prefer using this flag over the \-\-region or \-\-zone flags.

.TP 2m
\fB\-\-region\fR=\fIREGION\fR

Compute region (e.g. us\-central1) for a regional cluster. Overrides the default
compute/region property value for this command invocation.

.TP 2m
\fB\-\-zone\fR=\fIZONE\fR, \fB\-z\fR \fIZONE\fR

Compute zone (e.g. us\-central1\-a) for a zonal cluster. Overrides the default
compute/zone property value for this command invocation.

.RE
.sp
.TP 2m

One of either maintenance\-window or the group of maintenance\-window flags can
be set.


At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-maintenance\-window\fR=\fISTART_TIME\fR

Set a time of day when you prefer maintenance to start on this cluster. For
example:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-maintenance\-window=12:43
.RE

The time corresponds to the UTC time zone, and must be in HH:MM format.

Non\-emergency maintenance will occur in the 4 hour block starting at the
specified time.

This is mutually exclusive with the recurring maintenance windows and will
overwrite any existing window. Compatible with maintenance exclusions.

.TP 2m

Set a flexible maintenance window by specifying a window that recurs per an RFC
5545 RRULE. Non\-emergency maintenance will occur in the recurring windows.

Examples:

For a 9\-5 Mon\-Wed UTC\-4 maintenance window:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-maintenance\-window\-start=2000\-01\-01T09:00:00\-04:00 \e
    \-\-maintenance\-window\-end=2000\-01\-01T17:00:00\-04:00 \e
    \-\-maintenance\-window\-recurrence='FREQ=WEEKLY;BYDAY=MO,TU,WE'
.RE

For a daily window from 22:00 \- 04:00 UTC:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-maintenance\-window\-start=2000\-01\-01T22:00:00Z \e
    \-\-maintenance\-window\-end=2000\-01\-02T04:00:00Z \e
    \-\-maintenance\-window\-recurrence=FREQ=DAILY
.RE



.RS 2m
.TP 2m
\fB\-\-maintenance\-window\-end\fR=\fITIME_STAMP\fR

The end time for calculating the duration of the maintenance window, as
expressed by the amount of time after the START_TIME, in the same format. The
value for END_TIME must be in the future, relative to START_TIME. This only
calculates the duration of the window, and doesn't set when the maintenance
window stops recurring. Maintenance windows only stop recurring when they're
removed. See $ gcloud topic datetimes for information on time formats.

This flag argument must be specified if any of the other arguments in this group
are specified.

.TP 2m
\fB\-\-maintenance\-window\-recurrence\fR=\fIRRULE\fR

An RFC 5545 RRULE, specifying how the window will recur. Note that minimum
requirements for maintenance periods will be enforced. Note that FREQ=SECONDLY,
MINUTELY, and HOURLY are not supported.

This flag argument must be specified if any of the other arguments in this group
are specified.

.TP 2m
\fB\-\-maintenance\-window\-start\fR=\fITIME_STAMP\fR

Start time of the first window (can occur in the past). The start time
influences when the window will start for recurrences. See $ gcloud topic
datetimes for information on time formats.

This flag argument must be specified if any of the other arguments in this group
are specified.

.RE
.RE
.sp
.TP 2m

Basic auth


.RS 2m
.TP 2m
\fB\-\-password\fR=\fIPASSWORD\fR

The password to use for cluster auth. Defaults to a server\-specified
randomly\-generated string.

.TP 2m

Options to specify the username.

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-enable\-basic\-auth\fR

Enable basic (username/password) auth for the cluster.
\f5\-\-enable\-basic\-auth\fR is an alias for \f5\-\-username=admin\fR;
\f5\-\-no\-enable\-basic\-auth\fR is an alias for \f5\-\-username=""\fR. Use
\f5\-\-password\fR to specify a password; if not, the server will randomly
generate one. For cluster versions before 1.12, if neither
\f5\-\-enable\-basic\-auth\fR nor \f5\-\-username\fR is specified,
\f5\-\-enable\-basic\-auth\fR will default to \f5true\fR. After 1.12,
\f5\-\-enable\-basic\-auth\fR will default to \f5false\fR.

.TP 2m
\fB\-\-username\fR=\fIUSERNAME\fR, \fB\-u\fR \fIUSERNAME\fR

The user name to use for basic auth for the cluster. Use \f5\-\-password\fR to
specify a password; if not, the server will randomly generate one.

.RE
.RE
.sp
.TP 2m

Specifies the reservation for the default initial node pool.


.RS 2m
.TP 2m
\fB\-\-reservation\fR=\fIRESERVATION\fR

The name of the reservation, required when
\f5\-\-reservation\-affinity=specific\fR.

.TP 2m
\fB\-\-reservation\-affinity\fR=\fIRESERVATION_AFFINITY\fR

The type of the reservation for the default initial node pool.
\fIRESERVATION_AFFINITY\fR must be one of: \fBany\fR, \fBnone\fR,
\fBspecific\fR.
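
For example, to consume capacity from a specific reservation (the reservation
name is illustrative):

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-reservation\-affinity=specific \e
    \-\-reservation=my\-reservation
.RE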

.RE
.sp
.TP 2m

Options to specify the node identity.


.RS 2m
.TP 2m

Scopes options.


.RS 2m
.TP 2m
\fB\-\-scopes\fR=[\fISCOPE\fR,...]; default="gke\-default"

Specifies scopes for the node instances.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-scopes=https://www.googleapis.com/auth/devstorage.read_only
.RE

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-scopes=bigquery,storage\-rw,compute\-ro
.RE

Multiple scopes can be specified, separated by commas. Various scopes are
automatically added based on feature usage. Such scopes are not added if an
equivalent scope already exists.

.RS 2m
.IP "\(em" 2m
\f5monitoring\-write\fR: always added to ensure metrics can be written
.IP "\(em" 2m
\f5logging\-write\fR: added if Cloud Logging is enabled
(\f5\-\-enable\-cloud\-logging\fR/\f5\-\-logging\fR)
.IP "\(em" 2m
\f5monitoring\fR: added if Cloud Monitoring is enabled
(\f5\-\-enable\-cloud\-monitoring\fR/\f5\-\-monitoring\fR)
.IP "\(em" 2m
\f5gke\-default\fR: added for Autopilot clusters that use the default service
account
.IP "\(em" 2m
\f5cloud\-platform\fR: added for Autopilot clusters that use any other service
account
.RE
.sp

SCOPE can be either the full URI of the scope or an alias. \fBDefault\fR scopes
are assigned to all instances. Available aliases are:


.TS
tab(	);
lB lB
l l.
Alias	URI
bigquery	https://www.googleapis.com/auth/bigquery
cloud-platform	https://www.googleapis.com/auth/cloud-platform
cloud-source-repos	https://www.googleapis.com/auth/source.full_control
cloud-source-repos-ro	https://www.googleapis.com/auth/source.read_only
compute-ro	https://www.googleapis.com/auth/compute.readonly
compute-rw	https://www.googleapis.com/auth/compute
datastore	https://www.googleapis.com/auth/datastore
default	https://www.googleapis.com/auth/devstorage.read_only
	https://www.googleapis.com/auth/logging.write
	https://www.googleapis.com/auth/monitoring.write
	https://www.googleapis.com/auth/pubsub
	https://www.googleapis.com/auth/service.management.readonly
	https://www.googleapis.com/auth/servicecontrol
	https://www.googleapis.com/auth/trace.append
gke-default	https://www.googleapis.com/auth/devstorage.read_only
	https://www.googleapis.com/auth/logging.write
	https://www.googleapis.com/auth/monitoring
	https://www.googleapis.com/auth/service.management.readonly
	https://www.googleapis.com/auth/servicecontrol
	https://www.googleapis.com/auth/trace.append
logging-write	https://www.googleapis.com/auth/logging.write
monitoring	https://www.googleapis.com/auth/monitoring
monitoring-read	https://www.googleapis.com/auth/monitoring.read
monitoring-write	https://www.googleapis.com/auth/monitoring.write
pubsub	https://www.googleapis.com/auth/pubsub
service-control	https://www.googleapis.com/auth/servicecontrol
service-management	https://www.googleapis.com/auth/service.management.readonly
sql (deprecated)	https://www.googleapis.com/auth/sqlservice
sql-admin	https://www.googleapis.com/auth/sqlservice.admin
storage-full	https://www.googleapis.com/auth/devstorage.full_control
storage-ro	https://www.googleapis.com/auth/devstorage.read_only
storage-rw	https://www.googleapis.com/auth/devstorage.read_write
taskqueue	https://www.googleapis.com/auth/taskqueue
trace	https://www.googleapis.com/auth/trace.append
userinfo-email	https://www.googleapis.com/auth/userinfo.email
.TE

DEPRECATION WARNING: the https://www.googleapis.com/auth/sqlservice account
scope and the \f5sql\fR alias do not provide SQL instance management
capabilities and have been deprecated. Please use
https://www.googleapis.com/auth/sqlservice.admin or the \f5sql\-admin\fR alias
to manage your Google SQL Service instances.

.RE
.sp
.TP 2m
\fB\-\-service\-account\fR=\fISERVICE_ACCOUNT\fR

The Google Cloud Platform Service Account to be used by the node VMs. If a
service account is specified, the cloud\-platform and userinfo.email scopes are
used. If no Service Account is specified, the project default service account is
used.
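
For example (the service account address is illustrative):

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-service\-account=node\-sa@my\-project.iam.gserviceaccount.com
.RE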

.RE
.sp
.TP 2m

Flags for Security Profile:


.RS 2m
.TP 2m
\fB\-\-security\-profile\fR=\fISECURITY_PROFILE\fR

Name and version of the security profile to be applied to the cluster.

Examples:

.RS 2m
$ gcloud alpha container clusters create example\-cluster \e
    \-\-security\-profile=default\-1.0\-gke.0
.RE

.TP 2m
\fB\-\-security\-profile\-runtime\-rules\fR

Apply runtime rules in the specified security profile to the cluster. When
enabled (the default), a security profile controller and webhook are deployed
on the cluster to enforce the runtime rules. If
\fB\-\-no\-security\-profile\-runtime\-rules\fR is specified to disable this
feature, only bootstrapping rules are applied, and no security profile
controller or webhook are installed.


.RE
.RE
.sp

.SH "GCLOUD WIDE FLAGS"

These flags are available to all commands: \-\-access\-token\-file, \-\-account,
\-\-billing\-project, \-\-configuration, \-\-flags\-file, \-\-flatten,
\-\-format, \-\-help, \-\-impersonate\-service\-account, \-\-log\-http,
\-\-project, \-\-quiet, \-\-trace\-token, \-\-user\-output\-enabled,
\-\-verbosity.

Run \fB$ gcloud help\fR for details.



.SH "NOTES"

This command is currently in alpha and might change without notice. If this
command fails with API permission errors despite specifying the correct project,
you might be trying to access an API with an invitation\-only early access
allowlist. These variants are also available:

.RS 2m
$ gcloud container clusters create
.RE

.RS 2m
$ gcloud beta container clusters create
.RE