File: //snap/google-cloud-cli/current/help/man/man1/gcloud_beta_dataflow_flex-template_run.1
.TH "GCLOUD_BETA_DATAFLOW_FLEX\-TEMPLATE_RUN" 1
.SH "NAME"
.HP
gcloud beta dataflow flex\-template run \- runs a job from the specified path
.SH "SYNOPSIS"
.HP
\f5gcloud beta dataflow flex\-template run\fR \fIJOB_NAME\fR \fB\-\-template\-file\-gcs\-location\fR=\fITEMPLATE_FILE_GCS_LOCATION\fR [\fB\-\-additional\-experiments\fR=[\fIADDITIONAL_EXPERIMENTS\fR,...]] [\fB\-\-additional\-pipeline\-options\fR=[\fIADDITIONAL_PIPELINE_OPTIONS\fR,...]] [\fB\-\-additional\-user\-labels\fR=[\fIADDITIONAL_USER_LABELS\fR,...]] [\fB\-\-dataflow\-kms\-key\fR=\fIDATAFLOW_KMS_KEY\fR] [\fB\-\-disable\-public\-ips\fR] [\fB\-\-enable\-streaming\-engine\fR] [\fB\-\-flexrs\-goal\fR=\fIFLEXRS_GOAL\fR] [\fB\-\-launcher\-machine\-type\fR=\fILAUNCHER_MACHINE_TYPE\fR] [\fB\-\-max\-workers\fR=\fIMAX_WORKERS\fR] [\fB\-\-network\fR=\fINETWORK\fR] [\fB\-\-num\-workers\fR=\fINUM_WORKERS\fR] [\fB\-\-parameters\fR=[\fIPARAMETERS\fR,...]] [\fB\-\-region\fR=\fIREGION_ID\fR] [\fB\-\-service\-account\-email\fR=\fISERVICE_ACCOUNT_EMAIL\fR] [\fB\-\-staging\-location\fR=\fISTAGING_LOCATION\fR] [\fB\-\-subnetwork\fR=\fISUBNETWORK\fR] [\fB\-\-temp\-location\fR=\fITEMP_LOCATION\fR] [\fB\-\-worker\-machine\-type\fR=\fIWORKER_MACHINE_TYPE\fR] [[\fB\-\-[no\-]update\fR\ :\ \fB\-\-transform\-name\-mappings\fR=[\fITRANSFORM_NAME_MAPPINGS\fR,...]]] [\fB\-\-worker\-region\fR=\fIWORKER_REGION\fR\ |\ \fB\-\-worker\-zone\fR=\fIWORKER_ZONE\fR] [\fIGCLOUD_WIDE_FLAG\ ...\fR]
.SH "DESCRIPTION"
\fB(BETA)\fR Runs a job from the specified flex template GCS path.
.SH "EXAMPLES"
To run a job from a flex template, run:
.RS 2m
$ gcloud beta dataflow flex\-template run my\-job \e
\-\-template\-file\-gcs\-location=gs://flex\-template\-path \e
\-\-region=europe\-west1 \e
\-\-parameters=input="gs://input",output="gs://output\-path" \e
\-\-max\-workers=5
.RE
.SH "POSITIONAL ARGUMENTS"
.RS 2m
.TP 2m
\fIJOB_NAME\fR
Unique name to assign to the job.
.RE
.sp
.SH "REQUIRED FLAGS"
.RS 2m
.TP 2m
\fB\-\-template\-file\-gcs\-location\fR=\fITEMPLATE_FILE_GCS_LOCATION\fR
Google Cloud Storage location of the flex template to run. (Must be a URL
beginning with 'gs://'.)
.RE
.sp
.SH "OPTIONAL FLAGS"
.RS 2m
.TP 2m
\fB\-\-additional\-experiments\fR=[\fIADDITIONAL_EXPERIMENTS\fR,...]
Additional experiments to pass to the job. Example:
\-\-additional\-experiments=experiment1,experiment2=value2
.TP 2m
\fB\-\-additional\-pipeline\-options\fR=[\fIADDITIONAL_PIPELINE_OPTIONS\fR,...]
Additional pipeline options to pass to the job. Example:
\-\-additional\-pipeline\-options=option1=value1,option2=value2
.TP 2m
\fB\-\-additional\-user\-labels\fR=[\fIADDITIONAL_USER_LABELS\fR,...]
Additional user labels to pass to the job. Example:
\-\-additional\-user\-labels='key1=value1,key2=value2'
.TP 2m
\fB\-\-dataflow\-kms\-key\fR=\fIDATAFLOW_KMS_KEY\fR
Cloud KMS key to protect the job resources.
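For example, a fully qualified Cloud KMS key resource name (the project, key
ring, and key names below are placeholders) looks like:
.RS 2m
\-\-dataflow\-kms\-key=projects/my\-project/locations/us\-central1/keyRings/my\-ring/cryptoKeys/my\-key
.RE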
.TP 2m
\fB\-\-disable\-public\-ips\fR
Cloud Dataflow workers must not use public IP addresses. Overrides the default
\fBdataflow/disable_public_ips\fR property value for this command invocation.
.TP 2m
\fB\-\-enable\-streaming\-engine\fR
Enable Streaming Engine for the streaming job. Overrides the default
\fBdataflow/enable_streaming_engine\fR property value for this command
invocation.
.TP 2m
\fB\-\-flexrs\-goal\fR=\fIFLEXRS_GOAL\fR
FlexRS goal for the flex template job. \fIFLEXRS_GOAL\fR must be one of:
\fBCOST_OPTIMIZED\fR, \fBSPEED_OPTIMIZED\fR.
.TP 2m
\fB\-\-launcher\-machine\-type\fR=\fILAUNCHER_MACHINE_TYPE\fR
The machine type to use for launching the job. The default is n1\-standard\-1.
.TP 2m
\fB\-\-max\-workers\fR=\fIMAX_WORKERS\fR
Maximum number of workers to run.
.TP 2m
\fB\-\-network\fR=\fINETWORK\fR
Compute Engine network for launching instances to run your pipeline.
.TP 2m
\fB\-\-num\-workers\fR=\fINUM_WORKERS\fR
Initial number of workers to use.
.TP 2m
\fB\-\-parameters\fR=[\fIPARAMETERS\fR,...]
Parameters to pass to the job.
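Parameters are given as a comma\-separated list of key=value pairs, where the
keys must match the parameters declared by the template. For example (the
parameter names and bucket paths below are illustrative):
.RS 2m
\-\-parameters=inputFile=gs://my\-bucket/input.txt,outputPath=gs://my\-bucket/output
.RE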
.TP 2m
\fB\-\-region\fR=\fIREGION_ID\fR
Region ID of the job's regional endpoint. Defaults to 'us\-central1'.
.TP 2m
\fB\-\-service\-account\-email\fR=\fISERVICE_ACCOUNT_EMAIL\fR
Service account to run the workers as.
.TP 2m
\fB\-\-staging\-location\fR=\fISTAGING_LOCATION\fR
Default Google Cloud Storage location to stage local files. (Must be a URL
beginning with 'gs://'.)
.TP 2m
\fB\-\-subnetwork\fR=\fISUBNETWORK\fR
Compute Engine subnetwork for launching instances to run your pipeline.
.TP 2m
\fB\-\-temp\-location\fR=\fITEMP_LOCATION\fR
Default Google Cloud Storage location to stage temporary files. If not set,
defaults to the value for \-\-staging\-location. (Must be a URL beginning
with 'gs://'.)
.TP 2m
\fB\-\-worker\-machine\-type\fR=\fIWORKER_MACHINE_TYPE\fR
Type of machine to use for workers. Defaults to server\-specified.
.TP 2m
\fB\-\-[no\-]update\fR
Set this to true for streaming update jobs. Use \fB\-\-update\fR to enable and
\fB\-\-no\-update\fR to disable.
.TP 2m
\fB\-\-transform\-name\-mappings\fR=[\fITRANSFORM_NAME_MAPPINGS\fR,...]
Transform name mappings for the streaming update job.
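Mappings are given as a comma\-separated list of oldName=newName pairs. For
example, to rename a transform while updating a streaming job (the transform
names below are illustrative):
.RS 2m
\-\-update \-\-transform\-name\-mappings=oldTransformName=newTransformName
.RE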
.TP 2m
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-worker\-region\fR=\fIWORKER_REGION\fR
Region to run the workers in.
.TP 2m
\fB\-\-worker\-zone\fR=\fIWORKER_ZONE\fR
Zone to run the workers in.
.RE
.RE
.sp
.SH "GCLOUD WIDE FLAGS"
These flags are available to all commands: \-\-access\-token\-file, \-\-account,
\-\-billing\-project, \-\-configuration, \-\-flags\-file, \-\-flatten,
\-\-format, \-\-help, \-\-impersonate\-service\-account, \-\-log\-http,
\-\-project, \-\-quiet, \-\-trace\-token, \-\-user\-output\-enabled,
\-\-verbosity.
Run \fB$ gcloud help\fR for details.
.SH "NOTES"
This command is currently in beta and might change without notice. This variant
is also available:
.RS 2m
$ gcloud dataflow flex\-template run
.RE