File: //snap/google-cloud-cli/394/help/man/man1/gcloud_alpha_dataplex_tasks_update.1
.TH "GCLOUD_ALPHA_DATAPLEX_TASKS_UPDATE" 1
.SH "NAME"
.HP
gcloud alpha dataplex tasks update \- update a Dataplex task resource
.SH "SYNOPSIS"
.HP
\f5gcloud alpha dataplex tasks update\fR (\fITASK\fR\ :\ \fB\-\-lake\fR=\fILAKE\fR\ \fB\-\-location\fR=\fILOCATION\fR) [\fB\-\-async\fR] [\fB\-\-description\fR=\fIDESCRIPTION\fR] [\fB\-\-display\-name\fR=\fIDISPLAY_NAME\fR] [\fB\-\-update\-labels\fR=[\fIKEY\fR=\fIVALUE\fR,...]] [\fB\-\-clear\-labels\fR\ |\ \fB\-\-remove\-labels\fR=[\fIKEY\fR,...]] [\fB\-\-execution\-args\fR=[\fIKEY\fR=\fIVALUE\fR,...]\ \fB\-\-execution\-project\fR=\fIEXECUTION_PROJECT\fR\ \fB\-\-execution\-service\-account\fR=\fIEXECUTION_SERVICE_ACCOUNT\fR\ \fB\-\-kms\-key\fR=\fIKMS_KEY\fR\ \fB\-\-max\-job\-execution\-lifetime\fR=\fIMAX_JOB_EXECUTION_LIFETIME\fR] [\fB\-\-notebook\fR=\fINOTEBOOK\fR\ \fB\-\-notebook\-archive\-uris\fR=[\fINOTEBOOK_ARCHIVE_URIS\fR,...]\ \fB\-\-notebook\-file\-uris\fR=[\fINOTEBOOK_FILE_URIS\fR,...]\ \fB\-\-notebook\-batch\-executors\-count\fR=\fINOTEBOOK_BATCH_EXECUTORS_COUNT\fR\ \fB\-\-notebook\-batch\-max\-executors\-count\fR=\fINOTEBOOK_BATCH_MAX_EXECUTORS_COUNT\fR\ \fB\-\-notebook\-container\-image\fR=\fINOTEBOOK_CONTAINER_IMAGE\fR\ \fB\-\-notebook\-container\-image\-java\-jars\fR=[\fINOTEBOOK_CONTAINER_IMAGE_JAVA_JARS\fR,...]\ \fB\-\-notebook\-container\-image\-properties\fR=[\fIKEY\fR=\fIVALUE\fR,...]\ \fB\-\-notebook\-vpc\-network\-tags\fR=[\fINOTEBOOK_VPC_NETWORK_TAGS\fR,...]\ \fB\-\-notebook\-vpc\-network\-name\fR=\fINOTEBOOK_VPC_NETWORK_NAME\fR\ |\ \fB\-\-notebook\-vpc\-sub\-network\-name\fR=\fINOTEBOOK_VPC_SUB_NETWORK_NAME\fR\ |\ \fB\-\-spark\-archive\-uris\fR=[\fISPARK_ARCHIVE_URIS\fR,...]\ \fB\-\-spark\-file\-uris\fR=[\fISPARK_FILE_URIS\fR,...]\ \fB\-\-batch\-executors\-count\fR=\fIBATCH_EXECUTORS_COUNT\fR\ \fB\-\-batch\-max\-executors\-count\fR=\fIBATCH_MAX_EXECUTORS_COUNT\fR\ \fB\-\-container\-image\fR=\fICONTAINER_IMAGE\fR\ \fB\-\-container\-image\-java\-jars\fR=[\fICONTAINER_IMAGE_JAVA_JARS\fR,...]\ \fB\-\-container\-image\-properties\fR=[\fIKEY\fR=\fIVALUE\fR,...]\ 
\fB\-\-container\-image\-python\-packages\fR=[\fICONTAINER_IMAGE_PYTHON_PACKAGES\fR,...]\ \fB\-\-vpc\-network\-tags\fR=[\fIVPC_NETWORK_TAGS\fR,...]\ \fB\-\-vpc\-network\-name\fR=\fIVPC_NETWORK_NAME\fR\ \fB\-\-vpc\-sub\-network\-name\fR=\fIVPC_SUB_NETWORK_NAME\fR\ \fB\-\-spark\-main\-class\fR=\fISPARK_MAIN_CLASS\fR\ |\ \fB\-\-spark\-main\-jar\-file\-uri\fR=\fISPARK_MAIN_JAR_FILE_URI\fR\ |\ \fB\-\-spark\-python\-script\-file\fR=\fISPARK_PYTHON_SCRIPT_FILE\fR\ |\ \fB\-\-spark\-sql\-script\fR=\fISPARK_SQL_SCRIPT\fR\ |\ \fB\-\-spark\-sql\-script\-file\fR=\fISPARK_SQL_SCRIPT_FILE\fR] [\fB\-\-trigger\-disabled\fR\ \fB\-\-trigger\-max\-retires\fR=\fITRIGGER_MAX_RETIRES\fR\ \fB\-\-trigger\-schedule\fR=\fITRIGGER_SCHEDULE\fR\ \fB\-\-trigger\-start\-time\fR=\fITRIGGER_START_TIME\fR] [\fIGCLOUD_WIDE_FLAG\ ...\fR]
.SH "DESCRIPTION"
\fB(ALPHA)\fR Update a Dataplex task resource with the given configurations.
.SH "EXAMPLES"
To update a Dataplex task \f5test\-task\fR within lake \f5test\-lake\fR in
location \f5us\-central1\fR and change the description to \f5Updated
Description\fR, run:
.RS 2m
$ gcloud alpha dataplex tasks update \e
projects/test\-project/locations/us\-central1/lakes/test\-lake/\e
tasks/test\-task \-\-description='Updated Description'
.RE
.SH "POSITIONAL ARGUMENTS"
.RS 2m
.TP 2m
Task resource \- Arguments and flags that specify the Dataplex Task you want to
update. The arguments in this group can be used to specify the attributes of
this resource. (NOTE) Some attributes are not given arguments in this group but
can be set in other ways.
To set the \f5project\fR attribute:
.RS 2m
.IP "\(em" 2m
provide the argument \f5task\fR on the command line with a fully specified name;
.IP "\(em" 2m
provide the argument \f5\-\-project\fR on the command line;
.IP "\(em" 2m
set the property \f5core/project\fR.
.RE
.sp
This must be specified.
.RS 2m
.TP 2m
\fITASK\fR
ID of the task or fully qualified identifier for the task.
To set the \f5task\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5task\fR on the command line.
.RE
.sp
This positional argument must be specified if any of the other arguments in this
group are specified.
.TP 2m
\fB\-\-lake\fR=\fILAKE\fR
Identifier of the Dataplex lake resource.
To set the \f5lake\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5task\fR on the command line with a fully specified name;
.IP "\(bu" 2m
provide the argument \f5\-\-lake\fR on the command line.
.RE
.sp
.TP 2m
\fB\-\-location\fR=\fILOCATION\fR
Location of the Dataplex resource.
To set the \f5location\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5task\fR on the command line with a fully specified name;
.IP "\(bu" 2m
provide the argument \f5\-\-location\fR on the command line;
.IP "\(bu" 2m
set the property \f5dataplex/location\fR.
.RE
.sp
.RE
.RE
.sp
.SH "FLAGS"
.RS 2m
.TP 2m
\fB\-\-async\fR
Return immediately, without waiting for the operation in progress to complete.
.TP 2m
\fB\-\-description\fR=\fIDESCRIPTION\fR
Description of the task.
.TP 2m
\fB\-\-display\-name\fR=\fIDISPLAY_NAME\fR
Display name of the task.
.TP 2m
\fB\-\-update\-labels\fR=[\fIKEY\fR=\fIVALUE\fR,...]
List of label KEY=VALUE pairs to update. If a label exists, its value is
modified. Otherwise, a new label is created.
Keys must start with a lowercase character and contain only hyphens (\f5\-\fR),
underscores (\f5_\fR), lowercase characters, and numbers. Values must contain
only hyphens (\f5\-\fR), underscores (\f5_\fR), lowercase characters, and
numbers.
.TP 2m
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-clear\-labels\fR
Remove all labels. If \f5\-\-update\-labels\fR is also specified then
\f5\-\-clear\-labels\fR is applied first.
For example, to remove all labels:
.RS 2m
$ gcloud alpha dataplex tasks update \-\-clear\-labels
.RE
To remove all existing labels and create two new labels, \f5\fIfoo\fR\fR and
\f5\fIbaz\fR\fR:
.RS 2m
$ gcloud alpha dataplex tasks update \-\-clear\-labels \e
\-\-update\-labels foo=bar,baz=qux
.RE
.TP 2m
\fB\-\-remove\-labels\fR=[\fIKEY\fR,...]
List of label keys to remove. If a label does not exist it is silently ignored.
If \f5\-\-update\-labels\fR is also specified then \f5\-\-update\-labels\fR is
applied first.
.RE
.sp
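For example, to change the value of the label \f5env\fR to \f5prod\fR and
remove the label \f5tmp\fR (hypothetical label keys) on the task
\f5test\-task\fR, run:
.RS 2m
$ gcloud alpha dataplex tasks update test\-task \-\-lake=test\-lake \e
\-\-location=us\-central1 \-\-update\-labels=env=prod \e
\-\-remove\-labels=tmp
.RE
.sp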
.TP 2m
Spec related to how a task is executed.
.RS 2m
.TP 2m
\fB\-\-execution\-args\fR=[\fIKEY\fR=\fIVALUE\fR,...]
The arguments to pass to the task. The args can use placeholders of the format
\f5${placeholder}\fR as part of a key/value string. These will be interpolated
before passing the args to the driver. Currently supported placeholders:
.RS 2m
.IP "\(bu" 2m
\f5${task_id}\fR
.IP "\(bu" 2m
\f5${job_time}\fR
.RE
.sp
To pass positional args, set the key to \f5TASK_ARGS\fR. The value should be a
comma\-separated string of all the positional arguments. To use a delimiter
other than a comma, refer to
https://cloud.google.com/sdk/gcloud/reference/topic/escaping. If other keys are
present in the args, \f5TASK_ARGS\fR is passed as the last argument.
.sp
.TP 2m
\fB\-\-execution\-project\fR=\fIEXECUTION_PROJECT\fR
The project in which jobs are run. By default, the project containing the Lake
is used. If a project is provided, the \-\-execution\-service\-account must
belong to this same project.
.TP 2m
\fB\-\-execution\-service\-account\fR=\fIEXECUTION_SERVICE_ACCOUNT\fR
Service account to use to execute a task. If not provided, the default Compute
service account for the project is used.
.TP 2m
\fB\-\-kms\-key\fR=\fIKMS_KEY\fR
The Cloud KMS key to use for encryption, of the form:
projects/{project_number}/locations/{location_id}/keyRings/{key\-ring\-name}/cryptoKeys/{key\-name}
.TP 2m
\fB\-\-max\-job\-execution\-lifetime\fR=\fIMAX_JOB_EXECUTION_LIFETIME\fR
The maximum duration before the job execution expires.
.RE
.sp
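For example, to pass a hypothetical \f5output\fR argument and a positional
argument to the task \f5test\-task\fR, run:
.RS 2m
$ gcloud alpha dataplex tasks update test\-task \-\-lake=test\-lake \e
\-\-location=us\-central1 \e
\-\-execution\-args=output=gs://my\-bucket/out,TASK_ARGS=arg1
.RE
.sp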
.TP 2m
Select which type of task you want to schedule and provide the required
arguments:
.RS 2m
.IP "\(em" 2m
spark tasks
.IP "\(em" 2m
notebook tasks
.RE
.sp
At most one of these can be specified:
.RS 2m
.TP 2m
Config related to running custom Notebook tasks.
.RS 2m
.TP 2m
\fB\-\-notebook\fR=\fINOTEBOOK\fR
Google Cloud Storage URI of the notebook file, or the path to a notebook
content resource, to use as the input notebook.
.TP 2m
\fB\-\-notebook\-archive\-uris\fR=[\fINOTEBOOK_ARCHIVE_URIS\fR,...]
Google Cloud Storage URIs of archives to be extracted into the working directory
of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
.TP 2m
\fB\-\-notebook\-file\-uris\fR=[\fINOTEBOOK_FILE_URIS\fR,...]
Google Cloud Storage URIs of files to be placed in the working directory of each
executor.
.TP 2m
Compute resources needed for a Task when using Dataproc Serverless.
.RS 2m
.TP 2m
\fB\-\-notebook\-batch\-executors\-count\fR=\fINOTEBOOK_BATCH_EXECUTORS_COUNT\fR
Total number of job executors.
.TP 2m
\fB\-\-notebook\-batch\-max\-executors\-count\fR=\fINOTEBOOK_BATCH_MAX_EXECUTORS_COUNT\fR
Maximum number of configurable executors. If max_executors_count >
executors_count, then auto\-scaling is enabled.
.RE
.sp
.TP 2m
Container Image Runtime Configuration.
.RS 2m
.TP 2m
\fB\-\-notebook\-container\-image\fR=\fINOTEBOOK_CONTAINER_IMAGE\fR
Optional custom container image for the job.
.TP 2m
\fB\-\-notebook\-container\-image\-java\-jars\fR=[\fINOTEBOOK_CONTAINER_IMAGE_JAVA_JARS\fR,...]
A list of Java JARs to add to the classpath. Valid inputs include Cloud Storage
URIs to JAR binaries, for example, gs://bucket\-name/my/path/to/file.jar
.TP 2m
\fB\-\-notebook\-container\-image\-properties\fR=[\fIKEY\fR=\fIVALUE\fR,...]
Override of the common configuration of open source components installed on the
Dataproc cluster: the properties to set on daemon config files. Property keys
are specified in \f5prefix:property\fR format, for example
\f5core:hadoop.tmp.dir\fR. For more information, see Cluster properties
(https://cloud.google.com/dataproc/docs/concepts/cluster\-properties).
.RE
.sp
.TP 2m
Cloud VPC Network used to run the infrastructure.
.RS 2m
.TP 2m
\fB\-\-notebook\-vpc\-network\-tags\fR=[\fINOTEBOOK_VPC_NETWORK_TAGS\fR,...]
List of network tags to apply to the job.
.TP 2m
The Cloud VPC network identifier.
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-notebook\-vpc\-network\-name\fR=\fINOTEBOOK_VPC_NETWORK_NAME\fR
The Cloud VPC network in which the job is run. By default, the Cloud VPC network
named Default within the project is used.
.TP 2m
\fB\-\-notebook\-vpc\-sub\-network\-name\fR=\fINOTEBOOK_VPC_SUB_NETWORK_NAME\fR
The Cloud VPC sub\-network in which the job is run.
.RE
.RE
.RE
.sp
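For example, to point the task \f5test\-task\fR at a notebook stored in a
hypothetical Cloud Storage bucket, run:
.RS 2m
$ gcloud alpha dataplex tasks update test\-task \-\-lake=test\-lake \e
\-\-location=us\-central1 \e
\-\-notebook=gs://my\-bucket/notebooks/task.ipynb
.RE
.sp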
.TP 2m
Config related to running custom Spark tasks.
.RS 2m
.TP 2m
\fB\-\-spark\-archive\-uris\fR=[\fISPARK_ARCHIVE_URIS\fR,...]
Google Cloud Storage URIs of archives to be extracted into the working directory
of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
.TP 2m
\fB\-\-spark\-file\-uris\fR=[\fISPARK_FILE_URIS\fR,...]
Google Cloud Storage URIs of files to be placed in the working directory of each
executor.
.TP 2m
Compute resources needed for a Task when using Dataproc Serverless.
.RS 2m
.TP 2m
\fB\-\-batch\-executors\-count\fR=\fIBATCH_EXECUTORS_COUNT\fR
Total number of job executors.
.TP 2m
\fB\-\-batch\-max\-executors\-count\fR=\fIBATCH_MAX_EXECUTORS_COUNT\fR
Maximum number of configurable executors. If max_executors_count >
executors_count, then auto\-scaling is enabled.
.RE
.sp
.TP 2m
Container Image Runtime Configuration.
.RS 2m
.TP 2m
\fB\-\-container\-image\fR=\fICONTAINER_IMAGE\fR
Optional custom container image for the job.
.TP 2m
\fB\-\-container\-image\-java\-jars\fR=[\fICONTAINER_IMAGE_JAVA_JARS\fR,...]
A list of Java JARs to add to the classpath. Valid inputs include Cloud Storage
URIs to JAR binaries, for example, gs://bucket\-name/my/path/to/file.jar
.TP 2m
\fB\-\-container\-image\-properties\fR=[\fIKEY\fR=\fIVALUE\fR,...]
Override of the common configuration of open source components installed on the
Dataproc cluster: the properties to set on daemon config files. Property keys
are specified in \f5prefix:property\fR format, for example
\f5core:hadoop.tmp.dir\fR. For more information, see Cluster properties
(https://cloud.google.com/dataproc/docs/concepts/cluster\-properties).
.TP 2m
\fB\-\-container\-image\-python\-packages\fR=[\fICONTAINER_IMAGE_PYTHON_PACKAGES\fR,...]
A list of Python packages to be installed. Valid inputs include Cloud Storage
URIs to pip\-installable libraries, for example,
gs://bucket\-name/my/path/to/lib.tar.gz
.RE
.sp
.TP 2m
Cloud VPC Network used to run the infrastructure.
.RS 2m
.TP 2m
\fB\-\-vpc\-network\-tags\fR=[\fIVPC_NETWORK_TAGS\fR,...]
List of network tags to apply to the job.
.TP 2m
The Cloud VPC network identifier.
.RS 2m
.TP 2m
\fB\-\-vpc\-network\-name\fR=\fIVPC_NETWORK_NAME\fR
The Cloud VPC network in which the job is run. By default, the Cloud VPC network
named Default within the project is used.
.TP 2m
\fB\-\-vpc\-sub\-network\-name\fR=\fIVPC_SUB_NETWORK_NAME\fR
The Cloud VPC sub\-network in which the job is run.
.RE
.RE
.sp
.TP 2m
The specification of the main method to call to drive the job. Specify either
the jar file that contains the main class or the main class name.
At most one of these can be specified:
.RS 2m
.TP 2m
\fB\-\-spark\-main\-class\fR=\fISPARK_MAIN_CLASS\fR
The name of the driver's main class. The jar file that contains the class must
be in the default CLASSPATH or specified in
\f5\-\-container\-image\-java\-jars\fR.
.TP 2m
\fB\-\-spark\-main\-jar\-file\-uri\fR=\fISPARK_MAIN_JAR_FILE_URI\fR
The Google Cloud Storage URI of the jar file that contains the main class. The
execution args are passed in as a sequence of named process arguments
(\f5\-\-key=value\fR).
.TP 2m
\fB\-\-spark\-python\-script\-file\fR=\fISPARK_PYTHON_SCRIPT_FILE\fR
The Google Cloud Storage URI of the main Python file to use as the driver. Must
be a .py file.
.TP 2m
\fB\-\-spark\-sql\-script\fR=\fISPARK_SQL_SCRIPT\fR
The SQL query text.
.TP 2m
\fB\-\-spark\-sql\-script\-file\fR=\fISPARK_SQL_SCRIPT_FILE\fR
A reference to a query file. This can be the Google Cloud Storage URI of the
query file, or it can be the path to a SqlScript Content.
.RE
.RE
.RE
.sp
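For example, to update the task \f5test\-task\fR to run a PySpark script from a
hypothetical Cloud Storage path, run:
.RS 2m
$ gcloud alpha dataplex tasks update test\-task \-\-lake=test\-lake \e
\-\-location=us\-central1 \e
\-\-spark\-python\-script\-file=gs://my\-bucket/scripts/job.py
.RE
.sp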
.TP 2m
Spec related to how often and when a task should be triggered.
.RS 2m
.TP 2m
\fB\-\-trigger\-disabled\fR
Prevent the task from executing. This does not cancel already running tasks. It
is intended to temporarily disable RECURRING tasks.
.TP 2m
\fB\-\-trigger\-max\-retires\fR=\fITRIGGER_MAX_RETIRES\fR
Number of retry attempts before aborting. Set to zero to never attempt to retry
a failed task.
.TP 2m
\fB\-\-trigger\-schedule\fR=\fITRIGGER_SCHEDULE\fR
Cron schedule (https://en.wikipedia.org/wiki/Cron) for running tasks
periodically.
.TP 2m
\fB\-\-trigger\-start\-time\fR=\fITRIGGER_START_TIME\fR
The first run of the task will be after this time. If not specified, the task
will run shortly after being submitted if ON_DEMAND and based on the schedule if
RECURRING.
.RE
.RE
.sp
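For example, to schedule the task \f5test\-task\fR to run daily at midnight
with at most three retry attempts, run:
.RS 2m
$ gcloud alpha dataplex tasks update test\-task \-\-lake=test\-lake \e
\-\-location=us\-central1 \-\-trigger\-schedule="0 0 * * *" \e
\-\-trigger\-max\-retires=3
.RE
.sp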
.SH "GCLOUD WIDE FLAGS"
These flags are available to all commands: \-\-access\-token\-file, \-\-account,
\-\-billing\-project, \-\-configuration, \-\-flags\-file, \-\-flatten,
\-\-format, \-\-help, \-\-impersonate\-service\-account, \-\-log\-http,
\-\-project, \-\-quiet, \-\-trace\-token, \-\-user\-output\-enabled,
\-\-verbosity.
Run \fB$ gcloud help\fR for details.
.SH "API REFERENCE"
This command uses the \fBdataplex/v1\fR API. The full documentation for this API
can be found at: https://cloud.google.com/dataplex/docs
.SH "NOTES"
This command is currently in alpha and might change without notice. If this
command fails with API permission errors despite specifying the correct project,
you might be trying to access an API with an invitation\-only early access
allowlist. This variant is also available:
.RS 2m
$ gcloud dataplex tasks update
.RE