.TH "GCLOUD_BETA_AI_CUSTOM\-JOBS_CREATE" 1



.SH "NAME"
.HP
gcloud beta ai custom\-jobs create \- create a new custom job



.SH "SYNOPSIS"
.HP
\f5gcloud beta ai custom\-jobs create\fR \fB\-\-display\-name\fR=\fIDISPLAY_NAME\fR (\fB\-\-config\fR=\fICONFIG\fR\ \fB\-\-worker\-pool\-spec\fR=[\fIWORKER_POOL_SPEC\fR,...]) [\fB\-\-args\fR=[\fIARG\fR,...]] [\fB\-\-command\fR=[\fICOMMAND\fR,...]] [\fB\-\-enable\-dashboard\-access\fR] [\fB\-\-enable\-web\-access\fR] [\fB\-\-labels\fR=[\fIKEY\fR=\fIVALUE\fR,...]] [\fB\-\-network\fR=\fINETWORK\fR] [\fB\-\-persistent\-resource\-id\fR=\fIPERSISTENT_RESOURCE_ID\fR] [\fB\-\-python\-package\-uris\fR=[\fIPYTHON_PACKAGE_URIS\fR,...]] [\fB\-\-region\fR=\fIREGION\fR] [\fB\-\-service\-account\fR=\fISERVICE_ACCOUNT\fR] [\fB\-\-kms\-key\fR=\fIKMS_KEY\fR\ :\ \fB\-\-kms\-keyring\fR=\fIKMS_KEYRING\fR\ \fB\-\-kms\-location\fR=\fIKMS_LOCATION\fR\ \fB\-\-kms\-project\fR=\fIKMS_PROJECT\fR] [\fIGCLOUD_WIDE_FLAG\ ...\fR]



.SH "DESCRIPTION"

\fB(BETA)\fR This command will attempt to run the custom job immediately upon
creation.



.SH "EXAMPLES"

To create a job under project \f5\fIexample\fR\fR in region
\f5\fIus\-central1\fR\fR, run:

.RS 2m
$ gcloud beta ai custom\-jobs create \-\-region=us\-central1 \e
    \-\-project=example \e
    \-\-worker\-pool\-spec=replica\-count=1,machine\-type='n1\-highmem\-2',\e
container\-image\-uri='gcr.io/ucaip\-test/ucaip\-training\-test' \e
    \-\-display\-name=test
.RE



.SH "REQUIRED FLAGS"

.RS 2m
.TP 2m
\fB\-\-display\-name\fR=\fIDISPLAY_NAME\fR

Display name of the custom job to create.

.TP 2m

Worker pool specification.

At least one of these must be specified:


.RS 2m
.TP 2m
\fB\-\-config\fR=\fICONFIG\fR

Path to the job configuration file. This file should be a YAML document
containing a `CustomJobSpec`
(https://cloud.google.com/vertex\-ai/docs/reference/rest/v1/CustomJobSpec). If
an option is specified both in the configuration file **and** via command\-line
arguments, the command\-line arguments override the configuration file. Note
that keys containing underscores are invalid.

Example (YAML):

.RS 2m
workerPoolSpecs:
  machineSpec:
    machineType: n1\-highmem\-2
  replicaCount: 1
  containerSpec:
    imageUri: gcr.io/ucaip\-test/ucaip\-training\-test
    args:
    \- port=8500
    command:
    \- start
.RE
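
For instance, assuming the YAML above is saved as \f5config.yaml\fR (a
hypothetical filename), the job could be created with:

.RS 2m
$ gcloud beta ai custom\-jobs create \-\-region=us\-central1 \e
    \-\-display\-name=test \-\-config=config.yaml
.RE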

.TP 2m
\fB\-\-worker\-pool\-spec\fR=[\fIWORKER_POOL_SPEC\fR,...]

Define the worker pool configuration used by the custom job. You can specify
multiple worker pool specs in order to create a custom job with multiple worker
pools.

The spec can contain the following fields:

.RS 2m
.TP 2m
\fBmachine\-type\fR
(Required): The type of the machine. See
https://cloud.google.com/vertex\-ai/docs/training/configure\-compute#machine\-types
for supported types. This corresponds to the \f5machineSpec.machineType\fR
field in the \f5WorkerPoolSpec\fR API message.
.TP 2m
\fBreplica\-count\fR
The number of worker replicas to use for this worker pool; the default value is
1. This corresponds to the \f5replicaCount\fR field in the
\f5WorkerPoolSpec\fR API message.
.TP 2m
\fBaccelerator\-type\fR
The type of GPUs. See
https://cloud.google.com/vertex\-ai/docs/training/configure\-compute#specifying_gpus
for the requirements. This corresponds to the
\f5machineSpec.acceleratorType\fR field in the \f5WorkerPoolSpec\fR API message.
.TP 2m
\fBaccelerator\-count\fR
The number of GPUs for each VM in the worker pool; the default value is 1. This
corresponds to the \f5machineSpec.acceleratorCount\fR field in the
\f5WorkerPoolSpec\fR API message.
.TP 2m
\fBcontainer\-image\-uri\fR
The URI of a container image to be run directly on each worker replica. This
corresponds to the \f5containerSpec.imageUri\fR field in the
\f5WorkerPoolSpec\fR API message.
.TP 2m
\fBexecutor\-image\-uri\fR
The URI of a container image that will run the provided package.
.TP 2m
\fBoutput\-image\-uri\fR
The URI of a custom container image to be built for autopackaged custom jobs.
.TP 2m
\fBpython\-module\fR
The Python module name to run within the provided package.
.TP 2m
\fBlocal\-package\-path\fR
The local path of a folder that contains training code.
.TP 2m
\fBscript\fR
The relative path under the \f5local\-package\-path\fR to a file to execute. It
can be a Python file or an arbitrary bash script.
.TP 2m
\fBrequirements\fR
Python dependencies to be installed from PyPI, separated by ";". Use this when
your training application requires public packages that are not included in the
base image. It has the same effect as editing a "requirements.txt" file under
\f5local\-package\-path\fR.
.TP 2m
\fBextra\-packages\fR
Relative paths of local Python archives to be installed, separated by ";". Use
this when your training application requires custom packages that are not
included in the base image. Every path should be relative to the
\f5local\-package\-path\fR.
.TP 2m
\fBextra\-dirs\fR
Relative paths of the folders under \f5local\-package\-path\fR to be copied into
the container, separated by ";". If not specified, only the parent directory
that contains the main executable (\f5script\fR or \f5python\-module\fR) will be
copied.


.RE
.sp
Note that some of these fields are used for different job creation methods and
are categorized as mutually exclusive groups listed below. Exactly one of these
groups of fields must be specified:


.RS 2m
.TP 2m
\f5container\-image\-uri\fR
Specify this field to use a custom container image for training. Together with
the \f5\-\-command\fR and \f5\-\-args\fR flags, this field represents a
`WorkerPoolSpec.ContainerSpec`
(https://cloud.google.com/vertex\-ai/docs/reference/rest/v1/CustomJobSpec?#containerspec)
message. In this case, the \f5\-\-python\-package\-uris\fR flag is disallowed.

Example:
\-\-worker\-pool\-spec=replica\-count=1,machine\-type=n1\-highmem\-2,container\-image\-uri=gcr.io/ucaip\-test/ucaip\-training\-test

.TP 2m
\f5executor\-image\-uri, python\-module\fR
Specify these fields to train using a pre\-built container and Python packages
that are already in Cloud Storage. Together with the
\f5\-\-python\-package\-uris\fR and \f5\-\-args\fR flags, these fields represent
a `WorkerPoolSpec.PythonPackageSpec`
(https://cloud.google.com/vertex\-ai/docs/reference/rest/v1/CustomJobSpec#pythonpackagespec)
message.

Example:
\-\-worker\-pool\-spec=machine\-type=e2\-standard\-4,executor\-image\-uri=us\-docker.pkg.dev/vertex\-ai/training/tf\-cpu.2\-4:latest,python\-module=trainer.task

.TP 2m
\f5output\-image\-uri\fR
Specify this field to push the output custom container training image to a
specific path in Container Registry or Artifact Registry for an autopackaged
custom job.

Example:
\-\-worker\-pool\-spec=machine\-type=e2\-standard\-4,executor\-image\-uri=us\-docker.pkg.dev/vertex\-ai/training/tf\-cpu.2\-4:latest,output\-image\-uri='eu.gcr.io/projectName/imageName',python\-module=trainer.task

.TP 2m
\f5local\-package\-path, executor\-image\-uri, output\-image\-uri, python\-module|script\fR
Specify these fields, optionally with \f5requirements\fR, \f5extra\-packages\fR,
or \f5extra\-dirs\fR, to train using a pre\-built container and Python code from
a local path. In this case, the \f5\-\-python\-package\-uris\fR flag is
disallowed.

Example using \f5python\-module\fR:
\-\-worker\-pool\-spec=machine\-type=e2\-standard\-4,replica\-count=1,executor\-image\-uri=us\-docker.pkg.dev/vertex\-ai/training/tf\-cpu.2\-4:latest,python\-module=trainer.task,local\-package\-path=/usr/page/application

Example using \f5script\fR:
\-\-worker\-pool\-spec=machine\-type=e2\-standard\-4,replica\-count=1,executor\-image\-uri=us\-docker.pkg.dev/vertex\-ai/training/tf\-cpu.2\-4:latest,script=my_run.sh,local\-package\-path=/usr/jeff/application


.RE
.RE
.RE
.sp

.SH "OPTIONAL FLAGS"

.RS 2m
.TP 2m
\fB\-\-args\fR=[\fIARG\fR,...]

Comma\-separated arguments passed to containers or Python tasks.

.TP 2m
\fB\-\-command\fR=[\fICOMMAND\fR,...]

Command to be invoked when containers are started. It overrides the entrypoint
instruction in Dockerfile when provided.
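
As an illustrative sketch, the two flags can be combined to override a
container's entrypoint (\f5main.py\fR and the argument values below are
hypothetical, not defaults):

.RS 2m
\-\-command=python3,main.py \-\-args=\-\-epochs=10,\-\-batch\-size=32
.RE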

.TP 2m
\fB\-\-enable\-dashboard\-access\fR

Whether you want Vertex AI to enable a dashboard built on the training
containers. If set to \f5\fItrue\fR\fR, you can access the dashboard at the
URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within
HyperparameterTuningJob.trials).

.TP 2m
\fB\-\-enable\-web\-access\fR

Whether you want Vertex AI to enable interactive shell access
(https://cloud.google.com/vertex\-ai/docs/training/monitor\-debug\-interactive\-shell)
to training containers. If set to \f5\fItrue\fR\fR, you can access interactive
shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris
(within HyperparameterTuningJob.trials).

.TP 2m
\fB\-\-labels\fR=[\fIKEY\fR=\fIVALUE\fR,...]

List of label KEY=VALUE pairs to add.

Keys must start with a lowercase character and contain only hyphens (\f5\-\fR),
underscores (\f5_\fR), lowercase characters, and numbers. Values must contain
only hyphens (\f5\-\fR), underscores (\f5_\fR), lowercase characters, and
numbers.
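
For example, labels conforming to these rules might look like the following
(the key and value names are hypothetical):

.RS 2m
\-\-labels=env=dev,team=training\-infra
.RE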

.TP 2m
\fB\-\-network\fR=\fINETWORK\fR

Full name of the Google Compute Engine network to which the Job is peered.
Private services access must already be configured. If unspecified, the Job is
not peered with any network.

.TP 2m
\fB\-\-persistent\-resource\-id\fR=\fIPERSISTENT_RESOURCE_ID\fR

The name of the persistent resource from the same project and region on which to
run this custom job.

If this is specified, the job will be run on existing machines held by the
PersistentResource instead of on\-demand short\-lived machines. The network and
CMEK configs on the job should be consistent with those on the
PersistentResource, otherwise, the job will be rejected.

.TP 2m
\fB\-\-python\-package\-uris\fR=[\fIPYTHON_PACKAGE_URIS\fR,...]

The common Python package URIs to be used for training with a pre\-built
container image, e.g. \f5\-\-python\-package\-uris=path1,path2\fR. If you are
using multiple worker pools and want to specify a different Python package for
each pool, use \f5\-\-config\fR instead.
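
A sketch of how this flag might combine with a pre\-built container spec (the
Cloud Storage path and module name below are hypothetical):

.RS 2m
$ gcloud beta ai custom\-jobs create \-\-region=us\-central1 \e
    \-\-display\-name=test \e
    \-\-worker\-pool\-spec=machine\-type=e2\-standard\-4,\e
executor\-image\-uri=us\-docker.pkg.dev/vertex\-ai/training/tf\-cpu.2\-4:latest,\e
python\-module=trainer.task \e
    \-\-python\-package\-uris=gs://my\-bucket/trainer\-0.1.tar.gz
.RE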

.TP 2m

Region resource \- Cloud region to create a custom job. This represents a Cloud
resource. (NOTE) Some attributes are not given arguments in this group but can
be set in other ways.

To set the \f5project\fR attribute:
.RS 2m
.IP "\(em" 2m
provide the argument \f5\-\-region\fR on the command line with a fully specified
name;
.IP "\(em" 2m
set the property \f5ai/region\fR with a fully specified name;
.IP "\(em" 2m
choose one from the prompted list of available regions with a fully specified
name;
.IP "\(em" 2m
provide the argument \f5\-\-project\fR on the command line;
.IP "\(em" 2m
set the property \f5core/project\fR.
.RE
.sp


.RS 2m
.TP 2m
\fB\-\-region\fR=\fIREGION\fR

ID of the region or fully qualified identifier for the region.

To set the \f5region\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5\-\-region\fR on the command line;
.IP "\(bu" 2m
set the property \f5ai/region\fR;
.IP "\(bu" 2m
choose one from the prompted list of available regions.
.RE
.sp
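For example, a default region can be set once via the property (the region
value is illustrative):

.RS 2m
$ gcloud config set ai/region us\-central1
.RE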

.RE
.sp
.TP 2m
\fB\-\-service\-account\fR=\fISERVICE_ACCOUNT\fR

The email address of a service account to use when running the training
application. You must have the \f5iam.serviceAccounts.actAs\fR permission for
the specified service account.
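
One way to obtain that permission is through the
\f5roles/iam.serviceAccountUser\fR role, which includes
\f5iam.serviceAccounts.actAs\fR; the account names below are hypothetical:

.RS 2m
$ gcloud iam service\-accounts add\-iam\-policy\-binding \e
    trainer@my\-project.iam.gserviceaccount.com \e
    \-\-member=user:you@example.com \e
    \-\-role=roles/iam.serviceAccountUser
.RE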

.TP 2m

Key resource \- The Cloud KMS (Key Management Service) cryptokey that will be
used to protect the custom job. The 'Vertex AI Service Agent' service account
must hold permission 'Cloud KMS CryptoKey Encrypter/Decrypter'. The arguments in
this group can be used to specify the attributes of this resource.


.RS 2m
.TP 2m
\fB\-\-kms\-key\fR=\fIKMS_KEY\fR

ID of the key or fully qualified identifier for the key.

To set the \f5kms\-key\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5\-\-kms\-key\fR on the command line.
.RE
.sp

This flag argument must be specified if any of the other arguments in this group
are specified.
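
For reference, a fully qualified Cloud KMS key identifier follows this pattern
(the project, keyring, and key names here are placeholders):

.RS 2m
\-\-kms\-key=projects/my\-project/locations/us\-central1/\e
keyRings/my\-keyring/cryptoKeys/my\-key
.RE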

.TP 2m
\fB\-\-kms\-keyring\fR=\fIKMS_KEYRING\fR

The KMS keyring of the key.

To set the \f5kms\-keyring\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5\-\-kms\-key\fR on the command line with a fully
specified name;
.IP "\(bu" 2m
provide the argument \f5\-\-kms\-keyring\fR on the command line.
.RE
.sp

.TP 2m
\fB\-\-kms\-location\fR=\fIKMS_LOCATION\fR

The Google Cloud location for the key.

To set the \f5kms\-location\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5\-\-kms\-key\fR on the command line with a fully
specified name;
.IP "\(bu" 2m
provide the argument \f5\-\-kms\-location\fR on the command line.
.RE
.sp

.TP 2m
\fB\-\-kms\-project\fR=\fIKMS_PROJECT\fR

The Google Cloud project for the key.

To set the \f5kms\-project\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5\-\-kms\-key\fR on the command line with a fully
specified name;
.IP "\(bu" 2m
provide the argument \f5\-\-kms\-project\fR on the command line;
.IP "\(bu" 2m
set the property \f5core/project\fR.
.RE
.sp


.RE
.RE
.sp

.SH "GCLOUD WIDE FLAGS"

These flags are available to all commands: \-\-access\-token\-file, \-\-account,
\-\-billing\-project, \-\-configuration, \-\-flags\-file, \-\-flatten,
\-\-format, \-\-help, \-\-impersonate\-service\-account, \-\-log\-http,
\-\-project, \-\-quiet, \-\-trace\-token, \-\-user\-output\-enabled,
\-\-verbosity.

Run \fB$ gcloud help\fR for details.



.SH "NOTES"

This command is currently in beta and might change without notice. These
variants are also available:

.RS 2m
$ gcloud ai custom\-jobs create
.RE

.RS 2m
$ gcloud alpha ai custom\-jobs create
.RE