.TH "GCLOUD_AI_CUSTOM\-JOBS_LOCAL\-RUN" 1



.SH "NAME"
.HP
gcloud ai custom\-jobs local\-run \- run a custom training job locally



.SH "SYNOPSIS"
.HP
\f5gcloud ai custom\-jobs local\-run\fR  \fB\-\-executor\-image\-uri\fR=\fIIMAGE_URI\fR [\fB\-\-extra\-dirs\fR=[\fIEXTRA_DIR\fR,...]] [\fB\-\-extra\-packages\fR=[\fIPACKAGE\fR,...]] [\fB\-\-gpu\fR] [\fB\-\-local\-package\-path\fR=\fILOCAL_PATH\fR] [\fB\-\-output\-image\-uri\fR=\fIOUTPUT_IMAGE\fR] [\fB\-\-requirements\fR=[\fIREQUIREMENTS\fR,...]] [\fB\-\-service\-account\-key\-file\fR=\fIACCOUNT_KEY_FILE\fR] [\fB\-\-python\-module\fR=\fIPYTHON_MODULE\fR\ |\ \fB\-\-script\fR=\fISCRIPT\fR] [\fIGCLOUD_WIDE_FLAG\ ...\fR] [\-\-\ \fIARGS\fR\ ...]



.SH "DESCRIPTION"

Packages your training code into a Docker image and executes it locally.

Run this command from the top\-level folder that contains all the code and
resources you want to pack and run, or specify the '\-\-local\-package\-path'
flag (the work\-dir) to point to it. Any other path you specify via flags must
be a relative path under the work\-dir; otherwise it will be inaccessible.

Suppose your directories are structured as follows:

.RS 2m
/root
  \- my_project
      \- my_training
          \- task.py
          \- util.py
          \- setup.py
      \- other_modules
          \- some_module.py
      \- dataset
          \- small.dat
          \- large.dat
      \- config
      \- dep
          \- foo.tar.gz
      \- bar.whl
      \- requirements.txt
  \- another_project
      \- something
.RE

If you set 'my_project' as the package, execute task.py by specifying
"\-\-script=my_training/task.py" or "\-\-python\-module=my_training.task"; the
'requirements.txt' will be processed. You will also be able to install extra
packages, e.g. by specifying "\-\-extra\-packages=dep/foo.tar.gz,bar.whl", or
include extra directories, e.g. by specifying "\-\-extra\-dirs=dataset,config".
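
For example, with 'my_project' as the package and an illustrative image URI, a
full invocation combining these flags might look like:

.RS 2m
$ gcloud ai custom\-jobs local\-run \e
    \-\-local\-package\-path=/root/my_project \e
    \-\-python\-module=my_training.task \e
    \-\-executor\-image\-uri=gcr.io/my/image \e
    \-\-extra\-packages=dep/foo.tar.gz,bar.whl \e
    \-\-extra\-dirs=dataset,config
.RE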

If you set 'my_training' as the package, execute task.py by specifying
"\-\-script=task.py" or "\-\-python\-module=task"; the 'setup.py' will be
processed. However, you won't be able to access any files or directories
outside the 'my_training' folder.
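
Similarly, with 'my_training' as the package (image URI illustrative):

.RS 2m
$ gcloud ai custom\-jobs local\-run \e
    \-\-local\-package\-path=/root/my_project/my_training \e
    \-\-python\-module=task \e
    \-\-executor\-image\-uri=gcr.io/my/image
.RE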

See the help text of the corresponding flags for more details.



.SH "EXAMPLES"

To execute a Python module with required dependencies, run:

.RS 2m
$ gcloud ai custom\-jobs local\-run \-\-python\-module=my_training.task \e
    \-\-executor\-image\-uri=gcr.io/my/image \e
    \-\-requirements=pandas,scipy>=1.3.0
.RE

To execute a Python script using a local GPU, run:

.RS 2m
$ gcloud ai custom\-jobs local\-run \-\-script=my_training/task.py \e
    \-\-executor\-image\-uri=gcr.io/my/image \-\-gpu
.RE

To execute an arbitrary script with custom arguments, run:

.RS 2m
$ gcloud ai custom\-jobs local\-run \-\-script=my_run.sh \e
    \-\-executor\-image\-uri=gcr.io/my/image \-\- \-\-my\-arg bar \e
    \-\-enable_foo
.RE

To run an existing training container without building a new image, run:

.RS 2m
$ gcloud ai custom\-jobs local\-run \e
    \-\-executor\-image\-uri=gcr.io/my/custom\-training\-image
.RE



.SH "POSITIONAL ARGUMENTS"

.RS 2m
.TP 2m
[\-\- \fIARGS\fR ...]

Additional user arguments to be forwarded to your application.

The '\-\-' argument must be specified between gcloud specific args on the left
and ARGS on the right. Example:

.RS 2m
$ gcloud ai custom\-jobs local\-run \-\-script=my_run.sh \e
    \-\-executor\-image\-uri=gcr.io/my/image \-\- \-\-my\-arg bar \-\-enable_foo
.RE


.RE
.sp

.SH "REQUIRED FLAGS"

.RS 2m
.TP 2m
\fB\-\-executor\-image\-uri\fR=\fIIMAGE_URI\fR

URI or ID of the container image, either in the Container Registry or available
locally, that will run the application. See
https://cloud.google.com/vertex\-ai/docs/training/pre\-built\-containers for
available pre\-built container images provided by Vertex AI for training.


.RE
.sp

.SH "OPTIONAL FLAGS"

.RS 2m
.TP 2m
\fB\-\-extra\-dirs\fR=[\fIEXTRA_DIR\fR,...]

Extra directories under the working directory to include, besides the one that
contains the main executable.

By default, only the parent directory of the main script or python module is
copied to the container. For example, if the module is "training.task" or the
script is "training/task.py", the whole "training" directory, including its
sub\-directories, will always be copied to the container. You may specify this
flag to also copy other directories if necessary.

Note: if no parent directory is specified in 'python_module' or 'script', the
whole working directory is copied, so you don't need to specify this flag.

.TP 2m
\fB\-\-extra\-packages\fR=[\fIPACKAGE\fR,...]

Local paths to Python archives used as training dependencies in the image
container. These can be absolute or relative paths. However, they have to be
under the work_dir; otherwise, this tool will not be able to access them.

Example: 'dep1.tar.gz, ./downloads/dep2.whl'

.TP 2m
\fB\-\-gpu\fR

Enable GPU usage.

.TP 2m
\fB\-\-local\-package\-path\fR=\fILOCAL_PATH\fR

Local path of the directory where the python\-module or script exists. If not
specified, the directory from which you run this command is used.

Only the contents of this directory will be accessible to the built container
image.

.TP 2m
\fB\-\-output\-image\-uri\fR=\fIOUTPUT_IMAGE\fR

URI of the custom container image to be built, with your application packed in.

.TP 2m
\fB\-\-requirements\fR=[\fIREQUIREMENTS\fR,...]

Python dependencies from PyPI to be used when running the application. If this
is not specified, and there is no "setup.py" or "requirements.txt" in the
working directory, your application will only have access to what exists in the
base image, with no other dependencies.

Example: 'tensorflow\-cpu, pandas==1.2.0, matplotlib>=3.0.2'

.TP 2m
\fB\-\-service\-account\-key\-file\fR=\fIACCOUNT_KEY_FILE\fR

The JSON file of a Google Cloud service account private key. When specified, the
corresponding service account will be used to authenticate the local container
to access Google Cloud services. Note that the key file won't be copied into
the container; it will be mounted at runtime.
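
For example, to run with a service account key (the key path is illustrative):

.RS 2m
$ gcloud ai custom\-jobs local\-run \e
    \-\-python\-module=my_training.task \e
    \-\-executor\-image\-uri=gcr.io/my/image \e
    \-\-service\-account\-key\-file=/path/to/key.json
.RE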

.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-python\-module\fR=\fIPYTHON_MODULE\fR

Name of the python module to execute, in 'trainer.train' or 'train' format. Its
path should be relative to the \f5work_dir\fR.

.TP 2m
\fB\-\-script\fR=\fISCRIPT\fR

The relative path of the file to execute. Accepts a Python file or an arbitrary
bash script. This path should be relative to the \f5work_dir\fR.


.RE
.RE
.sp

.SH "GCLOUD WIDE FLAGS"

These flags are available to all commands: \-\-access\-token\-file, \-\-account,
\-\-billing\-project, \-\-configuration, \-\-flags\-file, \-\-flatten,
\-\-format, \-\-help, \-\-impersonate\-service\-account, \-\-log\-http,
\-\-project, \-\-quiet, \-\-trace\-token, \-\-user\-output\-enabled,
\-\-verbosity.

Run \fB$ gcloud help\fR for details.



.SH "NOTES"

These variants are also available:

.RS 2m
$ gcloud alpha ai custom\-jobs local\-run
.RE

.RS 2m
$ gcloud beta ai custom\-jobs local\-run
.RE