.TH "GCLOUD_ALPHA_AI\-PLATFORM_PREDICT" 1



.SH "NAME"
.HP
gcloud alpha ai\-platform predict \- run AI Platform online prediction



.SH "SYNOPSIS"
.HP
\f5gcloud alpha ai\-platform predict\fR \fB\-\-model\fR=\fIMODEL\fR (\fB\-\-json\-instances\fR=\fIJSON_INSTANCES\fR\ |\ \fB\-\-json\-request\fR=\fIJSON_REQUEST\fR\ |\ \fB\-\-text\-instances\fR=\fITEXT_INSTANCES\fR) [\fB\-\-region\fR=\fIREGION\fR] [\fB\-\-signature\-name\fR=\fISIGNATURE_NAME\fR] [\fB\-\-version\fR=\fIVERSION\fR] [\fIGCLOUD_WIDE_FLAG\ ...\fR]



.SH "DESCRIPTION"

\fB(ALPHA)\fR \f5gcloud alpha ai\-platform predict\fR sends a prediction request
to AI Platform for the given instances. This command reads up to 100
instances, though the service itself accepts instances up to the payload size
limit (currently 1.5MB). To predict on more instances, use batch prediction
via:

.RS 2m
$ gcloud alpha ai\-platform jobs submit prediction
.RE
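
For example, a minimal online prediction request might look like the following
(the model name and instances file are illustrative):

.RS 2m
$ gcloud alpha ai\-platform predict \-\-model=my_model \-\-json\-instances=instances.json
.RE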



.SH "REQUIRED FLAGS"

.RS 2m
.TP 2m
\fB\-\-model\fR=\fIMODEL\fR

Name of the model.

.TP 2m

Exactly one of these must be specified:


.RS 2m
.TP 2m
\fB\-\-json\-instances\fR=\fIJSON_INSTANCES\fR

Path to a local file from which instances are read. Instances are in
newline\-delimited JSON format.

An example of the JSON instances file:

.RS 2m
{"images": [0.0, ..., 0.1], "key": 3}
{"images": [0.0, ..., 0.1], "key": 2}
...
.RE

This flag accepts "\-" for stdin.
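
For example, instances could also be piped in on stdin instead of read from a
file (the model name is illustrative):

.RS 2m
$ cat instances.json | gcloud alpha ai\-platform predict \-\-model=my_model \-\-json\-instances=\-
.RE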

.TP 2m
\fB\-\-json\-request\fR=\fIJSON_REQUEST\fR

Path to a local file containing the body of the JSON request.

An example of a JSON request:

.RS 2m
{
  "instances": [
    {"x": [1, 2], "y": [3, 4]},
    {"x": [\-1, \-2], "y": [\-3, \-4]}
  ]
}
.RE

This flag accepts "\-" for stdin.
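
For example, a request could be sent from such a file as follows (the model
name and file name are illustrative):

.RS 2m
$ gcloud alpha ai\-platform predict \-\-model=my_model \-\-json\-request=request.json
.RE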

.TP 2m
\fB\-\-text\-instances\fR=\fITEXT_INSTANCES\fR

Path to a local file from which instances are read. Instances are in
newline\-delimited, UTF\-8 encoded text format.

An example of the text instances file:

.RS 2m
107,4.9,2.5,4.5,1.7
100,5.7,2.8,4.1,1.3
...
.RE

This flag accepts "\-" for stdin.
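
For example, a prediction request using a text instances file might look like
this (the model name and file name are illustrative):

.RS 2m
$ gcloud alpha ai\-platform predict \-\-model=my_model \-\-text\-instances=instances.txt
.RE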


.RE
.RE
.sp

.SH "OPTIONAL FLAGS"

.RS 2m
.TP 2m
\fB\-\-region\fR=\fIREGION\fR

Google Cloud region of the regional endpoint to use for this command. For the
global endpoint, the region needs to be specified as \f5global\fR.

Learn more about regional endpoints and see a list of available regions:
https://cloud.google.com/ai\-platform/prediction/docs/regional\-endpoints

\fIREGION\fR must be one of: \fBglobal\fR, \fBasia\-east1\fR,
\fBasia\-northeast1\fR, \fBasia\-southeast1\fR, \fBaustralia\-southeast1\fR,
\fBeurope\-west1\fR, \fBeurope\-west2\fR, \fBeurope\-west3\fR,
\fBeurope\-west4\fR, \fBnorthamerica\-northeast1\fR, \fBus\-central1\fR,
\fBus\-east1\fR, \fBus\-east4\fR, \fBus\-west1\fR.

.TP 2m
\fB\-\-signature\-name\fR=\fISIGNATURE_NAME\fR

Name of the signature defined in the SavedModel to use for this job. Defaults to
DEFAULT_SERVING_SIGNATURE_DEF_KEY in
https://www.tensorflow.org/api_docs/python/tf/compat/v1/saved_model/signature_constants,
which is "serving_default". Only applies to TensorFlow models.

.TP 2m
\fB\-\-version\fR=\fIVERSION\fR

Model version to be used.

If unspecified, the default version of the model is used. To list model
versions, run:

.RS 2m
$ gcloud alpha ai\-platform versions list
.RE
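
For example, to request predictions against a specific version rather than the
default (the model name, version, and file name are illustrative):

.RS 2m
$ gcloud alpha ai\-platform predict \-\-model=my_model \-\-version=v2 \-\-json\-instances=instances.json
.RE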


.RE
.sp

.SH "GCLOUD WIDE FLAGS"

These flags are available to all commands: \-\-access\-token\-file, \-\-account,
\-\-billing\-project, \-\-configuration, \-\-flags\-file, \-\-flatten,
\-\-format, \-\-help, \-\-impersonate\-service\-account, \-\-log\-http,
\-\-project, \-\-quiet, \-\-trace\-token, \-\-user\-output\-enabled,
\-\-verbosity.

Run \fB$ gcloud help\fR for details.



.SH "NOTES"

This command is currently in alpha and might change without notice. If this
command fails with API permission errors despite specifying the correct project,
you might be trying to access an API with an invitation\-only early access
allowlist. These variants are also available:

.RS 2m
$ gcloud ai\-platform predict
.RE

.RS 2m
$ gcloud beta ai\-platform predict
.RE