.TH "GCLOUD_BETA_AI_ENDPOINTS_EXPLAIN" 1



.SH "NAME"
.HP
gcloud beta ai endpoints explain \- request an online explanation from a Vertex AI endpoint



.SH "SYNOPSIS"
.HP
\f5gcloud beta ai endpoints explain\fR (\fIENDPOINT\fR\ :\ \fB\-\-region\fR=\fIREGION\fR) \fB\-\-json\-request\fR=\fIJSON_REQUEST\fR [\fB\-\-deployed\-model\-id\fR=\fIDEPLOYED_MODEL_ID\fR] [\fIGCLOUD_WIDE_FLAG\ ...\fR]



.SH "DESCRIPTION"

\fB(BETA)\fR \f5gcloud beta ai endpoints explain\fR sends an explanation request
to the Vertex AI endpoint for the given instances. This command reads up to 100
instances, though the service itself accepts as many instances as fit within
the payload size limit (currently 1.5MB).



.SH "EXAMPLES"

To send an explanation request to the endpoint using the JSON file input.json,
run:

.RS 2m
$ gcloud beta ai endpoints explain ENDPOINT_ID \e
    \-\-region=us\-central1 \-\-json\-request=input.json
.RE



.SH "POSITIONAL ARGUMENTS"

.RS 2m
.TP 2m

Endpoint resource \- The endpoint from which to request an online explanation.
The arguments in this group can be used to specify the attributes of this
resource. (NOTE) Some attributes are not given arguments in this group but can
be set in other ways.

To set the \f5project\fR attribute:
.RS 2m
.IP "\(em" 2m
provide the argument \f5endpoint\fR on the command line with a fully specified
name;
.IP "\(em" 2m
provide the argument \f5\-\-project\fR on the command line;
.IP "\(em" 2m
set the property \f5core/project\fR.
.RE
.sp

This must be specified.
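
For example, the \f5core/project\fR property mentioned above can be set with
\f5gcloud config\fR (\f5PROJECT_ID\fR is a placeholder for your project):

.RS 2m
$ gcloud config set project PROJECT_ID
.RE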


.RS 2m
.TP 2m
\fIENDPOINT\fR

ID of the endpoint or fully qualified identifier for the endpoint.

To set the \f5name\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5endpoint\fR on the command line.
.RE
.sp

This positional argument must be specified if any of the other arguments in this
group are specified.

.TP 2m
\fB\-\-region\fR=\fIREGION\fR

Cloud region for the endpoint.

To set the \f5region\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5endpoint\fR on the command line with a fully specified
name;
.IP "\(bu" 2m
provide the argument \f5\-\-region\fR on the command line;
.IP "\(bu" 2m
set the property \f5ai/region\fR;
.IP "\(bu" 2m
choose one from the prompted list of available regions.
.RE
.sp
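For example, the \f5ai/region\fR property can be set with \f5gcloud config\fR
so that \fB\-\-region\fR may be omitted on later invocations:

.RS 2m
$ gcloud config set ai/region us\-central1
.RE
.sp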


.RE
.RE
.sp

.SH "REQUIRED FLAGS"

.RS 2m
.TP 2m
\fB\-\-json\-request\fR=\fIJSON_REQUEST\fR

Path to a local file containing the body of a JSON request.

An example of a JSON request:

.RS 2m
{
  "instances": [
    {"x": [1, 2], "y": [3, 4]},
    {"x": [\-1, \-2], "y": [\-3, \-4]}
  ]
}
.RE

This flag accepts "\-" for stdin.
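
For example, the request body can be read from stdin instead of a file:

.RS 2m
$ cat input.json | gcloud beta ai endpoints explain ENDPOINT_ID \e
    \-\-region=us\-central1 \-\-json\-request=\-
.RE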


.RE
.sp

.SH "OPTIONAL FLAGS"

.RS 2m
.TP 2m
\fB\-\-deployed\-model\-id\fR=\fIDEPLOYED_MODEL_ID\fR

ID of the deployed model.
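
For example, to request an explanation from a specific deployed model
(\f5DEPLOYED_MODEL_ID\fR is a placeholder), combine this flag with the required
flags shown above:

.RS 2m
$ gcloud beta ai endpoints explain ENDPOINT_ID \e
    \-\-region=us\-central1 \-\-json\-request=input.json \e
    \-\-deployed\-model\-id=DEPLOYED_MODEL_ID
.RE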


.RE
.sp

.SH "GCLOUD WIDE FLAGS"

These flags are available to all commands: \-\-access\-token\-file, \-\-account,
\-\-billing\-project, \-\-configuration, \-\-flags\-file, \-\-flatten,
\-\-format, \-\-help, \-\-impersonate\-service\-account, \-\-log\-http,
\-\-project, \-\-quiet, \-\-trace\-token, \-\-user\-output\-enabled,
\-\-verbosity.

Run \fB$ gcloud help\fR for details.



.SH "NOTES"

This command is currently in beta and might change without notice. These
variants are also available:

.RS 2m
$ gcloud ai endpoints explain
.RE

.RS 2m
$ gcloud alpha ai endpoints explain
.RE