.TH "GCLOUD_AI_MODEL\-MONITORING\-JOBS_CREATE" 1



.SH "NAME"
.HP
gcloud ai model\-monitoring\-jobs create \- create a new Vertex AI model monitoring job



.SH "SYNOPSIS"
.HP
\f5gcloud ai model\-monitoring\-jobs create\fR \fB\-\-display\-name\fR=\fIDISPLAY_NAME\fR \fB\-\-emails\fR=[\fIEMAILS\fR,...] \fB\-\-endpoint\fR=\fIENDPOINT\fR \fB\-\-prediction\-sampling\-rate\fR=\fIPREDICTION_SAMPLING_RATE\fR [\fB\-\-analysis\-instance\-schema\fR=\fIANALYSIS_INSTANCE_SCHEMA\fR] [\fB\-\-[no\-]anomaly\-cloud\-logging\fR] [\fB\-\-labels\fR=[\fIKEY\fR=\fIVALUE\fR,...]] [\fB\-\-log\-ttl\fR=\fILOG_TTL\fR] [\fB\-\-monitoring\-frequency\fR=\fIMONITORING_FREQUENCY\fR;\ default=24] [\fB\-\-notification\-channels\fR=[\fINOTIFICATION_CHANNELS\fR,...]] [\fB\-\-predict\-instance\-schema\fR=\fIPREDICT_INSTANCE_SCHEMA\fR] [\fB\-\-region\fR=\fIREGION\fR] [\fB\-\-sample\-predict\-request\fR=\fISAMPLE_PREDICT_REQUEST\fR] [\fB\-\-kms\-key\fR=\fIKMS_KEY\fR\ :\ \fB\-\-kms\-keyring\fR=\fIKMS_KEYRING\fR\ \fB\-\-kms\-location\fR=\fIKMS_LOCATION\fR\ \fB\-\-kms\-project\fR=\fIKMS_PROJECT\fR] [\fB\-\-monitoring\-config\-from\-file\fR=\fIMONITORING_CONFIG_FROM_FILE\fR\ |\ \fB\-\-feature\-attribution\-thresholds\fR=[\fIKEY\fR=\fIVALUE\fR,...]\ \fB\-\-feature\-thresholds\fR=[\fIKEY\fR=\fIVALUE\fR,...]\ \fB\-\-target\-field\fR=\fITARGET_FIELD\fR\ \fB\-\-training\-sampling\-rate\fR=\fITRAINING_SAMPLING_RATE\fR;\ default=1.0\ \fB\-\-bigquery\-uri\fR=\fIBIGQUERY_URI\fR\ |\ \fB\-\-dataset\fR=\fIDATASET\fR\ |\ \fB\-\-data\-format\fR=\fIDATA_FORMAT\fR\ \fB\-\-gcs\-uris\fR=[\fIGCS_URIS\fR,...]] [\fIGCLOUD_WIDE_FLAG\ ...\fR]



.SH "DESCRIPTION"

Create a new Vertex AI model monitoring job.



.SH "EXAMPLES"

To create a model deployment monitoring job under project \f5\fIexample\fR\fR in
region \f5\fIus\-central1\fR\fR for endpoint \f5\fI123\fR\fR, run:

.RS 2m
$ gcloud ai model\-monitoring\-jobs create \-\-project=example \e
    \-\-region=us\-central1 \-\-display\-name=my_monitoring_job \e
    \-\-emails=a@gmail.com,b@gmail.com \-\-endpoint=123 \e
    \-\-prediction\-sampling\-rate=0.2
.RE

To create a model deployment monitoring job with drift detection for all the
deployed models under the endpoint \f5\fI123\fR\fR, run:

.RS 2m
$ gcloud ai model\-monitoring\-jobs create \-\-project=example \e
    \-\-region=us\-central1 \-\-display\-name=my_monitoring_job \e
    \-\-emails=a@gmail.com,b@gmail.com \-\-endpoint=123 \e
    \-\-prediction\-sampling\-rate=0.2 \e
    \-\-feature\-thresholds=feat1=0.1,feat2=0.2,feat3=0.2,feat4=0.3
.RE

To create a model deployment monitoring job with skew detection for all the
deployed models under the endpoint \f5\fI123\fR\fR, with training dataset from
Google Cloud Storage, run:

.RS 2m
$ gcloud ai model\-monitoring\-jobs create \-\-project=example \e
    \-\-region=us\-central1 \-\-display\-name=my_monitoring_job \e
    \-\-emails=a@gmail.com,b@gmail.com \-\-endpoint=123 \e
    \-\-prediction\-sampling\-rate=0.2 \e
    \-\-feature\-thresholds=feat1=0.1,feat2=0.2,feat3=0.2,feat4=0.3 \e
    \-\-target\-field=price \-\-data\-format=csv \e
    \-\-gcs\-uris=gs://test\-bucket/dataset.csv
.RE

To create a model deployment monitoring job with skew detection for all the
deployed models under the endpoint \f5\fI123\fR\fR, with training dataset from
Vertex AI dataset \f5\fI456\fR\fR, run:

.RS 2m
$ gcloud ai model\-monitoring\-jobs create \-\-project=example \e
    \-\-region=us\-central1 \-\-display\-name=my_monitoring_job \e
    \-\-emails=a@gmail.com,b@gmail.com \-\-endpoint=123 \e
    \-\-prediction\-sampling\-rate=0.2 \e
    \-\-feature\-thresholds=feat1=0.1,feat2=0.2,feat3=0.2,feat4=0.3 \e
    \-\-target\-field=price \-\-dataset=456
.RE

To create a model deployment monitoring job with different drift detection or
skew detection for different deployed models, run:

.RS 2m
$ gcloud ai model\-monitoring\-jobs create \-\-project=example \e
    \-\-region=us\-central1 \-\-display\-name=my_monitoring_job \e
    \-\-emails=a@gmail.com,b@gmail.com \-\-endpoint=123 \e
    \-\-prediction\-sampling\-rate=0.2 \e
    \-\-monitoring\-config\-from\-file=your_objective_config.yaml
.RE

After creating the monitoring job, be sure to send some predict requests to
the endpoint. They are used to generate metadata for analysis, such as the
predict and analysis instance schemas.
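
For example, assuming the endpoint \f5\fI123\fR\fR from the examples above and
a local request file named \f5request.json\fR (a hypothetical name), predict
requests could be sent with \f5gcloud ai endpoints predict\fR:

.RS 2m
$ gcloud ai endpoints predict 123 \-\-project=example \e
    \-\-region=us\-central1 \e
    \-\-json\-request=request.json
.RE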



.SH "REQUIRED FLAGS"

.RS 2m
.TP 2m
\fB\-\-display\-name\fR=\fIDISPLAY_NAME\fR

Display name of the model deployment monitoring job.

.TP 2m
\fB\-\-emails\fR=[\fIEMAILS\fR,...]

Comma\-separated list of email addresses, e.g. \-\-emails=a@gmail.com,b@gmail.com

.TP 2m
\fB\-\-endpoint\fR=\fIENDPOINT\fR

ID of the endpoint.

.TP 2m
\fB\-\-prediction\-sampling\-rate\fR=\fIPREDICTION_SAMPLING_RATE\fR

Prediction sampling rate.


.RE
.sp

.SH "OPTIONAL FLAGS"

.RS 2m
.TP 2m
\fB\-\-analysis\-instance\-schema\fR=\fIANALYSIS_INSTANCE_SCHEMA\fR

URI of a YAML schema file (in Google Cloud Storage) describing the format of a
single instance that you want TensorFlow Data Validation (TFDV) to analyze.

.TP 2m
\fB\-\-[no\-]anomaly\-cloud\-logging\fR

If true, detected anomalies are sent to Cloud Logging. Use
\fB\-\-anomaly\-cloud\-logging\fR to enable and
\fB\-\-no\-anomaly\-cloud\-logging\fR to disable.

.TP 2m
\fB\-\-labels\fR=[\fIKEY\fR=\fIVALUE\fR,...]

List of label KEY=VALUE pairs to add.

Keys must start with a lowercase character and contain only hyphens (\f5\-\fR),
underscores (\f5_\fR), lowercase characters, and numbers. Values must contain
only hyphens (\f5\-\fR), underscores (\f5_\fR), lowercase characters, and
numbers.
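
For example, with hypothetical keys and values:
\f5\-\-labels=env=prod,team=monitoring\fR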

.TP 2m
\fB\-\-log\-ttl\fR=\fILOG_TTL\fR

TTL, in days, of the BigQuery tables in user projects that store logs.

.TP 2m
\fB\-\-monitoring\-frequency\fR=\fIMONITORING_FREQUENCY\fR; default=24

Monitoring frequency, expressed in hours.

.TP 2m
\fB\-\-notification\-channels\fR=[\fINOTIFICATION_CHANNELS\fR,...]

Comma\-separated list of notification channels, e.g.
\-\-notification\-channels=projects/fake\-project/notificationChannels/123,projects/fake\-project/notificationChannels/456

.TP 2m
\fB\-\-predict\-instance\-schema\fR=\fIPREDICT_INSTANCE_SCHEMA\fR

URI of a YAML schema file (in Google Cloud Storage) describing the format of a
single instance given to this Endpoint's prediction call. If not set, the
predict schema is generated from collected predict requests.

.TP 2m

Region resource \- Cloud region in which to create the model deployment
monitoring job. This represents a Cloud resource. (NOTE) Some attributes are
not given arguments in this group but can be set in other ways.

To set the \f5project\fR attribute:
.RS 2m
.IP "\(em" 2m
provide the argument \f5\-\-region\fR on the command line with a fully specified
name;
.IP "\(em" 2m
set the property \f5ai/region\fR with a fully specified name;
.IP "\(em" 2m
choose one from the prompted list of available regions with a fully specified
name;
.IP "\(em" 2m
provide the argument \f5\-\-project\fR on the command line;
.IP "\(em" 2m
set the property \f5core/project\fR.
.RE
.sp


.RS 2m
.TP 2m
\fB\-\-region\fR=\fIREGION\fR

ID of the region or fully qualified identifier for the region.

To set the \f5region\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5\-\-region\fR on the command line;
.IP "\(bu" 2m
set the property \f5ai/region\fR;
.IP "\(bu" 2m
choose one from the prompted list of available regions.
.RE
.sp

.RE
.sp
.TP 2m
\fB\-\-sample\-predict\-request\fR=\fISAMPLE_PREDICT_REQUEST\fR

Path to a local file containing the body of a JSON object. The format is the
same as [PredictRequest.instances][]; this can be set as a replacement for
\-\-predict\-instance\-schema. If not set, the predict schema is generated
from collected predict requests.

An example of a JSON request:

.RS 2m
{"x": [1, 2], "y": [3, 4]}
.RE
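
For instance, if that JSON body were saved to a local file such as
\f5instance.json\fR (a hypothetical name), the flag could be passed alongside
the required flags shown in the examples above:

.RS 2m
$ gcloud ai model\-monitoring\-jobs create \-\-project=example \e
    \-\-region=us\-central1 \-\-display\-name=my_monitoring_job \e
    \-\-emails=a@gmail.com,b@gmail.com \-\-endpoint=123 \e
    \-\-prediction\-sampling\-rate=0.2 \e
    \-\-sample\-predict\-request=instance.json
.RE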

.TP 2m

Key resource \- The Cloud KMS (Key Management Service) cryptokey that will be
used to protect the model deployment monitoring job. The 'Vertex AI Service
Agent' service account must hold permission 'Cloud KMS CryptoKey
Encrypter/Decrypter'. The arguments in this group can be used to specify the
attributes of this resource.


.RS 2m
.TP 2m
\fB\-\-kms\-key\fR=\fIKMS_KEY\fR

ID of the key or fully qualified identifier for the key.

To set the \f5kms\-key\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5\-\-kms\-key\fR on the command line.
.RE
.sp

This flag argument must be specified if any of the other arguments in this group
are specified.

.TP 2m
\fB\-\-kms\-keyring\fR=\fIKMS_KEYRING\fR

The KMS keyring of the key.

To set the \f5kms\-keyring\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5\-\-kms\-key\fR on the command line with a fully
specified name;
.IP "\(bu" 2m
provide the argument \f5\-\-kms\-keyring\fR on the command line.
.RE
.sp

.TP 2m
\fB\-\-kms\-location\fR=\fIKMS_LOCATION\fR

The Google Cloud location for the key.

To set the \f5kms\-location\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5\-\-kms\-key\fR on the command line with a fully
specified name;
.IP "\(bu" 2m
provide the argument \f5\-\-kms\-location\fR on the command line.
.RE
.sp

.TP 2m
\fB\-\-kms\-project\fR=\fIKMS_PROJECT\fR

The Google Cloud project for the key.

To set the \f5kms\-project\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5\-\-kms\-key\fR on the command line with a fully
specified name;
.IP "\(bu" 2m
provide the argument \f5\-\-kms\-project\fR on the command line;
.IP "\(bu" 2m
set the property \f5core/project\fR.
.RE
.sp

.RE
.sp
.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-monitoring\-config\-from\-file\fR=\fIMONITORING_CONFIG_FROM_FILE\fR

Path to the model monitoring objective config file. This file should be a YAML
document containing a \f5ModelDeploymentMonitoringJob\fR
(https://cloud.google.com/vertex\-ai/docs/reference/rest/v1beta1/projects.locations.modelDeploymentMonitoringJobs#ModelDeploymentMonitoringJob),
but only the ModelDeploymentMonitoringObjectiveConfig needs to be configured.

Note: Set only one of \-\-monitoring\-config\-from\-file and the other
objective config flags, such as \-\-feature\-thresholds and
\-\-feature\-attribution\-thresholds.

Example(YAML):

.RS 2m
modelDeploymentMonitoringObjectiveConfigs:
\- deployedModelId: '5251549009234886656'
  objectiveConfig:
    trainingDataset:
      dataFormat: csv
      gcsSource:
        uris:
        \- gs://fake\-bucket/training_data.csv
      targetField: price
    trainingPredictionSkewDetectionConfig:
      skewThresholds:
        feat1:
          value: 0.9
        feat2:
          value: 0.8
\- deployedModelId: '2945706000021192704'
  objectiveConfig:
    predictionDriftDetectionConfig:
      driftThresholds:
        feat1:
          value: 0.3
        feat2:
          value: 0.4
.RE

.TP 2m
\fB\-\-feature\-attribution\-thresholds\fR=[\fIKEY\fR=\fIVALUE\fR,...]

List of feature\-attribution score threshold KEY=VALUE pairs. These apply to
all the deployed models under the endpoint; to specify different thresholds
for different deployed models, use \-\-monitoring\-config\-from\-file or call
the API directly. If only the feature name is set, the threshold defaults to
0.3.

For example: \f5feature\-attribution\-thresholds=feat1=0.1,feat2,feat3=0.2\fR

.TP 2m
\fB\-\-feature\-thresholds\fR=[\fIKEY\fR=\fIVALUE\fR,...]

List of feature threshold KEY=VALUE pairs. These apply to all the deployed
models under the endpoint; to specify different thresholds for different
deployed models, use \-\-monitoring\-config\-from\-file or call the API
directly. If only the feature name is set, the threshold defaults to 0.3.

For example: \f5\-\-feature\-thresholds=feat1=0.1,feat2,feat3=0.2\fR

.TP 2m
\fB\-\-target\-field\fR=\fITARGET_FIELD\fR

Name of the target field that the model is meant to predict. Must be provided
if you want training\-prediction skew detection.

.TP 2m
\fB\-\-training\-sampling\-rate\fR=\fITRAINING_SAMPLING_RATE\fR; default=1.0

Training dataset sampling rate.

.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-bigquery\-uri\fR=\fIBIGQUERY_URI\fR

BigQuery table of the unmanaged Dataset used to train this Model. For example:
\f5bq://projectId.bqDatasetId.bqTableId\fR.

.TP 2m
\fB\-\-dataset\fR=\fIDATASET\fR

ID of the Vertex AI Dataset used to train this Model.

.TP 2m
\fB\-\-data\-format\fR=\fIDATA_FORMAT\fR

Data format of the dataset; must be provided if the input is from Google Cloud
Storage. The possible formats are: tf\-record, csv.

.TP 2m
\fB\-\-gcs\-uris\fR=[\fIGCS_URIS\fR,...]

Comma\-separated Google Cloud Storage URIs of the unmanaged datasets used to
train this Model.
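
For example, with a hypothetical bucket and file names:
\f5\-\-gcs\-uris=gs://example\-bucket/train1.csv,gs://example\-bucket/train2.csv\fR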


.RE
.RE
.RE
.sp

.SH "GCLOUD WIDE FLAGS"

These flags are available to all commands: \-\-access\-token\-file, \-\-account,
\-\-billing\-project, \-\-configuration, \-\-flags\-file, \-\-flatten,
\-\-format, \-\-help, \-\-impersonate\-service\-account, \-\-log\-http,
\-\-project, \-\-quiet, \-\-trace\-token, \-\-user\-output\-enabled,
\-\-verbosity.

Run \fB$ gcloud help\fR for details.



.SH "NOTES"

These variants are also available:

.RS 2m
$ gcloud alpha ai model\-monitoring\-jobs create
.RE

.RS 2m
$ gcloud beta ai model\-monitoring\-jobs create
.RE