.TH "GCLOUD_DATASTREAM_STREAMS_UPDATE" 1



.SH "NAME"
.HP
gcloud datastream streams update \- updates a Datastream stream



.SH "SYNOPSIS"
.HP
\f5gcloud datastream streams update\fR (\fISTREAM\fR\ :\ \fB\-\-location\fR=\fILOCATION\fR) [\fB\-\-display\-name\fR=\fIDISPLAY_NAME\fR] [\fB\-\-state\fR=\fISTATE\fR] [\fB\-\-update\-labels\fR=[\fIKEY\fR=\fIVALUE\fR,...]] [\fB\-\-update\-mask\fR=\fIUPDATE_MASK\fR] [\fB\-\-backfill\-none\fR\ |\ \fB\-\-backfill\-all\fR\ \fB\-\-mongodb\-excluded\-objects\fR=\fIMONGODB_EXCLUDED_OBJECTS\fR\ |\ \fB\-\-mysql\-excluded\-objects\fR=\fIMYSQL_EXCLUDED_OBJECTS\fR\ |\ \fB\-\-oracle\-excluded\-objects\fR=\fIORACLE_EXCLUDED_OBJECTS\fR\ |\ \fB\-\-postgresql\-excluded\-objects\fR=\fIPOSTGRESQL_EXCLUDED_OBJECTS\fR\ |\ \fB\-\-salesforce\-excluded\-objects\fR=\fISALESFORCE_EXCLUDED_OBJECTS\fR\ |\ \fB\-\-sqlserver\-excluded\-objects\fR=\fISQLSERVER_EXCLUDED_OBJECTS\fR] [\fB\-\-clear\-labels\fR\ |\ \fB\-\-remove\-labels\fR=[\fIKEY\fR,...]] [\fB\-\-destination\fR=\fIDESTINATION\fR\ \fB\-\-bigquery\-destination\-config\fR=\fIBIGQUERY_DESTINATION_CONFIG\fR\ |\ \fB\-\-gcs\-destination\-config\fR=\fIGCS_DESTINATION_CONFIG\fR] [\fB\-\-force\fR\ |\ \fB\-\-validate\-only\fR] [\fB\-\-source\fR=\fISOURCE\fR\ \fB\-\-mongodb\-source\-config\fR=\fIMONGODB_SOURCE_CONFIG\fR\ |\ \fB\-\-mysql\-source\-config\fR=\fIMYSQL_SOURCE_CONFIG\fR\ |\ \fB\-\-oracle\-source\-config\fR=\fIORACLE_SOURCE_CONFIG\fR\ |\ \fB\-\-postgresql\-source\-config\fR=\fIPOSTGRESQL_SOURCE_CONFIG\fR\ |\ \fB\-\-salesforce\-source\-config\fR=\fISALESFORCE_SOURCE_CONFIG\fR\ |\ \fB\-\-sqlserver\-source\-config\fR=\fISQLSERVER_SOURCE_CONFIG\fR] [\fIGCLOUD_WIDE_FLAG\ ...\fR]



.SH "DESCRIPTION"

Update a Datastream stream. If successful, the response body contains a newly
created instance of Operation. To get the operation result, call describe on
the returned OPERATION.



.SH "EXAMPLES"

To update a stream with a new source and new display name:

.RS 2m
$ gcloud datastream streams update STREAM \-\-location=us\-central1 \e
  \-\-display\-name=my\-stream \-\-source=source \e
  \-\-update\-mask=display_name,source
.RE

To update a stream's state to RUNNING:

.RS 2m
$ gcloud datastream streams update STREAM \-\-location=us\-central1 \e
  \-\-state=RUNNING \-\-update\-mask=state
.RE

To update a stream's Oracle source config:

.RS 2m
$ gcloud datastream streams update STREAM \-\-location=us\-central1 \e
  \-\-oracle\-source\-config=good_oracle_cp.json \e
  \-\-update\-mask=oracle_source_config.include_objects
.RE



.SH "POSITIONAL ARGUMENTS"

.RS 2m
.TP 2m

Stream resource \- The stream to update. The arguments in this group can be used
to specify the attributes of this resource. (NOTE) Some attributes are not given
arguments in this group but can be set in other ways.

To set the \f5project\fR attribute:
.RS 2m
.IP "\(em" 2m
provide the argument \f5stream\fR on the command line with a fully specified
name;
.IP "\(em" 2m
provide the argument \f5\-\-project\fR on the command line;
.IP "\(em" 2m
set the property \f5core/project\fR.
.RE
.sp

This must be specified.


.RS 2m
.TP 2m
\fISTREAM\fR

ID of the stream or fully qualified identifier for the stream.

To set the \f5stream\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5stream\fR on the command line.
.RE
.sp

This positional argument must be specified if any of the other arguments in this
group are specified.
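
For example, a fully qualified identifier takes the form shown below (the
segment values are placeholders):

.RS 2m
projects/PROJECT/locations/LOCATION/streams/STREAM
.RE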

.TP 2m
\fB\-\-location\fR=\fILOCATION\fR

The Cloud location for the stream.

To set the \f5location\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5stream\fR on the command line with a fully specified
name;
.IP "\(bu" 2m
provide the argument \f5\-\-location\fR on the command line.
.RE
.sp


.RE
.RE
.sp

.SH "FLAGS"

.RS 2m
.TP 2m
\fB\-\-display\-name\fR=\fIDISPLAY_NAME\fR

Friendly name for the stream.

.TP 2m
\fB\-\-state\fR=\fISTATE\fR

Stream state; can be set to "RUNNING" or "PAUSED".

.TP 2m
\fB\-\-update\-labels\fR=[\fIKEY\fR=\fIVALUE\fR,...]

List of label KEY=VALUE pairs to update. If a label exists, its value is
modified. Otherwise, a new label is created.

Keys must start with a lowercase character and contain only hyphens (\f5\-\fR),
underscores (\f5_\fR), lowercase characters, and numbers. Values must contain
only hyphens (\f5\-\fR), underscores (\f5_\fR), lowercase characters, and
numbers.
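
For example, to set or update two labels on a stream (an illustrative
invocation; the label keys and values are placeholders):

.RS 2m
$ gcloud datastream streams update STREAM \-\-location=us\-central1 \e
  \-\-update\-labels=env=prod,team=data
.RE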

.TP 2m
\fB\-\-update\-mask\fR=\fIUPDATE_MASK\fR

Used to specify the fields to be overwritten in the stream resource by the
update. If the update mask is used, then a field will be overwritten only if it
is in the mask. If the user does not provide a mask then all fields will be
overwritten. This is a comma\-separated list of fully qualified names of fields,
written as snake_case or camelCase. Example: "display_name,
source_config.oracle_source_config".

.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-backfill\-none\fR

Do not automatically backfill any objects. This flag is equivalent to selecting
the Manual backfill type in the Google Cloud console.

.TP 2m
\fB\-\-backfill\-all\fR

Automatically backfill objects included in the stream source configuration.
Specific objects can be excluded. This flag is equivalent to selecting the
Automatic backfill type in the Google Cloud console.
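
For example, to switch a stream to automatic backfill while excluding the
objects listed in a local file (an illustrative invocation; the file name is a
placeholder and the \f5backfill_all\fR mask field is an assumption):

.RS 2m
$ gcloud datastream streams update STREAM \-\-location=us\-central1 \e
  \-\-backfill\-all \-\-mysql\-excluded\-objects=excluded_objects.json \e
  \-\-update\-mask=backfill_all
.RE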

.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-mongodb\-excluded\-objects\fR=\fIMONGODB_EXCLUDED_OBJECTS\fR

Path to a YAML (or JSON) file containing the MongoDB data sources to avoid
backfilling.

The JSON file is formatted as follows, with camelCase field naming:

.RS 2m
  {
    "databases": [
      {
        "database":"sample_database",
        "collections": [
          {
            "collection": "sample_collection",
            "fields": [
              {
                "field": "sample_field",
              }
            ]
          }
        ]
      }
    ]
  }
.RE
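
An equivalent YAML representation of the same file (illustrative; the
database, collection, and field names are placeholders):

.RS 2m
databases:
\- database: sample_database
  collections:
  \- collection: sample_collection
    fields:
    \- field: sample_field
.RE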

.TP 2m
\fB\-\-mysql\-excluded\-objects\fR=\fIMYSQL_EXCLUDED_OBJECTS\fR

Path to a YAML (or JSON) file containing the MySQL data sources to avoid
backfilling.

The JSON file is formatted as follows, with camelCase field naming:

.RS 2m
  {
    "mysqlDatabases": [
      {
        "database":"sample_database",
        "mysqlTables": [
          {
            "table": "sample_table",
            "mysqlColumns": [
              {
                "column": "sample_column",
              }
              ]
          }
        ]
      }
    ]
  }
.RE

.TP 2m
\fB\-\-oracle\-excluded\-objects\fR=\fIORACLE_EXCLUDED_OBJECTS\fR

Path to a YAML (or JSON) file containing the Oracle data sources to avoid
backfilling.

The JSON file is formatted as follows, with camelCase field naming:

.RS 2m
  {
    "oracleSchemas": [
      {
        "schema": "SAMPLE",
        "oracleTables": [
          {
            "table": "SAMPLE_TABLE",
            "oracleColumns": [
              {
                "column": "COL",
              }
            ]
          }
        ]
      }
    ]
  }
.RE

.TP 2m
\fB\-\-postgresql\-excluded\-objects\fR=\fIPOSTGRESQL_EXCLUDED_OBJECTS\fR

Path to a YAML (or JSON) file containing the PostgreSQL data sources to avoid
backfilling.

The JSON file is formatted as follows, with camelCase field naming:

.RS 2m
  {
    "postgresqlSchemas": [
      {
        "schema": "SAMPLE",
        "postgresqlTables": [
          {
            "table": "SAMPLE_TABLE",
            "postgresqlColumns": [
              {
                "column": "COL",
              }
            ]
          }
        ]
      }
    ]
  }
.RE

.TP 2m
\fB\-\-salesforce\-excluded\-objects\fR=\fISALESFORCE_EXCLUDED_OBJECTS\fR

Path to a YAML (or JSON) file containing the Salesforce data sources to avoid
backfilling.

The JSON file is formatted as follows, with camelCase field naming:

.RS 2m
  {
    "objects": [
      {
        "objectName": "SAMPLE",
      },
      {
        "objectName": "SAMPLE2",
      }
    ]
  }
.RE

.TP 2m
\fB\-\-sqlserver\-excluded\-objects\fR=\fISQLSERVER_EXCLUDED_OBJECTS\fR

Path to a YAML (or JSON) file containing the SQL Server data sources to avoid
backfilling.

The JSON file is formatted as follows, with camelCase field naming:

.RS 2m
  {
    "schemas": [
      {
        "schema": "SAMPLE",
        "tables": [
          {
            "table": "SAMPLE_TABLE",
            "columns": [
              {
                "column": "COL",
              }
            ]
          }
        ]
      }
    ]
  }
.RE

.RE
.RE
.sp
.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-clear\-labels\fR

Remove all labels. If \f5\-\-update\-labels\fR is also specified then
\f5\-\-clear\-labels\fR is applied first.

For example, to remove all labels:

.RS 2m
$ gcloud datastream streams update \-\-clear\-labels
.RE

To remove all existing labels and create two new labels, \f5\fIfoo\fR\fR and
\f5\fIbaz\fR\fR:

.RS 2m
$ gcloud datastream streams update \-\-clear\-labels \e
  \-\-update\-labels foo=bar,baz=qux
.RE

.TP 2m
\fB\-\-remove\-labels\fR=[\fIKEY\fR,...]

List of label keys to remove. If a label does not exist it is silently ignored.
If \f5\-\-update\-labels\fR is also specified then \f5\-\-update\-labels\fR is
applied first.
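
For example, to remove the labels \f5\fIfoo\fR\fR and \f5\fIbaz\fR\fR
(mirroring the \f5\-\-clear\-labels\fR examples above):

.RS 2m
$ gcloud datastream streams update \-\-remove\-labels=foo,baz
.RE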

.RE
.sp
.TP 2m

Connection profile resource \- Resource ID of the destination connection
profile. This represents a Cloud resource. (NOTE) Some attributes are not given
arguments in this group but can be set in other ways.

To set the \f5project\fR attribute:
.RS 2m
.IP "\(em" 2m
provide the argument \f5\-\-destination\fR on the command line with a fully
specified name;
.IP "\(em" 2m
provide the argument \f5\-\-project\fR on the command line;
.IP "\(em" 2m
set the property \f5core/project\fR.
.RE
.sp

To set the \f5location\fR attribute:
.RS 2m
.IP "\(em" 2m
provide the argument \f5\-\-destination\fR on the command line with a fully
specified name;
.IP "\(em" 2m
provide the argument \f5\-\-location\fR on the command line.
.RE
.sp


.RS 2m
.TP 2m
\fB\-\-destination\fR=\fIDESTINATION\fR

ID of the connection_profile or fully qualified identifier for the
connection_profile.

To set the \f5connection_profile\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5\-\-destination\fR on the command line.
.RE
.sp

.RE
.sp
.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-bigquery\-destination\-config\fR=\fIBIGQUERY_DESTINATION_CONFIG\fR

Path to a YAML (or JSON) file containing the configuration for Google BigQuery
Destination Config.

The YAML (or JSON) file should be formatted as follows:

BigQuery configuration with source hierarchy datasets and merge mode (merge
mode is the default):

.RS 2m
{
  "sourceHierarchyDatasets": {
    "datasetTemplate": {
      "location": "us\-central1",
      "datasetIdPrefix": "my_prefix",
      "kmsKeyName": "projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{cryptoKey}"
    }
  },
  "merge": {}
  "dataFreshness": "3600s"
}
.RE

BigQuery configuration with source hierarchy datasets and append\-only mode:

.RS 2m
{
  "sourceHierarchyDatasets": {
    "datasetTemplate": {
      "location": "us\-central1",
      "datasetIdPrefix": "my_prefix",
      "kmsKeyName": "projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{cryptoKey}"
    }
  },
  "appendOnly": {}
}

.RE

BigQuery configuration with single target dataset and merge mode:

.RS 2m
{
  "singleTargetDataset": {
    "datasetId": "projectId:my_dataset"
  },
  "merge": {}
  "dataFreshness": "3600s"
}
.RE

BigQuery configuration with a BigLake table configuration:

.RS 2m
{
  "singleTargetDataset": {
    "datasetId": "projectId:datasetId"
  },
  "appendOnly": {},
  "blmtConfig": {
    "bucket": "bucketName",
    "tableFormat": "ICEBERG",
    "fileFormat": "PARQUET",
    "connectionName": "projectId.region.connectionName",
    "rootPath": "/root"
  }
}

.RE
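
For example, to apply a new BigQuery destination configuration from a local
file (an illustrative invocation; the file name is a placeholder and the mask
field shown is an assumption):

.RS 2m
$ gcloud datastream streams update STREAM \-\-location=us\-central1 \e
  \-\-bigquery\-destination\-config=bq_config.json \e
  \-\-update\-mask=destination_config.bigquery_destination_config
.RE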

.TP 2m
\fB\-\-gcs\-destination\-config\fR=\fIGCS_DESTINATION_CONFIG\fR

Path to a YAML (or JSON) file containing the configuration for Google Cloud
Storage Destination Config.

The JSON file is formatted as follows:

.RS 2m
{
  "path": "some/path",
  "fileRotationMb": 5,
  "fileRotationInterval": "15s",
  "avroFileFormat": {}
}
.RE

.RE
.sp
.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-force\fR

Update the stream without validating it.

.TP 2m
\fB\-\-validate\-only\fR

Only validate the stream, but do not update any resources. The default is false.
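
For example, to check whether a display name change would pass validation
without applying it (an illustrative invocation):

.RS 2m
$ gcloud datastream streams update STREAM \-\-location=us\-central1 \e
  \-\-display\-name=my\-stream \-\-update\-mask=display_name \e
  \-\-validate\-only
.RE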

.RE
.sp
.TP 2m

Connection profile resource \- Resource ID of the source connection profile.
This represents a Cloud resource. (NOTE) Some attributes are not given arguments
in this group but can be set in other ways.

To set the \f5project\fR attribute:
.RS 2m
.IP "\(em" 2m
provide the argument \f5\-\-source\fR on the command line with a fully specified
name;
.IP "\(em" 2m
provide the argument \f5\-\-project\fR on the command line;
.IP "\(em" 2m
set the property \f5core/project\fR.
.RE
.sp

To set the \f5location\fR attribute:
.RS 2m
.IP "\(em" 2m
provide the argument \f5\-\-source\fR on the command line with a fully specified
name;
.IP "\(em" 2m
provide the argument \f5\-\-location\fR on the command line.
.RE
.sp


.RS 2m
.TP 2m
\fB\-\-source\fR=\fISOURCE\fR

ID of the connection_profile or fully qualified identifier for the
connection_profile.

To set the \f5connection_profile\fR attribute:
.RS 2m
.IP "\(bu" 2m
provide the argument \f5\-\-source\fR on the command line.
.RE
.sp

.RE
.sp
.TP 2m

At most one of these can be specified:


.RS 2m
.TP 2m
\fB\-\-mongodb\-source\-config\fR=\fIMONGODB_SOURCE_CONFIG\fR

Path to a YAML (or JSON) file containing the configuration for MongoDB Source
Config.

The JSON file is formatted as follows, with camelCase field naming:

.RS 2m
  {
    "includeObjects": {},
    "excludeObjects": {
      "databases": [
        {
          "database": "sampleDb",
          "collections": [
            {
              "collection": "sampleCollection",
              "fields": [
                {
                  "field": "SAMPLE_FIELD",
                }
              ]
            }
          ]
        }
      ]
    }
  }
.RE

.TP 2m
\fB\-\-mysql\-source\-config\fR=\fIMYSQL_SOURCE_CONFIG\fR

Path to a YAML (or JSON) file containing the configuration for MySQL Source
Config.

The JSON file is formatted as follows, with camelCase field naming:

.RS 2m
  {
    "includeObjects": {},
    "excludeObjects": {
      "mysqlDatabases": [
        {
          "database": "sample_database",
          "mysqlTables": [
            {
              "table": "sample_table",
              "mysqlColumns": [
                {
                  "column": "sample_column"
                }
              ]
            }
          ]
        }
      ]
    }
  }
.RE
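
For example, to apply this MySQL source configuration from a local file,
mirroring the Oracle example in the EXAMPLES section (an illustrative
invocation; the file name is a placeholder and the mask field is an
assumption):

.RS 2m
$ gcloud datastream streams update STREAM \-\-location=us\-central1 \e
  \-\-mysql\-source\-config=mysql_config.json \e
  \-\-update\-mask=mysql_source_config.include_objects
.RE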

.TP 2m
\fB\-\-oracle\-source\-config\fR=\fIORACLE_SOURCE_CONFIG\fR

Path to a YAML (or JSON) file containing the configuration for Oracle Source
Config.

The JSON file is formatted as follows, with camelCase field naming:

.RS 2m
  {
    "includeObjects": {},
    "excludeObjects": {
      "oracleSchemas": [
        {
          "schema": "SAMPLE",
          "oracleTables": [
            {
              "table": "SAMPLE_TABLE",
              "oracleColumns": [
                {
                  "column": "COL",
                }
              ]
            }
          ]
        }
      ]
    }
  }
.RE

.TP 2m
\fB\-\-postgresql\-source\-config\fR=\fIPOSTGRESQL_SOURCE_CONFIG\fR

Path to a YAML (or JSON) file containing the configuration for PostgreSQL Source
Config.

The JSON file is formatted as follows, with camelCase field naming:

.RS 2m
  {
    "includeObjects": {},
    "excludeObjects": {
      "postgresqlSchemas": [
        {
          "schema": "SAMPLE",
          "postgresqlTables": [
            {
              "table": "SAMPLE_TABLE",
              "postgresqlColumns": [
                {
                  "column": "COL",
                }
              ]
            }
          ]
        }
      ]
    },
    "replicationSlot": "SAMPLE_REPLICATION_SLOT",
    "publication": "SAMPLE_PUBLICATION"
  }
.RE

.TP 2m
\fB\-\-salesforce\-source\-config\fR=\fISALESFORCE_SOURCE_CONFIG\fR

Path to a YAML (or JSON) file containing the configuration for Salesforce Source
Config.

The JSON file is formatted as follows, with camelCase field naming:

.RS 2m
  {
    "pollingInterval": "3000s",
    "includeObjects": {},
    "excludeObjects": {
      "objects": [
        {
          "objectName": "SAMPLE",
          "fields": [
            {
              "fieldName": "SAMPLE_FIELD",
            }
          ]
        }
      ]
    }
  }
.RE

.TP 2m
\fB\-\-sqlserver\-source\-config\fR=\fISQLSERVER_SOURCE_CONFIG\fR

Path to a YAML (or JSON) file containing the configuration for SQL Server Source
Config.

The JSON file is formatted as follows, with camelCase field naming:

.RS 2m
  {
    "includeObjects": {},
    "excludeObjects": {
      "schemas": [
        {
          "schema": "SAMPLE",
          "tables": [
            {
              "table": "SAMPLE_TABLE",
              "columns": [
                {
                  "column": "COL",
                }
              ]
            }
          ]
        }
      ]
    },
    "maxConcurrentCdcTasks": 2,
    "maxConcurrentBackfillTasks": 10,
    "transactionLogs": {}  # Or changeTables
  }
.RE


.RE
.RE
.sp

.SH "GCLOUD WIDE FLAGS"

These flags are available to all commands: \-\-access\-token\-file, \-\-account,
\-\-billing\-project, \-\-configuration, \-\-flags\-file, \-\-flatten,
\-\-format, \-\-help, \-\-impersonate\-service\-account, \-\-log\-http,
\-\-project, \-\-quiet, \-\-trace\-token, \-\-user\-output\-enabled,
\-\-verbosity.

Run \fB$ gcloud help\fR for details.



.SH "NOTES"

This variant is also available:

.RS 2m
$ gcloud beta datastream streams update
.RE