File: //snap/google-cloud-cli/current/lib/surface/healthcare/fhir_stores/export/bq.yaml
- release_tracks: [ALPHA, BETA, GA]

  help_text:
    brief: Export Cloud Healthcare API FHIR resources to BigQuery.
    description: Export Cloud Healthcare API FHIR resources to BigQuery.
    examples: |
      To export the fhir-store 'test-fhir-store' to the BigQuery dataset 'bqdataset', run:

        $ {command} test-fhir-store --bq-dataset=bq://my-project.bqdataset --dataset=test-dataset

      To perform the same export, but with the 'ANALYTICS' schema and a recursive structure depth of 3, run:

        $ {command} test-fhir-store --bq-dataset=bq://my-project.bqdataset --dataset=test-dataset --schema-type=analytics --recursive-depth=3

  request:
    collection: healthcare.projects.locations.datasets.fhirStores
    method: export
    ALPHA:
      api_version: v1alpha2
    BETA:
      api_version: v1beta1
    GA:
      api_version: v1
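  # For reference, `method: export` on this collection corresponds to the
  # fhirStores.export REST method; on the GA track the generated request is
  # roughly (resource IDs are placeholders):
  #
  #   POST https://healthcare.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/fhirStores/FHIR_STORE_ID:export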

  arguments:
    resource:
      help_text: Cloud Healthcare API FHIR store to export resources from.
      spec: !REF googlecloudsdk.command_lib.healthcare.resources:fhir_store

    params:
    - arg_name: bq-dataset
      api_field: exportResourcesRequest.bigqueryDestination.datasetUri
      required: true
      help_text: |
        BigQuery dataset that houses the BigQuery tables.
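    # As in the examples above, the dataset URI takes the form
    # bq://PROJECT_ID.DATASET_ID (placeholders shown), for instance:
    #
    #   $ {command} test-fhir-store --dataset=test-dataset \
    #       --bq-dataset=bq://my-project.bqdataset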
    - arg_name: schema-type
      release_tracks: [ALPHA, BETA]
      api_field: exportResourcesRequest.bigqueryDestination.schemaConfig.schemaType
      help_text: |
        Specifies the output schema type. If unspecified, the default
        is `LOSSLESS`.
      type: str
      choices:
      - arg_value: analytics
        enum_value: ANALYTICS
        help_text: |
          Analytics schema defined by the FHIR community.
          See https://github.com/rbrush/sql-on-fhir/blob/master/sql-on-fhir.md.
      - arg_value: analytics_v2
        enum_value: ANALYTICS_V2
        help_text: |
          Analytics V2, similar to the schema defined by the FHIR community,
          with added support for extensions with one or more occurrences and
          for contained resources in stringified JSON.
      - arg_value: lossless
        enum_value: LOSSLESS
        help_text: |
          Schema generated from original FHIR data.
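    # Example (values illustrative; flags as defined above): requesting the
    # lossless schema on the ALPHA/BETA tracks:
    #
    #   $ {command} test-fhir-store --dataset=test-dataset \
    #       --bq-dataset=bq://my-project.bqdataset --schema-type=lossless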
    - arg_name: schema-type
      release_tracks: [GA]
      api_field: exportResourcesRequest.bigqueryDestination.schemaConfig.schemaType
      required: true
      help_text: |
        Specifies the output schema type.
      type: str
      choices:
      - arg_value: analytics
        enum_value: ANALYTICS
        help_text: |
          Analytics schema defined by the FHIR community.
          See https://github.com/rbrush/sql-on-fhir/blob/master/sql-on-fhir.md.
      - arg_value: analytics_v2
        enum_value: ANALYTICS_V2
        help_text: |
          Analytics V2, similar to the Analytics schema type, with added
          support for extensions with one or more occurrences and for
          contained resources represented as stringified JSON.
    - arg_name: recursive-depth
      api_field: exportResourcesRequest.bigqueryDestination.schemaConfig.recursiveStructureDepth
      help_text: |
        The depth for all recursive structures in the output analytics schema. For
        example, `concept` in the CodeSystem resource is a recursive structure; when the
        depth is 2, the CodeSystem table will have a column called `concept.concept` but
        not `concept.concept.concept`. If not specified or set to 0, the server will use
        the default value 2.
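    # Worked example following the help text above: with --recursive-depth=3
    # the CodeSystem table would include concept.concept.concept but nothing
    # deeper, e.g.:
    #
    #   $ {command} test-fhir-store --dataset=test-dataset \
    #       --bq-dataset=bq://my-project.bqdataset --schema-type=analytics \
    #       --recursive-depth=3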
    - arg_name: write-disposition
      api_field: exportResourcesRequest.bigqueryDestination.writeDisposition
      help_text: |
        Determines whether existing tables in the destination dataset are overwritten
        or appended to.
      type: str
      choices:
      - arg_value: write-empty
        enum_value: WRITE_EMPTY
        help_text: |
          Only export data if the destination tables are empty.
      - arg_value: write-truncate
        enum_value: WRITE_TRUNCATE
        help_text: |
          Erase all existing data in the tables before writing the instances.
      - arg_value: write-append
        enum_value: WRITE_APPEND
        help_text: |
          Append data to the existing tables.
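    # Example (values illustrative): erase existing table contents before
    # exporting:
    #
    #   $ {command} test-fhir-store --dataset=test-dataset \
    #       --bq-dataset=bq://my-project.bqdataset --write-disposition=write-truncate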
    - arg_name: resource-type
      api_field: exportResourcesRequest._type
      help_text: |
        String of comma-delimited FHIR resource types. If provided, only resources of
        the specified resource type(s) are exported.
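    # Example (resource types chosen for illustration): export only Patient and
    # Observation resources:
    #
    #   $ {command} test-fhir-store --dataset=test-dataset \
    #       --bq-dataset=bq://my-project.bqdataset --resource-type=Patient,Observation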
    - arg_name: since
      api_field: exportResourcesRequest._since
      help_text: |
        If provided, only resources updated after this time are exported. The time uses
        the format YYYY-MM-DDThh:mm:ss.sss+zz:zz. For example, `2015-02-07T13:28:17.239+02:00`
        or `2017-01-01T00:00:00Z`. The time must be specified to the second and include a
        time zone.
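    # Example (timestamp taken from the help text above): export only resources
    # updated after midnight UTC on 2017-01-01:
    #
    #   $ {command} test-fhir-store --dataset=test-dataset \
    #       --bq-dataset=bq://my-project.bqdataset --since=2017-01-01T00:00:00Z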

  async:
    collection: healthcare.projects.locations.datasets.operations
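  # The export runs as a long-running operation against the collection above. A
  # sketch of checking it afterwards with the existing operations command
  # (operation ID and location are placeholders):
  #
  #   $ gcloud healthcare operations describe OPERATION_ID \
  #       --dataset=test-dataset --location=LOCATION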