upbound/provider-gcp@v0.41.2
Job
bigquery.gcp.upbound.io

Job is the Schema for the Jobs API. Jobs are actions that BigQuery runs on your behalf to load data, export data, query data, or copy data.

Type: CRD
Group: bigquery.gcp.upbound.io
Version: v1beta1

apiVersion: bigquery.gcp.upbound.io/v1beta1
kind: Job
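As a quick orientation before the field-by-field reference, a minimal manifest for this kind might look like the following sketch; the metadata name, job ID, label, and query text are illustrative, not taken from this page:

```yaml
apiVersion: bigquery.gcp.upbound.io/v1beta1
kind: Job
metadata:
  name: example-job            # hypothetical resource name
spec:
  forProvider:
    jobId: example-query-job   # hypothetical job ID
    labels:
      team: analytics          # hypothetical label
    query:
      - query: "SELECT 1"      # trivial query, for illustration only
```

A BigQuery job carries exactly one kind of work, so in practice you populate only one of the copy, extract, load, or query blocks described below.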

API Documentation

apiVersion (string)
kind (string)
metadata (object)
spec (object): JobSpec defines the desired state of Job.
  forProvider (object, required): No description provided.
    copy (array): Copies a table. Structure is documented below.
      destinationEncryptionConfiguration (array): Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.
        kmsKeyNameRef (object): Reference to a CryptoKey in kms to populate kmsKeyName.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        kmsKeyNameSelector (object): Selector for a CryptoKey in kms to populate kmsKeyName.
          policy (object): Policies for selection.
            resolve (string)
      destinationTable (array): The destination table. Structure is documented below.
        datasetId (string)
        datasetIdRef (object): Reference to a Dataset in bigquery to populate datasetId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        datasetIdSelector (object): Selector for a Dataset in bigquery to populate datasetId.
          policy (object): Policies for selection.
            resolve (string)
        projectId (string)
        tableId (string)
        tableIdRef (object): Reference to a Table in bigquery to populate tableId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        tableIdSelector (object): Selector for a Table in bigquery to populate tableId.
          policy (object): Policies for selection.
            resolve (string)
      sourceTables (array): Source tables to copy. Structure is documented below.
        datasetId (string)
        projectId (string)
        tableId (string)
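To illustrate the copy block above, here is a sketch showing both literal table IDs and the cross-resource reference fields; every name in it is hypothetical:

```yaml
spec:
  forProvider:
    copy:
      - sourceTables:
          - projectId: my-project       # hypothetical project
            datasetId: source_dataset   # hypothetical dataset
            tableId: source_table       # hypothetical table
        destinationTable:
          - datasetIdRef:
              name: my-dataset          # name of a managed Dataset resource (hypothetical)
            tableIdRef:
              name: my-table            # name of a managed Table resource (hypothetical)
```

The Ref variants let the provider resolve IDs from other managed resources in the cluster instead of hard-coding them.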
    extract (array): Configures an extract job. Structure is documented below.
      destinationUris (array): A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written.
      sourceModel (array): A reference to the model being exported. Structure is documented below.
        datasetId (string)
        modelId (string)
        projectId (string)
      sourceTable (array): A reference to the table being exported. Structure is documented below.
        datasetId (string)
        datasetIdRef (object): Reference to a Dataset in bigquery to populate datasetId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        datasetIdSelector (object): Selector for a Dataset in bigquery to populate datasetId.
          policy (object): Policies for selection.
            resolve (string)
        projectId (string)
        tableId (string)
        tableIdRef (object): Reference to a Table in bigquery to populate tableId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        tableIdSelector (object): Selector for a Table in bigquery to populate tableId.
          policy (object): Policies for selection.
            resolve (string)
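A sketch of the extract block above, exporting a table to Cloud Storage; the bucket, project, dataset, and table names are hypothetical:

```yaml
spec:
  forProvider:
    extract:
      - destinationUris:
          - gs://my-bucket/exports/data-*.csv   # hypothetical bucket; '*' shards large exports
        sourceTable:
          - projectId: my-project               # hypothetical
            datasetId: my_dataset
            tableId: my_table
```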
    jobId (string)
    labels (object)
    load (array): Configures a load job. Structure is documented below.
      autodetect (boolean)
      destinationEncryptionConfiguration (array): Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.
      destinationTable (array): The destination table to load the data into. Structure is documented below.
        datasetId (string)
        datasetIdRef (object): Reference to a Dataset in bigquery to populate datasetId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        datasetIdSelector (object): Selector for a Dataset in bigquery to populate datasetId.
          policy (object): Policies for selection.
            resolve (string)
        projectId (string)
        tableId (string)
        tableIdRef (object): Reference to a Table in bigquery to populate tableId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        tableIdSelector (object): Selector for a Table in bigquery to populate tableId.
          policy (object): Policies for selection.
            resolve (string)
      encoding (string)
      parquetOptions (array): Parquet options used for load jobs and when making external tables. Structure is documented below.
      projectionFields (array): If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.
      quote (string)
      schemaUpdateOptions (array): Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND, and when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values can be specified: ALLOW_FIELD_ADDITION (allow adding a nullable field to the schema) and ALLOW_FIELD_RELAXATION (allow relaxing a required field in the original schema to nullable).
      sourceUris (array): The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs, each URI can contain one '*' wildcard character, and it must come after the bucket name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs, exactly one URI can be specified, and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified, and the '*' wildcard character is not allowed.
      timePartitioning (array): Time-based partitioning specification for the destination table. Structure is documented below.
        field (string)
        type (string)
    location (string)
    project (string)
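A sketch of the load block above, loading CSV files from Cloud Storage with schema autodetection and daily time partitioning; all names are hypothetical:

```yaml
spec:
  forProvider:
    load:
      - autodetect: true
        sourceUris:
          - gs://my-bucket/data/*.csv   # hypothetical; at most one '*' wildcard, after the bucket name
        destinationTable:
          - projectId: my-project       # hypothetical
            datasetId: my_dataset
            tableId: my_table
        timePartitioning:
          - type: DAY
```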
    query (array): Configures a query job. Structure is documented below.
      defaultDataset (array): Specifies the default dataset to use for unqualified table names in the query. Note that this does not alter behavior of unqualified dataset names. Structure is documented below.
        datasetId (string)
        datasetIdRef (object): Reference to a Dataset in bigquery to populate datasetId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        datasetIdSelector (object): Selector for a Dataset in bigquery to populate datasetId.
          policy (object): Policies for selection.
            resolve (string)
        projectId (string)
      destinationEncryptionConfiguration (array): Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.
      destinationTable (array): Describes the table where the query results should be stored. This property must be set for large results that exceed the maximum response size. For queries that produce anonymous (cached) results, this field will be populated by BigQuery. Structure is documented below.
        datasetId (string)
        datasetIdRef (object): Reference to a Dataset in bigquery to populate datasetId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        datasetIdSelector (object): Selector for a Dataset in bigquery to populate datasetId.
          policy (object): Policies for selection.
            resolve (string)
        projectId (string)
        tableId (string)
        tableIdRef (object): Reference to a Table in bigquery to populate tableId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        tableIdSelector (object): Selector for a Table in bigquery to populate tableId.
          policy (object): Policies for selection.
            resolve (string)
      priority (string)
      query (string)
      schemaUpdateOptions (array): Allows the schema of the destination table to be updated as a side effect of the query job. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND, and when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values can be specified: ALLOW_FIELD_ADDITION (allow adding a nullable field to the schema) and ALLOW_FIELD_RELAXATION (allow relaxing a required field in the original schema to nullable).
      scriptOptions (array): Options controlling the execution of scripts. Structure is documented below.
      userDefinedFunctionResources (array): Describes user-defined function resources used in the query. Structure is documented below.
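A sketch of the query block above, writing results to an explicit destination table via reference fields. The query text and resource names are hypothetical; if priority is omitted, the BigQuery service defaults to INTERACTIVE:

```yaml
spec:
  forProvider:
    query:
      - query: "SELECT COUNT(*) FROM my_dataset.my_table"   # hypothetical query
        priority: BATCH
        destinationTable:
          - datasetIdRef:
              name: my-dataset   # managed Dataset resource name (hypothetical)
            tableIdRef:
              name: my-table     # managed Table resource name (hypothetical)
```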

  initProvider (object): THIS IS A BETA FIELD. It will be honored unless the Management Policies feature flag is disabled. InitProvider holds the same fields as ForProvider, with the exception of Identifier and other resource reference fields. The fields that are in InitProvider are merged into ForProvider when the resource is created. The same fields are also added to the terraform ignore_changes hook, to avoid updating them after creation. This is useful for fields that are required on creation but should not be updated after that, for example because an external controller, such as an autoscaler, is managing them.
    copy (array): Copies a table. Structure is documented below.
      destinationEncryptionConfiguration (array): Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.
        kmsKeyNameRef (object): Reference to a CryptoKey in kms to populate kmsKeyName.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        kmsKeyNameSelector (object): Selector for a CryptoKey in kms to populate kmsKeyName.
          policy (object): Policies for selection.
            resolve (string)
      destinationTable (array): The destination table. Structure is documented below.
        datasetId (string)
        datasetIdRef (object): Reference to a Dataset in bigquery to populate datasetId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        datasetIdSelector (object): Selector for a Dataset in bigquery to populate datasetId.
          policy (object): Policies for selection.
            resolve (string)
        projectId (string)
        tableId (string)
        tableIdRef (object): Reference to a Table in bigquery to populate tableId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        tableIdSelector (object): Selector for a Table in bigquery to populate tableId.
          policy (object): Policies for selection.
            resolve (string)
      sourceTables (array): Source tables to copy. Structure is documented below.
        datasetId (string)
        projectId (string)
        tableId (string)
    extract (array): Configures an extract job. Structure is documented below.
      destinationUris (array): A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written.
      sourceModel (array): A reference to the model being exported. Structure is documented below.
        datasetId (string)
        modelId (string)
        projectId (string)
      sourceTable (array): A reference to the table being exported. Structure is documented below.
        datasetId (string)
        datasetIdRef (object): Reference to a Dataset in bigquery to populate datasetId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        datasetIdSelector (object): Selector for a Dataset in bigquery to populate datasetId.
          policy (object): Policies for selection.
            resolve (string)
        projectId (string)
        tableId (string)
        tableIdRef (object): Reference to a Table in bigquery to populate tableId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        tableIdSelector (object): Selector for a Table in bigquery to populate tableId.
          policy (object): Policies for selection.
            resolve (string)
    jobId (string)
    labels (object)
    load (array): Configures a load job. Structure is documented below.
      autodetect (boolean)
      destinationEncryptionConfiguration (array): Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.
      destinationTable (array): The destination table to load the data into. Structure is documented below.
        datasetId (string)
        datasetIdRef (object): Reference to a Dataset in bigquery to populate datasetId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        datasetIdSelector (object): Selector for a Dataset in bigquery to populate datasetId.
          policy (object): Policies for selection.
            resolve (string)
        projectId (string)
        tableId (string)
        tableIdRef (object): Reference to a Table in bigquery to populate tableId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        tableIdSelector (object): Selector for a Table in bigquery to populate tableId.
          policy (object): Policies for selection.
            resolve (string)
      encoding (string)
      parquetOptions (array): Parquet options used for load jobs and when making external tables. Structure is documented below.
      projectionFields (array): If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.
      quote (string)
      schemaUpdateOptions (array): Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND, and when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values can be specified: ALLOW_FIELD_ADDITION (allow adding a nullable field to the schema) and ALLOW_FIELD_RELAXATION (allow relaxing a required field in the original schema to nullable).
      sourceUris (array): The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs, each URI can contain one '*' wildcard character, and it must come after the bucket name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs, exactly one URI can be specified, and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified, and the '*' wildcard character is not allowed.
      timePartitioning (array): Time-based partitioning specification for the destination table. Structure is documented below.
        field (string)
        type (string)
    location (string)
    project (string)
    query (array): Configures a query job. Structure is documented below.
      defaultDataset (array): Specifies the default dataset to use for unqualified table names in the query. Note that this does not alter behavior of unqualified dataset names. Structure is documented below.
        datasetId (string)
        datasetIdRef (object): Reference to a Dataset in bigquery to populate datasetId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        datasetIdSelector (object): Selector for a Dataset in bigquery to populate datasetId.
          policy (object): Policies for selection.
            resolve (string)
        projectId (string)
      destinationEncryptionConfiguration (array): Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.
      destinationTable (array): Describes the table where the query results should be stored. This property must be set for large results that exceed the maximum response size. For queries that produce anonymous (cached) results, this field will be populated by BigQuery. Structure is documented below.
        datasetId (string)
        datasetIdRef (object): Reference to a Dataset in bigquery to populate datasetId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        datasetIdSelector (object): Selector for a Dataset in bigquery to populate datasetId.
          policy (object): Policies for selection.
            resolve (string)
        projectId (string)
        tableId (string)
        tableIdRef (object): Reference to a Table in bigquery to populate tableId.
          name (string, required)
          policy (object): Policies for referencing.
            resolve (string)
        tableIdSelector (object): Selector for a Table in bigquery to populate tableId.
          policy (object): Policies for selection.
            resolve (string)
      priority (string)
      query (string)
      schemaUpdateOptions (array): Allows the schema of the destination table to be updated as a side effect of the query job. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND, and when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values can be specified: ALLOW_FIELD_ADDITION (allow adding a nullable field to the schema) and ALLOW_FIELD_RELAXATION (allow relaxing a required field in the original schema to nullable).
      scriptOptions (array): Options controlling the execution of scripts. Structure is documented below.
      userDefinedFunctionResources (array): Describes user-defined function resources used in the query. Structure is documented below.
  managementPolicies (array): THIS IS A BETA FIELD. It is on by default but can be opted out through a Crossplane feature flag. ManagementPolicies specify the array of actions Crossplane is allowed to take on the managed and external resources. This field is planned to replace the DeletionPolicy field in a future release. Currently, both can be set independently, and non-default values will be honored if the feature flag is enabled. If both are custom, the DeletionPolicy field will be ignored. See the design doc for more information: https://github.com/crossplane/crossplane/blob/499895a25d1a1a0ba1604944ef98ac7a1a71f197/design/design-doc-observe-only-resources.md?plain=1#L223 and this one: https://github.com/crossplane/crossplane/blob/444267e84783136daa93568b364a5f01228cacbe/design/one-pager-ignore-changes.md

  providerConfigRef (object): ProviderConfigReference specifies how the provider that will be used to create, observe, update, and delete this managed resource should be configured.
    name (string, required)
    policy (object): Policies for referencing.
      resolve (string)
  publishConnectionDetailsTo (object): PublishConnectionDetailsTo specifies the connection secret config which contains a name, metadata and a reference to secret store config to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource.
    configRef (object): SecretStoreConfigRef specifies which secret store config should be used for this ConnectionSecret.
      name (string, required)
      policy (object): Policies for referencing.
        resolve (string)
    metadata (object): Metadata is the metadata for connection secret.
      labels (object)
      type (string)
    name (string, required)
  writeConnectionSecretToReference (object): WriteConnectionSecretToReference specifies the namespace and name of a Secret to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource. This field is planned to be replaced in a future release in favor of PublishConnectionDetailsTo. Currently, both can be set independently, and connection details will be published to both without affecting each other.
    name (string, required)
    namespace (string, required)
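The provider-configuration and connection-secret fields above are common to Crossplane managed resources. A sketch of how they sit alongside forProvider; the ProviderConfig and Secret names are hypothetical:

```yaml
spec:
  providerConfigRef:
    name: default                  # name of a ProviderConfig object in the cluster (hypothetical)
  writeConnectionSecretToReference:
    name: job-connection-details   # hypothetical Secret name
    namespace: crossplane-system
```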
status (object): JobStatus defines the observed state of Job.
  atProvider (object): No description provided.
    copy (array): Copies a table. Structure is documented below.
      destinationEncryptionConfiguration (array): Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.
      destinationTable (array): The destination table. Structure is documented below.
        datasetId (string)
        projectId (string)
        tableId (string)
      sourceTables (array): Source tables to copy. Structure is documented below.
        datasetId (string)
        projectId (string)
        tableId (string)
    extract (array): Configures an extract job. Structure is documented below.
      destinationUris (array): A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written.
      sourceModel (array): A reference to the model being exported. Structure is documented below.
        datasetId (string)
        modelId (string)
        projectId (string)
      sourceTable (array): A reference to the table being exported. Structure is documented below.
        datasetId (string)
        projectId (string)
        tableId (string)
    id (string)
    jobId (string)
    jobType (string)
    labels (object)
    load (array): Configures a load job. Structure is documented below.
      autodetect (boolean)
      destinationEncryptionConfiguration (array): Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.
      destinationTable (array): The destination table to load the data into. Structure is documented below.
        datasetId (string)
        projectId (string)
        tableId (string)
      encoding (string)
      parquetOptions (array): Parquet options used for load jobs and when making external tables. Structure is documented below.
      projectionFields (array): If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.
      quote (string)
      schemaUpdateOptions (array): Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND, and when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values can be specified: ALLOW_FIELD_ADDITION (allow adding a nullable field to the schema) and ALLOW_FIELD_RELAXATION (allow relaxing a required field in the original schema to nullable).
      sourceUris (array): The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs, each URI can contain one '*' wildcard character, and it must come after the bucket name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs, exactly one URI can be specified, and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified, and the '*' wildcard character is not allowed.
      timePartitioning (array): Time-based partitioning specification for the destination table. Structure is documented below.
        field (string)
        type (string)
    location (string)
    project (string)
    query (array): Configures a query job. Structure is documented below.
      defaultDataset (array): Specifies the default dataset to use for unqualified table names in the query. Note that this does not alter behavior of unqualified dataset names. Structure is documented below.
        datasetId (string)
        projectId (string)
      destinationEncryptionConfiguration (array): Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.
      destinationTable (array): Describes the table where the query results should be stored. This property must be set for large results that exceed the maximum response size. For queries that produce anonymous (cached) results, this field will be populated by BigQuery. Structure is documented below.
        datasetId (string)
        projectId (string)
        tableId (string)
      priority (string)
      query (string)
      schemaUpdateOptions (array): Allows the schema of the destination table to be updated as a side effect of the query job. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND, and when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values can be specified: ALLOW_FIELD_ADDITION (allow adding a nullable field to the schema) and ALLOW_FIELD_RELAXATION (allow relaxing a required field in the original schema to nullable).
      scriptOptions (array): Options controlling the execution of scripts. Structure is documented below.
      userDefinedFunctionResources (array): Describes user-defined function resources used in the query. Structure is documented below.
    status (array): The status of this job. Examine this value when polling an asynchronous job to see if the job is complete. Structure is documented below.
      errorResult (array): (Output) Final error result of the job. If present, indicates that the job has completed and was unsuccessful. Structure is documented below.
        location (string)
        message (string)
        reason (string)
      errors (array): (Output) The first errors encountered during the running of the job. The final message includes the number of errors that caused the process to stop. Errors here do not necessarily mean that the job has not completed or was unsuccessful. Structure is documented below.
        location (string)
        message (string)
        reason (string)
      state (string)
    userEmail (string)
  conditions (array): Conditions of the resource.
    lastTransitionTime (string, required)
    message (string)
    reason (string, required)
    status (string, required)
    type (string, required)
Marketplace

Discover the building blocks for your internal cloud platform.

© 2022 Upbound, Inc.