Job is the Schema for the Jobs API. Jobs are actions that BigQuery runs on your behalf to load data, export data, query data, or copy data.
Type: CRD
Group: bigquery.gcp.upbound.io
Version: v1beta1
apiVersion: bigquery.gcp.upbound.io/v1beta1
kind: Job
JobSpec defines the desired state of Job
No description provided.
Copies a table. Structure is documented below.
Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.
Reference to a CryptoKey in kms to populate kmsKeyName.
Policies for referencing.
Selector for a CryptoKey in kms to populate kmsKeyName.
Policies for selection.
The destination table. Structure is documented below.
Reference to a Dataset in bigquery to populate datasetId.
Policies for referencing.
Selector for a Dataset in bigquery to populate datasetId.
Policies for selection.
Reference to a Table in bigquery to populate tableId.
Policies for referencing.
Selector for a Table in bigquery to populate tableId.
Policies for selection.
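To make the copy fields above concrete, here is a minimal sketch of a copy job. The `sourceTables` block and its inline `projectId`/`datasetId`/`tableId` keys are assumptions drawn from the underlying `google_bigquery_job` schema rather than the field list above, and all names are placeholders:

```yaml
# Sketch of a copy job; sourceTables keys are assumptions, names are placeholders.
apiVersion: bigquery.gcp.upbound.io/v1beta1
kind: Job
metadata:
  name: copy-job
spec:
  forProvider:
    jobId: my_job_copy
    copy:
      - sourceTables:
          - projectId: my-project      # hypothetical project
            datasetId: src_dataset     # hypothetical dataset
            tableId: src_table         # hypothetical table
        destinationTable:
          - datasetIdSelector:
              matchLabels:
                testing.upbound.io/example-name: dest-dataset
            tableIdSelector:
              matchLabels:
                testing.upbound.io/example-name: dest-table
```

As in the query example at the bottom of this page, the `datasetIdSelector`/`tableIdSelector` fields resolve the destination by label rather than by hard-coded ID.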
Configures an extract job. Structure is documented below.
A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written.
A reference to the table being exported. Structure is documented below.
Reference to a Dataset in bigquery to populate datasetId.
Policies for referencing.
Selector for a Dataset in bigquery to populate datasetId.
Policies for selection.
Reference to a Table in bigquery to populate tableId.
Policies for referencing.
Selector for a Table in bigquery to populate tableId.
Policies for selection.
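The extract fields above might be combined as in the following sketch; `destinationFormat` is an assumption from the underlying `google_bigquery_job` schema, and the bucket path is a placeholder:

```yaml
# Sketch of an extract job; destinationFormat is an assumption, bucket is a placeholder.
apiVersion: bigquery.gcp.upbound.io/v1beta1
kind: Job
metadata:
  name: extract-job
spec:
  forProvider:
    jobId: my_job_extract
    extract:
      - destinationUris:
          - gs://my-bucket/extract/part-*.csv   # hypothetical bucket
        sourceTable:
          - datasetIdSelector:
              matchLabels:
                testing.upbound.io/example-name: src-dataset
            tableIdSelector:
              matchLabels:
                testing.upbound.io/example-name: src-table
```

The `*` in the destination URI lets BigQuery shard large exports across multiple files.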
Configures a load job. Structure is documented below.
Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.
The destination table to load the data into. Structure is documented below.
Reference to a Dataset in bigquery to populate datasetId.
Policies for referencing.
Selector for a Dataset in bigquery to populate datasetId.
Policies for selection.
Reference to a Table in bigquery to populate tableId.
Policies for referencing.
Selector for a Table in bigquery to populate tableId.
Policies for selection.
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.
Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it must be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
Time-based partitioning specification for the destination table. Structure is documented below.
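Putting the load fields together, a minimal load job might look like the sketch below. `sourceFormat` and `writeDisposition` are assumptions from the underlying `google_bigquery_job` schema; the bucket and label names are placeholders. Note that `schemaUpdateOptions` is only honored under the two `writeDisposition` cases described above:

```yaml
# Sketch of a load job; sourceFormat/writeDisposition are assumptions,
# bucket and labels are placeholders.
apiVersion: bigquery.gcp.upbound.io/v1beta1
kind: Job
metadata:
  name: load-job
spec:
  forProvider:
    jobId: my_job_load
    load:
      - sourceUris:
          - gs://my-bucket/data/*.csv   # hypothetical bucket
        sourceFormat: CSV
        writeDisposition: WRITE_APPEND
        schemaUpdateOptions:            # effective here because of WRITE_APPEND
          - ALLOW_FIELD_ADDITION
        destinationTable:
          - datasetIdSelector:
              matchLabels:
                testing.upbound.io/example-name: dest-dataset
            tableIdSelector:
              matchLabels:
                testing.upbound.io/example-name: dest-table
```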
Configures a query job. Structure is documented below.
Specifies the default dataset to use for unqualified table names in the query. Note that this does not alter behavior of unqualified dataset names. Structure is documented below.
Reference to a Dataset in bigquery to populate datasetId.
Policies for referencing.
Selector for a Dataset in bigquery to populate datasetId.
Policies for selection.
Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.
Describes the table where the query results should be stored. This property must be set for large results that exceed the maximum response size. For queries that produce anonymous (cached) results, this field will be populated by BigQuery. Structure is documented below.
Reference to a Dataset in bigquery to populate datasetId.
Policies for referencing.
Selector for a Dataset in bigquery to populate datasetId.
Policies for selection.
Reference to a Table in bigquery to populate tableId.
Policies for referencing.
Selector for a Table in bigquery to populate tableId.
Policies for selection.
Allows the schema of the destination table to be updated as a side effect of the query job. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
Options controlling the execution of scripts. Structure is documented below.
Describes user-defined function resources used in the query. Structure is documented below.
ProviderConfigReference specifies how the provider that will be used to create, observe, update, and delete this managed resource should be configured.
Policies for referencing.
ProviderReference specifies the provider that will be used to create, observe, update, and delete this managed resource. Deprecated: Please use ProviderConfigReference, i.e. providerConfigRef
Policies for referencing.
PublishConnectionDetailsTo specifies the connection secret config which contains a name, metadata and a reference to secret store config to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource.
WriteConnectionSecretToReference specifies the namespace and name of a Secret to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource. This field is planned to be replaced in a future release in favor of PublishConnectionDetailsTo. Currently, both could be set independently and connection details would be published to both without affecting each other.
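For illustration, the connection-secret fields sit at the top level of `spec`, alongside `forProvider`. The sketch below assumes a query job; the secret name and namespace are placeholders:

```yaml
# Sketch of writing connection details to a Secret; secret name and
# namespace are placeholders.
apiVersion: bigquery.gcp.upbound.io/v1beta1
kind: Job
metadata:
  name: job
spec:
  writeConnectionSecretToRef:
    name: job-conn                # hypothetical Secret name
    namespace: crossplane-system  # hypothetical namespace
  forProvider:
    jobId: my_job_query
    query:
      - query: SELECT 1
```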
JobStatus defines the observed state of Job.
No description provided.
Copies a table. Structure is documented below.
Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.
Configures an extract job. Structure is documented below.
A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written.
Configures a load job. Structure is documented below.
Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.
Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it must be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
Time-based partitioning specification for the destination table. Structure is documented below.
Configures a query job. Structure is documented below.
Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.
Describes the table where the query results should be stored. This property must be set for large results that exceed the maximum response size. For queries that produce anonymous (cached) results, this field will be populated by BigQuery. Structure is documented below.
Allows the schema of the destination table to be updated as a side effect of the query job. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
Options controlling the execution of scripts. Structure is documented below.
Describes user-defined function resources used in the query. Structure is documented below.
The status of this job. Examine this value when polling an asynchronous job to see if the job is complete. Structure is documented below.
(Output) The first errors encountered during the running of the job. The final message includes the number of errors that caused the process to stop. Errors here do not necessarily mean that the job has not completed or was unsuccessful. Structure is documented below.
Conditions of the resource.
Example: job
apiVersion: bigquery.gcp.upbound.io/v1beta1
kind: Job
metadata:
annotations:
meta.upbound.io/example-id: bigquery/v1beta1/job
upjet.upbound.io/manual-intervention: This resource requires a schema to be
provisioned in the referenced dataset's table
labels:
testing.upbound.io/example-name: job
name: job
spec:
forProvider:
jobId: my_job_query
labels:
example-label: example-value
query:
- allowLargeResults: true
destinationTable:
- datasetIdSelector:
matchLabels:
testing.upbound.io/example-name: bar
tableIdSelector:
matchLabels:
testing.upbound.io/example-name: foo
flattenResults: true
query: SELECT status FROM crossplane-playground.bar.foo
scriptOptions:
- keyResultStatement: LAST