Routine is the Schema for the Routines API. A Routine is a user-defined function or a stored procedure that belongs to a Dataset.
Type: CRD
Group: bigquery.gcp.upbound.io
Version: v1beta1
```yaml
apiVersion: bigquery.gcp.upbound.io/v1beta1
kind: Routine
```
RoutineSpec defines the desired state of Routine
No description provided.
Input/output argument of a function or a stored procedure. Structure is documented below.
Reference to a Dataset in bigquery to populate datasetId.
Policies for referencing.
Selector for a Dataset in bigquery to populate datasetId.
Policies for selection.
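As a sketch, the reference and selector fields above can be used like this. `datasetIdSelector` appears in the example later on this page; `datasetIdRef` follows the usual Crossplane reference/selector convention, and the resource name and label value here are placeholders:

```yaml
spec:
  forProvider:
    # Option 1: reference an existing Dataset managed resource by name
    datasetIdRef:
      name: example-dataset          # placeholder name
    # Option 2 (use either the ref or the selector, not both):
    # select a Dataset by label; Crossplane resolves it into datasetId
    datasetIdSelector:
      matchLabels:
        testing.upbound.io/example-name: test
```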
Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.
Remote function specific options. Structure is documented below.
Reference to a Connection in bigquery to populate connection.
Policies for referencing.
Selector for a Connection in bigquery to populate connection.
Policies for selection.
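A minimal sketch of a remote-function routine using the connection selector described above. The `remoteFunctionOptions` and `endpoint` field names are assumed from the underlying `google_bigquery_routine` Terraform schema, and the endpoint URL and label value are placeholders:

```yaml
spec:
  forProvider:
    routineType: SCALAR_FUNCTION
    remoteFunctionOptions:
      - endpoint: https://us-central1-myproject.cloudfunctions.net/add  # placeholder endpoint
        # Resolve the BigQuery Connection by label instead of hard-coding its ID
        connectionSelector:
          matchLabels:
            testing.upbound.io/example-name: example-connection         # placeholder label
```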
Optional. If language is one of "PYTHON", "JAVA", or "SCALA", this field stores the options for the Spark stored procedure. Structure is documented below.
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
Reference to a Connection in bigquery to populate connection.
Policies for referencing.
Selector for a Connection in bigquery to populate connection.
Policies for selection.
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
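Putting the Spark fields above together, a hedged sketch of a PySpark stored procedure might look like the following. The `sparkOptions`, `pyFileUris`, and `jarUris` names are assumed camelCase forms of the Terraform `spark_options` block; the bucket paths and label are placeholders:

```yaml
spec:
  forProvider:
    language: PYTHON
    routineType: PROCEDURE
    sparkOptions:
      - connectionSelector:
          matchLabels:
            testing.upbound.io/example-name: spark-connection  # placeholder label
        pyFileUris:
          - gs://my-bucket/helpers.py    # placeholder URIs placed on PYTHONPATH
        jarUris:
          - gs://my-bucket/deps.jar      # placeholder JAR added to the CLASSPATH
```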
THIS IS A BETA FIELD. It will be honored unless the Management Policies feature flag is disabled. InitProvider holds the same fields as ForProvider, with the exception of Identifier and other resource reference fields. The fields that are in InitProvider are merged into ForProvider when the resource is created. The same fields are also added to the terraform ignore_changes hook, to avoid updating them after creation. This is useful for fields that are required on creation, but that we do not want to update after creation, for example because an external controller, such as an autoscaler, is managing them.
Input/output argument of a function or a stored procedure. Structure is documented below.
Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.
Remote function specific options. Structure is documented below.
Reference to a Connection in bigquery to populate connection.
Policies for referencing.
Selector for a Connection in bigquery to populate connection.
Policies for selection.
Optional. If language is one of "PYTHON", "JAVA", or "SCALA", this field stores the options for the Spark stored procedure. Structure is documented below.
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
Reference to a Connection in bigquery to populate connection.
Policies for referencing.
Selector for a Connection in bigquery to populate connection.
Policies for selection.
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
THIS IS A BETA FIELD. It is on by default but can be opted out through a Crossplane feature flag. ManagementPolicies specify the array of actions Crossplane is allowed to take on the managed and external resources. This field is planned to replace the DeletionPolicy field in a future release. Currently, both could be set independently and non-default values would be honored if the feature flag is enabled. If both are custom, the DeletionPolicy field will be ignored. See the design doc for more information: https://github.com/crossplane/crossplane/blob/499895a25d1a1a0ba1604944ef98ac7a1a71f197/design/design-doc-observe-only-resources.md?plain=1#L223 and this one: https://github.com/crossplane/crossplane/blob/444267e84783136daa93568b364a5f01228cacbe/design/one-pager-ignore-changes.md
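For example, the observe-only mode described in the linked design doc can be sketched like this (a minimal fragment; with only `Observe` set, Crossplane imports the external routine's state but never creates, updates, or deletes it):

```yaml
spec:
  # Restrict Crossplane to observing the external resource only
  managementPolicies: ["Observe"]
```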
ProviderConfigReference specifies how the provider that will be used to create, observe, update, and delete this managed resource should be configured.
Policies for referencing.
PublishConnectionDetailsTo specifies the connection secret config which contains a name, metadata and a reference to secret store config to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource.
WriteConnectionSecretToReference specifies the namespace and name of a Secret to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource. This field is planned to be replaced in a future release in favor of PublishConnectionDetailsTo. Currently, both could be set independently and connection details would be published to both without affecting each other.
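A short sketch of the field described above; the Secret name and namespace here are placeholders:

```yaml
spec:
  # Write any connection details for this resource to a Kubernetes Secret
  writeConnectionSecretToRef:
    name: routine-conn            # placeholder Secret name
    namespace: crossplane-system  # placeholder namespace
```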
RoutineStatus defines the observed state of Routine.
No description provided.
Input/output argument of a function or a stored procedure. Structure is documented below.
Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.
Remote function specific options. Structure is documented below.
Optional. If language is one of "PYTHON", "JAVA", or "SCALA", this field stores the options for the Spark stored procedure. Structure is documented below.
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
Conditions of the resource.
sproc
```yaml
apiVersion: bigquery.gcp.upbound.io/v1beta1
kind: Routine
metadata:
  annotations:
    meta.upbound.io/example-id: bigquery/v1beta1/routine
  labels:
    testing.upbound.io/example-name: sproc
  name: sproc
spec:
  forProvider:
    datasetIdSelector:
      matchLabels:
        testing.upbound.io/example-name: test
    definitionBody: CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);
    language: SQL
    routineType: PROCEDURE
```