Routine (bigquery.gcp.upbound.io)
upbound/provider-gcp@v1.0.2

Routine is the Schema for the Routines API: a user-defined function or a stored procedure that belongs to a Dataset.

Type: CRD
Group: bigquery.gcp.upbound.io
Version: v1beta1

apiVersion: bigquery.gcp.upbound.io/v1beta1
kind: Routine
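For orientation, here is a minimal manifest sketch. All names and values are illustrative, and routineType and definitionBody are assumptions taken from the wider provider schema rather than fields shown in the listing below:

apiVersion: bigquery.gcp.upbound.io/v1beta1
kind: Routine
metadata:
  name: example-routine
spec:
  forProvider:
    datasetIdRef:
      name: example-dataset        # resolves spec.forProvider.datasetId from a Dataset resource
    routineType: SCALAR_FUNCTION   # assumption: from the full provider schema, not shown below
    language: SQL
    definitionBody: "(x + 4)"      # assumption: from the full provider schema, not shown below
    arguments:
      - name: x
        dataType: "{\"typeKind\": \"INT64\"}"
  providerConfigRef:
    name: default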
API Documentation

apiVersion (string)
kind (string)
metadata (object)
spec (object)
  RoutineSpec defines the desired state of Routine.

  forProvider (object, required)
    No description provided.

    arguments (array)
      Input/output argument of a function or a stored procedure. Structure is documented below.
      dataType (string)
      mode (string)
      name (string)
    datasetId (string)
    datasetIdRef (object)
      Reference to a Dataset in bigquery to populate datasetId.
      name (string, required)
      policy (object)
        Policies for referencing.
        resolve (string)
    datasetIdSelector (object)
      Selector for a Dataset in bigquery to populate datasetId.
      policy (object)
        Policies for selection.
        resolve (string)
    importedLibraries (array)
      Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
    language (string)
    project (string)
    remoteFunctionOptions (array)
      Remote function specific options. Structure is documented below.
      connectionRef (object)
        Reference to a Connection in bigquery to populate connection.
        name (string, required)
        policy (object)
          Policies for referencing.
          resolve (string)
      connectionSelector (object)
        Selector for a Connection in bigquery to populate connection.
        policy (object)
          Policies for selection.
          resolve (string)
      endpoint (string)
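As a sketch, a remote function routine might populate remoteFunctionOptions like this; the connection is resolved through connectionRef, and the names and endpoint URL are illustrative:

spec:
  forProvider:
    remoteFunctionOptions:
      - connectionRef:
          name: example-connection   # resolves the connection field from a Connection resource
        endpoint: https://us-central1-example-project.cloudfunctions.net/example-fn   # illustrative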
    sparkOptions (array)
      Optional. If language is one of "PYTHON", "JAVA", "SCALA", this field stores the options for Spark stored procedures. Structure is documented below. For more information, see the Apache Spark documentation.
      archiveUris (array)
        Archive files to be extracted into the working directory of each executor.
      connectionRef (object)
        Reference to a Connection in bigquery to populate connection.
        name (string, required)
        policy (object)
          Policies for referencing.
          resolve (string)
      connectionSelector (object)
        Selector for a Connection in bigquery to populate connection.
        policy (object)
          Policies for selection.
          resolve (string)
      fileUris (array)
        Files to be placed in the working directory of each executor.
      jarUris (array)
        JARs to include on the driver and executor CLASSPATH.
      mainClass (string)
      pyFileUris (array)
        Python files to be placed on the PYTHONPATH for PySpark applications. Supported file types: .py, .egg, and .zip.
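A sketch of sparkOptions for a PySpark stored procedure, assuming illustrative bucket paths; fields such as mainFileUri exist in the full provider schema but are not shown in this listing:

spec:
  forProvider:
    language: PYTHON
    sparkOptions:
      - connectionRef:
          name: example-spark-connection
        pyFileUris:
          - gs://example-bucket/helpers.py    # placed on the PYTHONPATH
        archiveUris:
          - gs://example-bucket/deps.zip      # extracted into each executor's working directory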

  initProvider (object)
    THIS IS A BETA FIELD. It will be honored unless the Management Policies feature flag is disabled. InitProvider holds the same fields as ForProvider, with the exception of Identifier and other resource reference fields. The fields in InitProvider are merged into ForProvider when the resource is created. The same fields are also added to the Terraform ignore_changes hook to avoid updating them after creation. This is useful for fields that are required on creation but should not be updated afterwards, for example because an external controller, such as an autoscaler, manages them. See the sketch after this listing.

    arguments (array)
      Input/output argument of a function or a stored procedure. Structure is documented below.
      dataType (string)
      mode (string)
      name (string)
    importedLibraries (array)
      Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
    language (string)
    project (string)
    remoteFunctionOptions (array)
      Remote function specific options. Structure is documented below.
      connectionRef (object)
        Reference to a Connection in bigquery to populate connection.
        name (string, required)
        policy (object)
          Policies for referencing.
          resolve (string)
      connectionSelector (object)
        Selector for a Connection in bigquery to populate connection.
        policy (object)
          Policies for selection.
          resolve (string)
      endpoint (string)
    sparkOptions (array)
      Optional. If language is one of "PYTHON", "JAVA", "SCALA", this field stores the options for Spark stored procedures. Structure is documented below.
      archiveUris (array)
        Archive files to be extracted into the working directory of each executor.
      connectionRef (object)
        Reference to a Connection in bigquery to populate connection.
        name (string, required)
        policy (object)
          Policies for referencing.
          resolve (string)
      connectionSelector (object)
        Selector for a Connection in bigquery to populate connection.
        policy (object)
          Policies for selection.
          resolve (string)
      fileUris (array)
        Files to be placed in the working directory of each executor.
      jarUris (array)
        JARs to include on the driver and executor CLASSPATH.
      mainClass (string)
      pyFileUris (array)
        Python files to be placed on the PYTHONPATH for PySpark applications. Supported file types: .py, .egg, and .zip.
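For example, a field that should be set at creation but left alone afterwards can be moved to initProvider; the paths here are illustrative:

spec:
  initProvider:
    importedLibraries:               # set on create, then added to ignore_changes
      - gs://example-bucket/lib.js
  forProvider:
    language: JAVASCRIPT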

  managementPolicies (array)
    THIS IS A BETA FIELD. It is on by default but can be opted out of through a Crossplane feature flag. ManagementPolicies specify the array of actions Crossplane is allowed to take on the managed and external resources. This field is planned to replace the DeletionPolicy field in a future release. Currently, both can be set independently, and non-default values will be honored if the feature flag is enabled. If both are custom, the DeletionPolicy field is ignored. See the design docs for more information: https://github.com/crossplane/crossplane/blob/499895a25d1a1a0ba1604944ef98ac7a1a71f197/design/design-doc-observe-only-resources.md?plain=1#L223 and https://github.com/crossplane/crossplane/blob/444267e84783136daa93568b364a5f01228cacbe/design/one-pager-ignore-changes.md
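For instance, an observe-only Routine that Crossplane should watch but never modify can be declared as:

spec:
  managementPolicies: ["Observe"]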

  providerConfigRef (object)
    ProviderConfigReference specifies how the provider that will be used to create, observe, update, and delete this managed resource should be configured.
    name (string, required)
    policy (object)
      Policies for referencing.
      resolve (string)
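A minimal sketch referencing a named ProviderConfig (the name is illustrative):

spec:
  providerConfigRef:
    name: my-gcp-provider-config   # name of a ProviderConfig resource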
  publishConnectionDetailsTo (object)
    PublishConnectionDetailsTo specifies the connection secret config, which contains a name, metadata, and a reference to the secret store config to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource.
    configRef (object)
      SecretStoreConfigRef specifies which secret store config should be used for this ConnectionSecret.
      name (string, required)
      policy (object)
        Policies for referencing.
        resolve (string)
    metadata (object)
      Metadata is the metadata for the connection secret.
      labels (object)
      type (string)
    name (string, required)
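A hedged sketch of publishing connection details through a secret store config (all names are illustrative):

spec:
  publishConnectionDetailsTo:
    name: routine-conn-details
    configRef:
      name: default                # name of a StoreConfig; illustrative
    metadata:
      labels:
        app: example
      type: Opaque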
  writeConnectionSecretToRef (object)
    WriteConnectionSecretToReference specifies the namespace and name of a Secret to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource. This field is planned to be replaced in a future release in favor of PublishConnectionDetailsTo. Currently, both can be set independently, and connection details will be published to both without affecting each other.
    name (string, required)
    namespace (string, required)
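For example, to write connection details to a plain Kubernetes Secret (names illustrative):

spec:
  writeConnectionSecretToRef:
    name: routine-conn
    namespace: crossplane-system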
status (object)
  RoutineStatus defines the observed state of Routine.

  atProvider (object)
    No description provided.

    arguments (array)
      Input/output argument of a function or a stored procedure. Structure is documented below.
      dataType (string)
      mode (string)
      name (string)
    datasetId (string)
    id (string)
    importedLibraries (array)
      Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
    language (string)
    project (string)
    remoteFunctionOptions (array)
      Remote function specific options.
    sparkOptions (array)
      Optional. If language is one of "PYTHON", "JAVA", "SCALA", this field stores the options for Spark stored procedures. Structure is documented below.
      archiveUris (array)
        Archive files to be extracted into the working directory of each executor.
      fileUris (array)
        Files to be placed in the working directory of each executor.
      jarUris (array)
        JARs to include on the driver and executor CLASSPATH.
      mainClass (string)
      pyFileUris (array)
        Python files to be placed on the PYTHONPATH for PySpark applications. Supported file types: .py, .egg, and .zip.

  conditions (array)
    Conditions of the resource.
    lastTransitionTime (string, required)
    message (string)
    reason (string, required)
    status (string, required)
    type (string, required)
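As an illustration, a healthy Routine typically reports Synced and Ready conditions along these lines (timestamps and values are illustrative):

status:
  conditions:
    - type: Synced
      status: "True"
      reason: ReconcileSuccess
      lastTransitionTime: "2024-01-01T00:00:00Z"
    - type: Ready
      status: "True"
      reason: Available
      lastTransitionTime: "2024-01-01T00:00:01Z"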