Marketplace

upbound/provider-gcp@v0.41.2

Job
dataproc.gcp.upbound.io

Job is the Schema for the Jobs API. Manages a job resource within a Dataproc cluster.

Type: CRD
Group: dataproc.gcp.upbound.io
Version: v1beta1

apiVersion: dataproc.gcp.upbound.io/v1beta1
kind: Job
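Putting the schema together, a minimal Job manifest might look like the sketch below. The `placement` and `sparkConfig` nesting and all concrete values are illustrative assumptions based on the field descriptions later on this page (and the upstream Terraform resource), not values taken from it:

```yaml
apiVersion: dataproc.gcp.upbound.io/v1beta1
kind: Job
metadata:
  name: example-spark-job
spec:
  forProvider:
    region: us-central1              # assumption: any valid Dataproc region
    placement:                       # assumption: nesting follows the upstream resource
      - clusterName: example-cluster
    sparkConfig:
      - mainClass: org.apache.spark.examples.SparkPi
        jarFileUris:
          - file:///usr/lib/spark/examples/jars/spark-examples.jar
        args:
          - "1000"
  providerConfigRef:
    name: default
```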

API Documentation
apiVersion
string
kind
string
metadata
object
spec
object

JobSpec defines the desired state of Job

forProvider
required object

No description provided.

array

No description provided.

array

HCFS URIs of archives to be extracted in the working directory. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args
array

The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

array

HCFS URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.

array

HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.

array

No description provided.

mainClass
string
array

No description provided.

array

HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.

array

The list of Hive queries or statements to execute as part of the job. Conflicts with query_file_uri

labels
object
array

No description provided.

array

HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.

array

No description provided.

array

The list of Hive queries or statements to execute as part of the job. Conflicts with query_file_uri

array

No description provided.

object

Reference to a Cluster in dataproc to populate clusterName.

name
required string
policy
object

Policies for referencing.

resolve
string
object

Selector for a Cluster in dataproc to populate clusterName.

policy
object

Policies for selection.

resolve
string
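The reference and selector objects above let a Job resolve clusterName from a managed Cluster instead of hard-coding it. A sketch of how this is typically written (the `placement` nesting and the `matchLabels` key are assumptions not shown on this page):

```yaml
spec:
  forProvider:
    placement:
      - clusterNameRef:          # resolve from the referenced Cluster's metadata.name
          name: example-cluster
        # or resolve by label instead:
        # clusterNameSelector:
        #   matchLabels:
        #     team: data
```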
array

No description provided.

array

Presto client tags to attach to this query.

array

No description provided.

array

The list of SQL queries or statements to execute as part of the job. Conflicts with query_file_uri

project
string
array

No description provided.

array

HCFS URIs of archives to be extracted in the working directory. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args
array

The arguments to pass to the driver.

array

HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks.

array

HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.

array

No description provided.

array

HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

array

No description provided.

jobId
string
region
string
regionRef
object

Reference to a Cluster in dataproc to populate region.

name
required string
policy
object

Policies for referencing.

resolve
string
object

Selector for a Cluster in dataproc to populate region.

policy
object

Policies for selection.

resolve
string
array

No description provided.

array

No description provided.

array

HCFS URIs of archives to be extracted in the working directory. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args
array

The arguments to pass to the driver.

array

HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks.

array

HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.

array

No description provided.

mainClass
string
array

No description provided.

array

HCFS URIs of jar files to be added to the Spark CLASSPATH.

array

No description provided.

array

The list of SQL queries or statements to execute as part of the job. Conflicts with query_file_uri

object

THIS IS A BETA FIELD. It will be honored unless the Management Policies feature flag is disabled. InitProvider holds the same fields as ForProvider, with the exception of Identifier and other resource reference fields. The fields that are in InitProvider are merged into ForProvider when the resource is created. The same fields are also added to the terraform ignore_changes hook, to avoid updating them after creation. This is useful for fields that are required on creation, but that we do not wish to update after creation, for example because an external controller, like an autoscaler, is managing them.

array

No description provided.

array

HCFS URIs of archives to be extracted in the working directory. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args
array

The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

array

HCFS URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.

array

HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.

array

No description provided.

mainClass
string
array

No description provided.

array

HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.

array

The list of Hive queries or statements to execute as part of the job. Conflicts with query_file_uri

labels
object
array

No description provided.

array

HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.

array

No description provided.

array

The list of Hive queries or statements to execute as part of the job. Conflicts with query_file_uri

array

No description provided.

object

Reference to a Cluster in dataproc to populate clusterName.

name
required string
policy
object

Policies for referencing.

resolve
string
object

Selector for a Cluster in dataproc to populate clusterName.

policy
object

Policies for selection.

resolve
string
array

No description provided.

array

Presto client tags to attach to this query.

array

No description provided.

array

The list of SQL queries or statements to execute as part of the job. Conflicts with query_file_uri

project
string
array

No description provided.

array

HCFS URIs of archives to be extracted in the working directory. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args
array

The arguments to pass to the driver.

array

HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks.

array

HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.

array

No description provided.

array

HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

array

No description provided.

jobId
string
region
string
regionRef
object

Reference to a Cluster in dataproc to populate region.

name
required string
policy
object

Policies for referencing.

resolve
string
object

Selector for a Cluster in dataproc to populate region.

policy
object

Policies for selection.

resolve
string
array

No description provided.

array

No description provided.

array

HCFS URIs of archives to be extracted in the working directory. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args
array

The arguments to pass to the driver.

array

HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks.

array

HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.

array

No description provided.

mainClass
string
array

No description provided.

array

HCFS URIs of jar files to be added to the Spark CLASSPATH.

array

No description provided.

array

The list of SQL queries or statements to execute as part of the job. Conflicts with query_file_uri

array

THIS IS A BETA FIELD. It is on by default but can be opted out through a Crossplane feature flag. ManagementPolicies specify the array of actions Crossplane is allowed to take on the managed and external resources. This field is planned to replace the DeletionPolicy field in a future release. Currently, both could be set independently and non-default values would be honored if the feature flag is enabled. If both are custom, the DeletionPolicy field will be ignored. See the design doc for more information: https://github.com/crossplane/crossplane/blob/499895a25d1a1a0ba1604944ef98ac7a1a71f197/design/design-doc-observe-only-resources.md?plain=1#L223 and this one: https://github.com/crossplane/crossplane/blob/444267e84783136daa93568b364a5f01228cacbe/design/one-pager-ignore-changes.md
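For example, to have Crossplane observe an existing Dataproc job without ever creating, updating, or deleting it, the policy array described above can be restricted to observation only (a sketch; `Observe` is one of the standard Crossplane management-policy values):

```yaml
spec:
  managementPolicies: ["Observe"]
```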

object

ProviderConfigReference specifies how the provider that will be used to create, observe, update, and delete this managed resource should be configured.

name
required string
policy
object

Policies for referencing.

resolve
string
object

PublishConnectionDetailsTo specifies the connection secret config which contains a name, metadata and a reference to secret store config to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource.

configRef
object

SecretStoreConfigRef specifies which secret store config should be used for this ConnectionSecret.

name
required string
policy
object

Policies for referencing.

resolve
string
metadata
object

Metadata is the metadata for connection secret.

labels
object
type
string
name
required string
object

WriteConnectionSecretToReference specifies the namespace and name of a Secret to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource. This field is planned to be replaced in a future release in favor of PublishConnectionDetailsTo. Currently, both could be set independently and connection details would be published to both without affecting each other.

name
required string
namespace
required string
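The two connection-secret mechanisms described above can be configured side by side; a sketch with placeholder names (the secret-store config name is an assumption):

```yaml
spec:
  writeConnectionSecretToRef:      # plain Kubernetes Secret
    name: job-conn
    namespace: crossplane-system
  publishConnectionDetailsTo:      # secret-store-backed alternative
    name: job-conn
    configRef:
      name: default
```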
status
object

JobStatus defines the observed state of Job.

object

No description provided.

array

No description provided.

array

HCFS URIs of archives to be extracted in the working directory. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args
array

The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

array

HCFS URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.

array

HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.

array

No description provided.

mainClass
string
array

No description provided.

array

HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.

array

The list of Hive queries or statements to execute as part of the job. Conflicts with query_file_uri

id
string
labels
object
array

No description provided.

array

HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.

array

No description provided.

array

The list of Hive queries or statements to execute as part of the job. Conflicts with query_file_uri

array

No description provided.

array

No description provided.

array

Presto client tags to attach to this query.

array

No description provided.

array

The list of SQL queries or statements to execute as part of the job. Conflicts with query_file_uri

project
string
array

No description provided.

array

HCFS URIs of archives to be extracted in the working directory. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args
array

The arguments to pass to the driver.

array

HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks.

array

HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.

array

No description provided.

array

HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

array

No description provided.

jobId
string
region
string
array

No description provided.

array

No description provided.

array

HCFS URIs of archives to be extracted in the working directory. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args
array

The arguments to pass to the driver.

array

HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks.

array

HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.

array

No description provided.

mainClass
string
array

No description provided.

array

HCFS URIs of jar files to be added to the Spark CLASSPATH.

array

No description provided.

array

The list of SQL queries or statements to execute as part of the job. Conflicts with query_file_uri

status
array

No description provided.

details
string
state
string
substate
string
array

Conditions of the resource.

lastTransitionTime
required string
message
string
reason
required string
status
required string
type
required string
Marketplace

Discover the building blocks for your internal cloud platform.

© 2022 Upbound, Inc.