DataFlow is the Schema for the DataFlows API. Manages a Data Flow inside an Azure Data Factory.
Type: CRD
Group: datafactory.azure.upbound.io
Version: v1beta1
apiVersion: datafactory.azure.upbound.io/v1beta1
kind: DataFlow
DataFlowSpec defines the desired state of DataFlow
No description provided.
List of tags that can be used for describing the Data Factory Data Flow.
Reference to a Factory in datafactory to populate dataFactoryId.
Policies for referencing.
Selector for a Factory in datafactory to populate dataFactoryId.
Policies for selection.
The script lines for the Data Factory Data Flow.
One or more sink blocks as defined below. (A sketch of how the sink, source, and transformation blocks nest is shown after this field list.)
A dataset block as defined below.
Reference to a DataSetJSON in datafactory to populate name.
Policies for referencing.
Selector for a DataSetJSON in datafactory to populate name.
Policies for selection.
A flowlet block as defined below.
A linked_service block as defined below.
A rejected_linked_service block as defined below.
A schema_linked_service block as defined below.
One or more source blocks as defined below.
A dataset block as defined below.
Reference to a DataSetJSON in datafactory to populate name.
Policies for referencing.
Selector for a DataSetJSON in datafactory to populate name.
Policies for selection.
A flowlet block as defined below.
A linked_service block as defined below.
A rejected_linked_service block as defined below.
A schema_linked_service block as defined below.
One or more transformation blocks as defined below.
A dataset block as defined below.
A flowlet block as defined below.
A linked_service block as defined below.
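For orientation, here is a minimal sketch of how these blocks nest under spec.forProvider. It is not taken from the reference example at the bottom of this page; the dataset, linked service, and transformation names are hypothetical placeholders, and a sink may reference either a dataset or a linked service.

spec:
  forProvider:
    dataFactoryIdRef:
      name: factoryexample            # direct reference; a selector works as well
    source:
      - name: source1
        dataset:
          - name: example-source-ds   # hypothetical dataset name
    sink:
      - name: sink1
        linkedService:
          - name: example-sink-ls     # hypothetical linked service name
            parameters:
              key: value
    transformation:
      - name: filter1                 # hypothetical transformation name
        description: Optional description of the transformation step.
    script: "..."                     # data flow script, as in the example below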
THIS IS AN ALPHA FIELD. Do not use it in production. It is not honored unless the relevant Crossplane feature flag is enabled, and may be changed or removed without notice. InitProvider holds the same fields as ForProvider, with the exception of Identifier and other resource reference fields. The fields in InitProvider are merged into ForProvider when the resource is created. The same fields are also added to the terraform ignore_changes hook, to avoid updating them after creation. This is useful for fields that are required on creation but should not be updated afterwards, for example because an external controller, such as an autoscaler, is managing them.
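A minimal sketch, assuming the alpha feature flag is enabled. It seeds the annotations list (a list of strings) at creation time only; the annotation value shown is hypothetical.

spec:
  initProvider:
    annotations:
      - created-by-pipeline      # hypothetical value; set on creation, then ignored on updates
  forProvider:
    dataFactoryIdSelector:
      matchLabels:
        testing.upbound.io/example-name: factoryexample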
List of tags that can be used for describing the Data Factory Data Flow.
The script lines for the Data Factory Data Flow.
One or more sink blocks as defined below.
A dataset block as defined below.
A flowlet block as defined below.
A linked_service block as defined below.
A rejected_linked_service block as defined below.
A schema_linked_service block as defined below.
One or more source blocks as defined below.
A dataset block as defined below.
A flowlet block as defined below.
A linked_service block as defined below.
A rejected_linked_service block as defined below.
A schema_linked_service block as defined below.
One or more transformation blocks as defined below.
A dataset block as defined below.
A flowlet block as defined below.
A linked_service block as defined below.
THIS IS AN ALPHA FIELD. Do not use it in production. It is not honored unless the relevant Crossplane feature flag is enabled, and may be changed or removed without notice. ManagementPolicies specify the array of actions Crossplane is allowed to take on the managed and external resources. This field is planned to replace the DeletionPolicy field in a future release. Currently, both can be set independently, and non-default values are honored if the feature flag is enabled. If both are custom, the DeletionPolicy field will be ignored. See the design doc for more information: https://github.com/crossplane/crossplane/blob/499895a25d1a1a0ba1604944ef98ac7a1a71f197/design/design-doc-observe-only-resources.md?plain=1#L223 and this one: https://github.com/crossplane/crossplane/blob/444267e84783136daa93568b364a5f01228cacbe/design/one-pager-ignore-changes.md
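A minimal sketch, assuming the ManagementPolicies feature flag is enabled, that limits Crossplane to observing an existing Data Flow without creating, updating, or deleting it:

spec:
  managementPolicies: ["Observe"]   # default is ["*"], i.e. full management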
ProviderConfigReference specifies how the provider that will be used to create, observe, update, and delete this managed resource should be configured.
Policies for referencing.
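For example, to point this resource at a ProviderConfig named default (a common convention, but an assumption here):

spec:
  providerConfigRef:
    name: default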
ProviderReference specifies the provider that will be used to create, observe, update, and delete this managed resource. Deprecated: Please use ProviderConfigReference, i.e. providerConfigRef
Policies for referencing.
PublishConnectionDetailsTo specifies the connection secret config which contains a name, metadata and a reference to secret store config to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource.
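A hedged sketch of the field shape; the connection secret name and the secret store config name are hypothetical:

spec:
  publishConnectionDetailsTo:
    name: dataflow-conn         # hypothetical name for the published connection secret
    configRef:
      name: default             # hypothetical name of the secret store config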
WriteConnectionSecretToReference specifies the namespace and name of a Secret to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource. This field is planned to be replaced in a future release in favor of PublishConnectionDetailsTo. Currently, both can be set independently, and connection details would be published to both without affecting each other.
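A minimal sketch; the Secret name and namespace are placeholders:

spec:
  writeConnectionSecretToRef:
    name: dataflow-conn          # hypothetical Secret name
    namespace: crossplane-system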
DataFlowStatus defines the observed state of DataFlow.
No description provided.
List of tags that can be used for describing the Data Factory Data Flow.
The script lines for the Data Factory Data Flow.
One or more sink blocks as defined below.
A dataset block as defined below.
A flowlet block as defined below.
A linked_service block as defined below.
A rejected_linked_service block as defined below.
A schema_linked_service block as defined below.
One or more source blocks as defined below.
A dataset block as defined below.
A flowlet block as defined below.
A linked_service block as defined below.
A rejected_linked_service block as defined below.
A schema_linked_service block as defined below.
One or more transformation blocks as defined below.
A dataset block as defined below.
A flowlet block as defined below.
A linked_service block as defined below.
Conditions of the resource.
Example
apiVersion: datafactory.azure.upbound.io/v1beta1
kind: DataFlow
metadata:
  annotations:
    meta.upbound.io/example-id: datafactory/v1beta1/dataflow
  labels:
    testing.upbound.io/example-name: example
  name: example
spec:
  forProvider:
    dataFactoryIdSelector:
      matchLabels:
        testing.upbound.io/example-name: factoryexample
    script: |
      source(
        allowSchemaDrift: true,
        validateSchema: false,
        limit: 100,
        ignoreNoFilesFound: false,
        documentForm: 'documentPerLine') ~> source1
      source1 sink(
        allowSchemaDrift: true,
        validateSchema: false,
        skipDuplicateMapInputs: true,
        skipDuplicateMapOutputs: true) ~> sink1
    sink:
      - dataset:
          - nameSelector:
              matchLabels:
                testing.upbound.io/example-name: example2
        name: sink1
    source:
      - dataset:
          - nameSelector:
              matchLabels:
                testing.upbound.io/example-name: example1
        name: source1