Connector is the Schema for the Connectors API. It provisions an Amazon MSK Connect Connector resource. Changes to any parameter besides "scaling" will be rejected; instead, you must create a new resource.
Type: CRD
Group: kafkaconnect.aws.upbound.io
Version: v1beta1

apiVersion: kafkaconnect.aws.upbound.io/v1beta1
kind: Connector
ConnectorSpec defines the desired state of Connector
No description provided.
Information about the capacity allocated to the connector. See below.
Information about the auto scaling parameters for the connector. See below.
The scale-in policy for the connector. See below.
The scale-out policy for the connector. See below.
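A minimal sketch of the capacity block with autoscaling, using the list-based field layout this provider generates (all values are illustrative):

```yaml
# Illustrative values; field names follow the Connector CRD's list-based layout.
capacity:
- autoscaling:
  - mcuCount: 1            # MSK Connect Units (MCUs) per worker
    minWorkerCount: 1
    maxWorkerCount: 4
    scaleInPolicy:
    - cpuUtilizationPercentage: 20   # scale in below this CPU level
    scaleOutPolicy:
    - cpuUtilizationPercentage: 80   # scale out above this CPU level
```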
Details about a fixed capacity allocated to a connector. See below.
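If you want a fixed worker fleet instead of autoscaling, use the fixed-capacity block. A sketch, assuming the field names mirror the AWS MSK Connect API (`provisionedCapacity`, `mcuCount`, `workerCount`):

```yaml
# Assumed field names mirror the AWS MSK Connect API; set either
# autoscaling or provisionedCapacity on a connector, not both.
capacity:
- provisionedCapacity:
  - mcuCount: 2
    workerCount: 3
```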
Specifies which Apache Kafka cluster to connect to. See below.
The Apache Kafka cluster to which the connector is connected.
Details of an Amazon VPC which has network connectivity to the Apache Kafka cluster.
References to SecurityGroup in ec2 to populate securityGroups.
Policies for referencing.
Selector for a list of SecurityGroup in ec2 to populate securityGroups.
Policies for selection.
The security groups for the connector.
References to Subnet in ec2 to populate subnets.
Policies for referencing.
Selector for a list of Subnet in ec2 to populate subnets.
Policies for selection.
The subnets for the connector.
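Putting the cluster fields together, a sketch of `kafkaCluster` pointing at a bootstrap address and resolving networking via label selectors (the endpoint and label values are hypothetical):

```yaml
kafkaCluster:
- apacheKafkaCluster:
  - bootstrapServers: b-1.example.kafka.us-east-2.amazonaws.com:9092  # hypothetical endpoint
    vpc:
    - securityGroupSelector:
        matchLabels:
          example.io/network: my-kafka   # hypothetical label
      subnetSelector:
        matchLabels:
          example.io/network: my-kafka
```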
Details of the client authentication used by the Apache Kafka cluster. See below.
Details of encryption in transit to the Apache Kafka cluster. See below.
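Client authentication and in-transit encryption are configured side by side. A sketch with the unauthenticated/plaintext values used in the examples later in this page:

```yaml
# NONE/PLAINTEXT suit test clusters; IAM/TLS are the hardened equivalents.
kafkaClusterClientAuthentication:
- authenticationType: NONE
kafkaClusterEncryptionInTransit:
- encryptionType: PLAINTEXT
```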
Details about log delivery. See below.
The workers can send worker logs to different destination types. This configuration specifies the details of these destinations. See below.
Details about delivering logs to Amazon CloudWatch Logs. See below.
Reference to a Group in cloudwatchlogs to populate logGroup.
Policies for referencing.
Selector for a Group in cloudwatchlogs to populate logGroup.
Policies for selection.
Details about delivering logs to Amazon Kinesis Data Firehose. See below.
Reference to a DeliveryStream in firehose to populate deliveryStream.
Policies for referencing.
Selector for a DeliveryStream in firehose to populate deliveryStream.
Policies for selection.
Details about delivering logs to Amazon S3. See below.
Reference to a Bucket in s3 to populate bucket.
Policies for referencing.
Selector for a Bucket in s3 to populate bucket.
Policies for selection.
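The three worker-log destinations sit side by side under `workerLogDelivery`, and each can be toggled independently. A sketch with CloudWatch Logs enabled and the others off (label value is hypothetical):

```yaml
logDelivery:
- workerLogDelivery:
  - cloudwatchLogs:
    - enabled: true
      logGroupSelector:
        matchLabels:
          example.io/logs: connector   # hypothetical label
    firehose:
    - enabled: false
    s3:
    - enabled: false
```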
Specifies which plugins to use for the connector. See below.
Details about a custom plugin. See below.
Reference to a CustomPlugin in kafkaconnect to populate arn.
Policies for referencing.
Selector for a CustomPlugin in kafkaconnect to populate arn.
Policies for selection.
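A sketch of the plugin block resolving a CustomPlugin ARN by reference (the referenced resource name is hypothetical):

```yaml
plugin:
- customPlugin:
  - arnRef:
      name: my-custom-plugin   # hypothetical CustomPlugin resource name
    revision: 1
```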
Reference to a Role in iam to populate serviceExecutionRoleArn.
Policies for referencing.
Selector for a Role in iam to populate serviceExecutionRoleArn.
Policies for selection.
Specifies which worker configuration to use with the connector. See below.
Reference to a WorkerConfiguration in kafkaconnect to populate arn.
Policies for referencing.
Selector for a WorkerConfiguration in kafkaconnect to populate arn.
Policies for selection.
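Likewise, the optional worker configuration is attached by ARN reference and revision (the referenced resource name is hypothetical):

```yaml
workerConfiguration:
- arnRef:
    name: my-worker-configuration   # hypothetical WorkerConfiguration name
  revision: 1
```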
THIS IS A BETA FIELD. It will be honored unless the Management Policies feature flag is disabled. InitProvider holds the same fields as ForProvider, with the exception of Identifier and other resource reference fields. The fields in InitProvider are merged into ForProvider when the resource is created. The same fields are also added to the Terraform ignore_changes hook, to avoid updating them after creation. This is useful for fields that are required on creation but that should not be updated afterwards, for example because an external controller, such as an autoscaler, is managing them.
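The autoscaler case described above can be sketched by placing the capacity block under `initProvider` instead of `forProvider` (values are illustrative):

```yaml
spec:
  forProvider:
    # ...cluster, plugin, and role settings managed as usual...
  initProvider:
    # Set once at creation, then left alone: later changes to these
    # fields (e.g. by the MSK Connect autoscaler) are not reverted.
    capacity:
    - autoscaling:
      - mcuCount: 1
        minWorkerCount: 1
        maxWorkerCount: 4
```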
Information about the capacity allocated to the connector. See below.
Information about the auto scaling parameters for the connector. See below.
The scale-in policy for the connector. See below.
The scale-out policy for the connector. See below.
Details about a fixed capacity allocated to a connector. See below.
Specifies which Apache Kafka cluster to connect to. See below.
The Apache Kafka cluster to which the connector is connected.
Details of an Amazon VPC which has network connectivity to the Apache Kafka cluster.
References to SecurityGroup in ec2 to populate securityGroups.
Policies for referencing.
Selector for a list of SecurityGroup in ec2 to populate securityGroups.
Policies for selection.
The security groups for the connector.
References to Subnet in ec2 to populate subnets.
Policies for referencing.
Selector for a list of Subnet in ec2 to populate subnets.
Policies for selection.
The subnets for the connector.
Details of the client authentication used by the Apache Kafka cluster. See below.
Details of encryption in transit to the Apache Kafka cluster. See below.
Details about log delivery. See below.
The workers can send worker logs to different destination types. This configuration specifies the details of these destinations. See below.
Details about delivering logs to Amazon CloudWatch Logs. See below.
Reference to a Group in cloudwatchlogs to populate logGroup.
Policies for referencing.
Selector for a Group in cloudwatchlogs to populate logGroup.
Policies for selection.
Details about delivering logs to Amazon Kinesis Data Firehose. See below.
Reference to a DeliveryStream in firehose to populate deliveryStream.
Policies for referencing.
Selector for a DeliveryStream in firehose to populate deliveryStream.
Policies for selection.
Details about delivering logs to Amazon S3. See below.
Reference to a Bucket in s3 to populate bucket.
Policies for referencing.
Selector for a Bucket in s3 to populate bucket.
Policies for selection.
Specifies which plugins to use for the connector. See below.
Details about a custom plugin. See below.
Reference to a CustomPlugin in kafkaconnect to populate arn.
Policies for referencing.
Selector for a CustomPlugin in kafkaconnect to populate arn.
Policies for selection.
Reference to a Role in iam to populate serviceExecutionRoleArn.
Policies for referencing.
Selector for a Role in iam to populate serviceExecutionRoleArn.
Policies for selection.
Specifies which worker configuration to use with the connector. See below.
Reference to a WorkerConfiguration in kafkaconnect to populate arn.
Policies for referencing.
Selector for a WorkerConfiguration in kafkaconnect to populate arn.
Policies for selection.
THIS IS A BETA FIELD. It is on by default but can be opted out of through a Crossplane feature flag. ManagementPolicies specify the array of actions Crossplane is allowed to take on the managed and external resources. This field is planned to replace the DeletionPolicy field in a future release. Currently, both can be set independently, and non-default values are honored if the feature flag is enabled. If both are custom, the DeletionPolicy field is ignored. See the design doc for more information: https://github.com/crossplane/crossplane/blob/499895a25d1a1a0ba1604944ef98ac7a1a71f197/design/design-doc-observe-only-resources.md?plain=1#L223 and this one: https://github.com/crossplane/crossplane/blob/444267e84783136daa93568b364a5f01228cacbe/design/one-pager-ignore-changes.md
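For example, an observe-only Connector can be declared by restricting the policies (a sketch; the default is `["*"]`, i.e. all actions allowed):

```yaml
spec:
  managementPolicies: ["Observe"]   # observe only: never create, update, or delete
```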
ProviderConfigReference specifies how the provider that will be used to create, observe, update, and delete this managed resource should be configured.
Policies for referencing.
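A sketch of pointing the resource at a ProviderConfig by name (the name shown is hypothetical; `default` is the conventional choice):

```yaml
spec:
  providerConfigRef:
    name: default   # hypothetical ProviderConfig name
```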
PublishConnectionDetailsTo specifies the connection secret config, which contains a name, metadata, and a reference to the secret store config to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource.
WriteConnectionSecretToReference specifies the namespace and name of a Secret to which any connection details for this managed resource should be written. Connection details frequently include the endpoint, username, and password required to connect to the managed resource. This field is planned to be replaced in a future release in favor of PublishConnectionDetailsTo. Currently, both can be set independently, and connection details will be published to both without affecting each other.
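A sketch of writing connection details to a Kubernetes Secret (the Secret name is hypothetical; `publishConnectionDetailsTo` is the planned replacement for this field):

```yaml
spec:
  writeConnectionSecretToRef:
    name: connector-conn          # hypothetical Secret name
    namespace: crossplane-system
```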
ConnectorStatus defines the observed state of Connector.
No description provided.
Information about the capacity allocated to the connector. See below.
Information about the auto scaling parameters for the connector. See below.
The scale-in policy for the connector. See below.
The scale-out policy for the connector. See below.
Details about a fixed capacity allocated to a connector. See below.
Specifies which Apache Kafka cluster to connect to. See below.
The Apache Kafka cluster to which the connector is connected.
Details of an Amazon VPC which has network connectivity to the Apache Kafka cluster.
The security groups for the connector.
The subnets for the connector.
Details of the client authentication used by the Apache Kafka cluster. See below.
Details of encryption in transit to the Apache Kafka cluster. See below.
Details about log delivery. See below.
The workers can send worker logs to different destination types. This configuration specifies the details of these destinations. See below.
Details about delivering logs to Amazon Kinesis Data Firehose. See below.
Specifies which plugins to use for the connector. See below.
Conditions of the resource.
connector-nokafka
apiVersion: kafkaconnect.aws.upbound.io/v1beta1
kind: Connector
metadata:
  annotations:
    meta.upbound.io/example-id: kafkaconnect/v1beta1/connector
    uptest.upbound.io/timeout: "2100"
  labels:
    testing.upbound.io/example-name: connector-nokafka
  name: connector-nokafka
spec:
  forProvider:
    capacity:
    - autoscaling:
      - maxWorkerCount: 2
        mcuCount: 1
        minWorkerCount: 1
        scaleInPolicy:
        - cpuUtilizationPercentage: 20
        scaleOutPolicy:
        - cpuUtilizationPercentage: 80
    connectorConfiguration:
      connector.class: org.apache.kafka.connect.file.FileStreamSinkConnector
      tasks.max: "1"
      topics: example
    kafkaCluster:
    - apacheKafkaCluster:
      - bootstrapServers: localhost:9092
        vpc:
        - securityGroupSelector:
            matchLabels:
              testing.upbound.io/example-name: connector-nokafka
          subnetSelector:
            matchLabels:
              testing.upbound.io/example-name: connector-nokafka
    kafkaClusterClientAuthentication:
    - authenticationType: NONE
    kafkaClusterEncryptionInTransit:
    - encryptionType: PLAINTEXT
    kafkaconnectVersion: 2.7.1
    logDelivery:
    - workerLogDelivery:
      - cloudwatchLogs:
        - enabled: true
          logGroupSelector:
            matchLabels:
              testing.upbound.io/example-name: connector-nokafka
        firehose:
        - enabled: false
    name: connector-nokafka
    plugin:
    - customPlugin:
      - arnSelector:
          matchLabels:
            testing.upbound.io/example-name: connector-nokafka
        revision: 1
    region: us-east-2
    serviceExecutionRoleArnSelector:
      matchLabels:
        testing.upbound.io/example-name: connector-nokafka
connector
apiVersion: kafkaconnect.aws.upbound.io/v1beta1
kind: Connector
metadata:
  annotations:
    meta.upbound.io/example-id: kafkaconnect/v1beta1/connector
    upjet.upbound.io/manual-intervention: This resource requires a valid kafka broker connectionString
    uptest.upbound.io/timeout: "7200"
  labels:
    testing.upbound.io/example-name: connector
  name: connector
spec:
  forProvider:
    capacity:
    - autoscaling:
      - maxWorkerCount: 2
        mcuCount: 1
        minWorkerCount: 1
        scaleInPolicy:
        - cpuUtilizationPercentage: 20
        scaleOutPolicy:
        - cpuUtilizationPercentage: 80
    connectorConfiguration:
      connector.class: org.apache.kafka.connect.file.FileStreamSinkConnector
      tasks.max: "1"
      topics: example
    kafkaCluster:
    - apacheKafkaCluster:
      - bootstrapServers: REPLACE-ME plaintext kafka broker string:9092
        vpc:
        - securityGroupSelector:
            matchLabels:
              testing.upbound.io/example-name: connector
          subnetSelector:
            matchLabels:
              testing.upbound.io/example-name: connector
    kafkaClusterClientAuthentication:
    - authenticationType: NONE
    kafkaClusterEncryptionInTransit:
    - encryptionType: PLAINTEXT
    kafkaconnectVersion: 2.7.1
    logDelivery:
    - workerLogDelivery:
      - cloudwatchLogs:
        - enabled: true
          logGroupSelector:
            matchLabels:
              testing.upbound.io/example-name: connector
        firehose:
        - deliveryStreamSelector:
            matchLabels:
              testing.upbound.io/example-name: connector
          enabled: true
    name: connector
    plugin:
    - customPlugin:
      - arnSelector:
          matchLabels:
            testing.upbound.io/example-name: connector
        revision: 1
    region: us-east-2
    serviceExecutionRoleArnSelector:
      matchLabels:
        testing.upbound.io/example-name: connector
    workerConfiguration:
    - arnSelector:
        matchLabels:
          testing.upbound.io/example-name: connector
      revision: 1