Kafka Connect Configs
Below is the configuration of the Kafka Connect framework.
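For orientation, a minimal distributed worker configuration might look like the following sketch; the broker address, group id, topic names, and converter choice are placeholders to adapt, not recommendations. The individual settings are described in the list below.

    bootstrap.servers=localhost:9092
    group.id=connect-cluster
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    config.storage.topic=connect-configs
    offset.storage.topic=connect-offsets
    status.storage.topic=connect-status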
- config.storage.topic: The name of the Kafka topic where connector configurations are stored
- Type: string
- Default:
- Valid Values:
- Importance: high
- group.id: A unique string that identifies the Connect cluster group this worker belongs to.
- Type: string
- Default:
- Valid Values:
- Importance: high
- key.converter: Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
- Type: class
- Default:
- Valid Values:
- Importance: high
- offset.storage.topic: The name of the Kafka topic where connector offsets are stored
- Type: string
- Default:
- Valid Values:
- Importance: high
- status.storage.topic: The name of the Kafka topic where connector and task status are stored
- Type: string
- Default:
- Valid Values:
- Importance: high
- value.converter: Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
- Type: class
- Default:
- Valid Values:
- Importance: high
- bootstrap.servers: A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
- Type: list
- Default: localhost:9092
- Valid Values:
- Importance: high
- heartbeat.interval.ms: The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.
- Type: int
- Default: 3000
- Valid Values:
- Importance: high
- rebalance.timeout.ms: The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout is exceeded, then the worker will be removed from the group, which will cause offset commit failures.
- Type: int
- Default: 60000
- Valid Values:
- Importance: high
- session.timeout.ms: The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.
- Type: int
- Default: 10000
- Valid Values:
- Importance: high
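To see how the group timeouts above relate, here is an illustrative sketch (not a tuning recommendation): the heartbeat interval stays at or below one third of the session timeout, and the session timeout itself must lie within the broker's group.min.session.timeout.ms/group.max.session.timeout.ms range.

    session.timeout.ms=10000
    heartbeat.interval.ms=3000
    rebalance.timeout.ms=60000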
- ssl.key.password: The password of the private key in the key store file. This is optional for clients.
- Type: password
- Default: null
- Valid Values:
- Importance: high
- ssl.keystore.location: The location of the key store file. This is optional for clients and can be used for two-way authentication.
- Type: string
- Default: null
- Valid Values:
- Importance: high
- ssl.keystore.password: The store password for the key store file. This is optional for clients and only needed if ssl.keystore.location is configured.
- Type: password
- Default: null
- Valid Values:
- Importance: high
- ssl.truststore.location: The location of the trust store file.
- Type: string
- Default: null
- Valid Values:
- Importance: high
- ssl.truststore.password: The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.
- Type: password
- Default: null
- Valid Values:
- Importance: high
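A sketch combining the SSL store settings above; the file paths and passwords are hypothetical placeholders.

    ssl.keystore.location=/var/private/ssl/worker.keystore.jks
    ssl.keystore.password=keystore-secret
    ssl.key.password=key-secret
    ssl.truststore.location=/var/private/ssl/worker.truststore.jks
    ssl.truststore.password=truststore-secret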
- client.dns.lookup: Controls how the client uses DNS lookups. If set to use_all_dns_ips then, when the lookup returns multiple IP addresses for a hostname, all of them will be tried before the connection fails. Applies to both bootstrap and advertised servers. If the value is resolve_canonical_bootstrap_servers_only, each entry will be resolved and expanded into a list of canonical names.
- Type: string
- Default: default
- Valid Values: [default, use_all_dns_ips, resolve_canonical_bootstrap_servers_only]
- Importance: medium
- connections.max.idle.ms: Close idle connections after the number of milliseconds specified by this config.
- Type: long
- Default: 540000
- Valid Values:
- Importance: medium
- connector.client.config.override.policy: Class name or alias of an implementation of ConnectorClientConfigOverridePolicy. Defines which client configurations can be overridden by the connector. The default implementation is `None`. The other possible policies in the framework include `All` and `Principal`.
- Type: string
- Default: None
- Valid Values:
- Importance: medium
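As an illustration of the override policy: with the policy set to All on the worker, a connector may override client settings through the producer.override./consumer.override. prefix convention described in the Connect user guide; the property and value below are examples, not defaults.

    # worker configuration
    connector.client.config.override.policy=All
    # in a sink connector's configuration
    consumer.override.auto.offset.reset=latest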
- receive.buffer.bytes: The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.
- Type: int
- Default: 32768
- Valid Values: [0,...]
- Importance: medium
- request.timeout.ms: The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.
- Type: int
- Default: 40000
- Valid Values: [0,...]
- Importance: medium
- sasl.client.callback.handler.class: The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.
- Type: class
- Default: null
- Valid Values:
- Importance: medium
- sasl.jaas.config: JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: 'loginModuleClass controlFlag (optionName=optionValue)*;'. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;
- Type: password
- Default: null
- Valid Values:
- Importance: medium
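For a Connect worker, which connects to brokers as a client, the value is supplied without the listener prefix. A sketch using the PLAIN mechanism with placeholder credentials:

    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="connect" password="connect-secret";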
- sasl.kerberos.service.name: The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.
- Type: string
- Default: null
- Valid Values:
- Importance: medium
- sasl.login.callback.handler.class: The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler
- Type: class
- Default: null
- Valid Values:
- Importance: medium
- sasl.login.class: The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin
- Type: class
- Default: null
- Valid Values:
- Importance: medium
- sasl.mechanism: SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.
- Type: string
- Default: GSSAPI
- Valid Values:
- Importance: medium
- security.protocol: Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
- Type: string
- Default: PLAINTEXT
- Valid Values:
- Importance: medium
- send.buffer.bytes: The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.
- Type: int
- Default: 131072
- Valid Values: [0,...]
- Importance: medium
- ssl.enabled.protocols: The list of protocols enabled for SSL connections.
- Type: list
- Default: TLSv1.2,TLSv1.1,TLSv1
- Valid Values:
- Importance: medium
- ssl.keystore.type: The file format of the key store file. This is optional for clients.
- Type: string
- Default: JKS
- Valid Values:
- Importance: medium
- ssl.protocol: The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.
- Type: string
- Default: TLS
- Valid Values:
- Importance: medium
- ssl.provider: The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.
- Type: string
- Default: null
- Valid Values:
- Importance: medium
- ssl.truststore.type: The file format of the trust store file.
- Type: string
- Default: JKS
- Valid Values:
- Importance: medium
- worker.sync.timeout.ms: When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining.
- Type: int
- Default: 3000
- Valid Values:
- Importance: medium
- worker.unsync.backoff.ms: When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining.
- Type: int
- Default: 300000
- Valid Values:
- Importance: medium
- access.control.allow.methods: Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD.
- Type: string
- Default: ""
- Valid Values:
- Importance: low
- access.control.allow.origin: Value to set the Access-Control-Allow-Origin header to for REST API requests. To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API.
- Type: string
- Default: ""
- Valid Values:
- Importance: low
- admin.listeners: List of comma-separated URIs the Admin REST API will listen on. The supported protocols are HTTP and HTTPS. An empty or blank string will disable this feature. The default behavior is to use the regular listener (specified by the 'listeners' property).
- Type: list
- Default: null
- Valid Values:
- Importance: low
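For example, to expose the admin endpoints on their own port (the port here is arbitrary): admin.listeners=HTTP://localhost:8084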
- client.id: An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.
- Type: string
- Default: ""
- Valid Values:
- Importance: low
- config.providers: Comma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the interface ConfigProvider allows you to replace variable references in connector configurations, such as for externalized secrets.
- Type: list
- Default: ""
- Valid Values:
- Importance: low
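A sketch of wiring up a config provider and referencing it from a connector configuration. FileConfigProvider ships with Kafka; the file path and property names here are hypothetical.

    # worker configuration
    config.providers=file
    config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
    # in a connector configuration, resolved at runtime
    database.password=${file:/opt/connect/secrets.properties:db.password}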
- config.storage.replication.factor: Replication factor used when creating the configuration storage topic
- Type: short
- Default: 3
- Valid Values: [1,...]
- Importance: low
- connect.protocol: Compatibility mode for Kafka Connect Protocol
- Type: string
- Default: sessioned
- Valid Values: [eager, compatible, sessioned]
- Importance: low
- header.converter: HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.
- Type: class
- Default: org.apache.kafka.connect.storage.SimpleHeaderConverter
- Valid Values:
- Importance: low
- inter.worker.key.generation.algorithm: The algorithm to use for generating internal request keys
- Type: string
- Default: HmacSHA256
- Valid Values: Any KeyGenerator algorithm supported by the worker JVM
- Importance: low
- inter.worker.key.size: The size of the key to use for signing internal requests, in bits. If null, the default key size for the key generation algorithm will be used.
- Type: int
- Default: null
- Valid Values:
- Importance: low
- inter.worker.key.ttl.ms: The TTL of generated session keys used for internal request validation (in milliseconds)
- Type: int
- Default: 3600000
- Valid Values: [0,...,2147483647]
- Importance: low
- inter.worker.signature.algorithm: The algorithm used to sign internal requests
- Type: string
- Default: HmacSHA256
- Valid Values: Any MAC algorithm supported by the worker JVM
- Importance: low
- inter.worker.verification.algorithms: A list of permitted algorithms for verifying internal requests
- Type: list
- Default: HmacSHA256
- Valid Values: A list of one or more MAC algorithms, each supported by the worker JVM
- Importance: low
- internal.key.converter: Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation. Deprecated; will be removed in an upcoming version.
- Type: class
- Default: org.apache.kafka.connect.json.JsonConverter
- Valid Values:
- Importance: low
- internal.value.converter: Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation. Deprecated; will be removed in an upcoming version.
- Type: class
- Default: org.apache.kafka.connect.json.JsonConverter
- Valid Values:
- Importance: low
- listeners: List of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS.
Specify hostname as 0.0.0.0 to bind to all interfaces.
Leave hostname empty to bind to default interface.
Examples of legal listener lists: HTTP://myhost:8083,HTTPS://myhost:8084
- Type: list
- Default: null
- Valid Values:
- Importance: low
- metadata.max.age.ms: The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.
- Type: long
- Default: 300000
- Valid Values: [0,...]
- Importance: low
- metric.reporters: A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.
- Type: list
- Default: ""
- Valid Values:
- Importance: low
- metrics.num.samples: The number of samples maintained to compute metrics.
- Type: int
- Default: 2
- Valid Values: [1,...]
- Importance: low
- metrics.recording.level: The highest recording level for metrics.
- Type: string
- Default: INFO
- Valid Values: [INFO, DEBUG]
- Importance: low
- metrics.sample.window.ms: The window of time a metrics sample is computed over.
- Type: long
- Default: 30000
- Valid Values: [0,...]
- Importance: low
- offset.flush.interval.ms: Interval at which to try committing offsets for tasks.
- Type: long
- Default: 60000
- Valid Values:
- Importance: low
- offset.flush.timeout.ms: Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt.
- Type: long
- Default: 5000
- Valid Values:
- Importance: low
- offset.storage.partitions: The number of partitions used when creating the offset storage topic
- Type: int
- Default: 25
- Valid Values: [1,...]
- Importance: low
- offset.storage.replication.factor: Replication factor used when creating the offset storage topic
- Type: short
- Default: 3
- Valid Values: [1,...]
- Importance: low
- plugin.path: List of paths separated by commas (,) that contain plugins (connectors, converters, transformations). The list should consist of top level directories that include any combination of:
a) directories immediately containing jars with plugins and their dependencies
b) uber-jars with plugins and their dependencies
c) directories immediately containing the package directory structure of classes of plugins and their dependencies
Note: symlinks will be followed to discover dependencies or plugins.
Examples: plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors
- Type: list
- Default: null
- Valid Values:
- Importance: low
- reconnect.backoff.max.ms: The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.
- Type: long
- Default: 1000
- Valid Values: [0,...]
- Importance: low
- reconnect.backoff.ms: The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.
- Type: long
- Default: 50
- Valid Values: [0,...]
- Importance: low
- rest.advertised.host.name: If this is set, this is the hostname that will be given out to other workers to connect to.
- Type: string
- Default: null
- Valid Values:
- Importance: low
- rest.advertised.listener: Sets the advertised listener (HTTP or HTTPS) which will be given to other workers to use.
- Type: string
- Default: null
- Valid Values:
- Importance: low
- rest.advertised.port: If this is set, this is the port that will be given out to other workers to connect to.
- Type: int
- Default: null
- Valid Values:
- Importance: low
- rest.extension.classes: Comma-separated names of ConnectRestExtension classes, loaded and called in the order specified. Implementing the ConnectRestExtension interface allows you to inject user-defined resources, such as filters, into Connect's REST API. Typically used to add custom capabilities like logging, security, etc.
- Type: list
- Default: ""
- Valid Values:
- Importance: low
- rest.host.name: Hostname for the REST API. If this is set, it will only bind to this interface.
- Type: string
- Default: null
- Valid Values:
- Importance: low
- rest.port: Port for the REST API to listen on.
- Type: int
- Default: 8083
- Valid Values:
- Importance: low
- retry.backoff.ms: The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.
- Type: long
- Default: 100
- Valid Values: [0,...]
- Importance: low
- sasl.kerberos.kinit.cmd: Kerberos kinit command path.
- Type: string
- Default: /usr/bin/kinit
- Valid Values:
- Importance: low
- sasl.kerberos.min.time.before.relogin: Login thread sleep time between refresh attempts.
- Type: long
- Default: 60000
- Valid Values:
- Importance: low
- sasl.kerberos.ticket.renew.jitter: Percentage of random jitter added to the renewal time.
- Type: double
- Default: 0.05
- Valid Values:
- Importance: low
- sasl.kerberos.ticket.renew.window.factor: Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.
- Type: double
- Default: 0.8
- Valid Values:
- Importance: low
- sasl.login.refresh.buffer.seconds: The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.
- Type: short
- Default: 300
- Valid Values: [0,...,3600]
- Importance: low
- sasl.login.refresh.min.period.seconds: The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.
- Type: short
- Default: 60
- Valid Values: [0,...,900]
- Importance: low
- sasl.login.refresh.window.factor: Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.
- Type: double
- Default: 0.8
- Valid Values: [0.5,...,1.0]
- Importance: low
- sasl.login.refresh.window.jitter: The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.
- Type: double
- Default: 0.05
- Valid Values: [0.0,...,0.25]
- Importance: low
- scheduled.rebalance.max.delay.ms: The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned
- Type: int
- Default: 300000
- Valid Values: [0,...,2147483647]
- Importance: low
- ssl.cipher.suites: A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.
- Type: list
- Default: null
- Valid Values:
- Importance: low
- ssl.client.auth: Configures the Kafka broker to request client authentication. The following settings are common:
- ssl.client.auth=required If set to required, client authentication is required.
- ssl.client.auth=requested This means client authentication is optional. Unlike required, if this option is set the client can choose not to provide authentication information about itself.
- ssl.client.auth=none This means client authentication is not needed.
- Type: string
- Default: none
- Valid Values:
- Importance: low
- ssl.endpoint.identification.algorithm: The endpoint identification algorithm to validate server hostname using server certificate.
- Type: string
- Default: https
- Valid Values:
- Importance: low
- ssl.keymanager.algorithm: The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.
- Type: string
- Default: SunX509
- Valid Values:
- Importance: low
- ssl.secure.random.implementation: The SecureRandom PRNG implementation to use for SSL cryptography operations.
- Type: string
- Default: null
- Valid Values:
- Importance: low
- ssl.trustmanager.algorithm: The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.
- Type: string
- Default: PKIX
- Valid Values:
- Importance: low
- status.storage.partitions: The number of partitions used when creating the status storage topic
- Type: int
- Default: 5
- Valid Values: [1,...]
- Importance: low
- status.storage.replication.factor: Replication factor used when creating the status storage topic
- Type: short
- Default: 3
- Valid Values: [1,...]
- Importance: low
- task.shutdown.graceful.timeout.ms: Amount of time to wait for tasks to shut down gracefully. This is the total amount of time, not per task. Shutdown is triggered for all tasks, and then they are waited on sequentially.
- Type: long
- Default: 5000
- Valid Values:
- Importance: low
Source Connector Configs
Below is the configuration of a source connector.
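For orientation, a minimal source connector configuration (standalone properties form) might look like the sketch below; note that file and topic are properties of the bundled FileStreamSource connector itself, not of the framework, and all values are placeholders.

    name=local-file-source
    connector.class=FileStreamSource
    tasks.max=1
    file=/tmp/input.txt
    topic=connect-test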
- name: Globally unique name to use for this connector.
- Type: string
- Default:
- Valid Values: non-empty string without ISO control characters
- Importance: high
- connector.class: Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name, or use "FileStreamSink" or "FileStreamSinkConnector" to make the configuration a bit shorter
- Type: string
- Default:
- Valid Values:
- Importance: high
- tasks.max: Maximum number of tasks to use for this connector.
- Type: int
- Default: 1
- Valid Values: [1,...]
- Importance: high
- key.converter: Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
- Type: class
- Default: null
- Valid Values:
- Importance: low
- value.converter: Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
- Type: class
- Default: null
- Valid Values:
- Importance: low
- header.converter: HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.
- Type: class
- Default: null
- Valid Values:
- Importance: low
- config.action.reload: The action that Connect should take on the connector when changes in external configuration providers result in a change in the connector's configuration properties. A value of 'none' indicates that Connect will do nothing. A value of 'restart' indicates that Connect should restart/reload the connector with the updated configuration properties. The restart may actually be scheduled in the future if the external configuration provider indicates that a configuration value will expire in the future.
- Type: string
- Default: restart
- Valid Values: [none, restart]
- Importance: low
- transforms: Aliases for the transformations to be applied to records.
- Type: list
- Default: ""
- Valid Values: non-null string, unique transformation aliases
- Importance: low
- errors.retry.timeout: The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be attempted. Use -1 for infinite retries.
- Type: long
- Default: 0
- Valid Values:
- Importance: medium
- errors.retry.delay.max.ms: The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reached to prevent thundering herd issues.
- Type: long
- Default: 60000
- Valid Values:
- Importance: medium
- errors.tolerance: Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in an immediate connector task failure; 'all' changes the behavior to skip over problematic records.
- Type: string
- Default: none
- Valid Values: [none, all]
- Importance: medium
- errors.log.enable: If true, write each error and the details of the failed operation and problematic record to the Connect application log. This is 'false' by default, so that only errors that are not tolerated are reported.
- Type: boolean
- Default: false
- Valid Values:
- Importance: medium
- errors.log.include.messages: Whether to include in the log the Connect record that resulted in a failure. This is 'false' by default, which will prevent record keys, values, and headers from being written to log files, although some information such as topic and partition number will still be logged.
- Type: boolean
- Default: false
- Valid Values:
- Importance: medium
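Taken together, the errors.* settings above might be combined as in this sketch, which retries a failed operation for up to ten minutes, skips records that still fail, and logs full failure details; the values are illustrative.

    errors.retry.timeout=600000
    errors.retry.delay.max.ms=30000
    errors.tolerance=all
    errors.log.enable=true
    errors.log.include.messages=true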
Sink Connector Configs
Below is the configuration of a sink connector.
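A minimal sink connector configuration, as a sketch with placeholder values; file is specific to the bundled FileStreamSink connector.

    name=local-file-sink
    connector.class=FileStreamSink
    tasks.max=1
    topics=connect-test
    file=/tmp/output.txt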
- name: Globally unique name to use for this connector.
- Type: string
- Default:
- Valid Values: non-empty string without ISO control characters
- Importance: high
- connector.class: Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name, or use "FileStreamSink" or "FileStreamSinkConnector" to make the configuration a bit shorter
- Type: string
- Default:
- Valid Values:
- Importance: high
- tasks.max: Maximum number of tasks to use for this connector.
- Type: int
- Default: 1
- Valid Values: [1,...]
- Importance: high
- topics: List of topics to consume, separated by commas
- Type: list
- Default: ""
- Valid Values:
- Importance: high
- topics.regex: Regular expression giving topics to consume. Under the hood, the regex is compiled to a java.util.regex.Pattern. Only one of topics or topics.regex should be specified.
- Type: string
- Default: ""
- Valid Values: valid regex
- Importance: high
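For example, a sink could subscribe to every topic matching a prefix with topics.regex=orders-.* (pattern illustrative); in that case the topics property must be left unset.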
- key.converter: Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
- Type: class
- Default: null
- Valid Values:
- Importance: low
- value.converter: Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
- Type: class
- Default: null
- Valid Values:
- Importance: low
- header.converter: HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.
- Type: class
- Default: null
- Valid Values:
- Importance: low
- config.action.reload: The action that Connect should take on the connector when changes in external configuration providers result in a change in the connector's configuration properties. A value of 'none' indicates that Connect will do nothing. A value of 'restart' indicates that Connect should restart/reload the connector with the updated configuration properties. The restart may actually be scheduled in the future if the external configuration provider indicates that a configuration value will expire in the future.
- Type: string
- Default: restart
- Valid Values: [none, restart]
- Importance: low
- transforms: Aliases for the transformations to be applied to records.
- Type: list
- Default: ""
- Valid Values: non-null string, unique transformation aliases
- Importance: low
- errors.retry.timeout: The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be attempted. Use -1 for infinite retries.
- Type: long
- Default: 0
- Valid Values:
- Importance: medium
- errors.retry.delay.max.ms: The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reached to prevent thundering herd issues.
- Type: long
- Default: 60000
- Valid Values:
- Importance: medium
- errors.tolerance: Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in an immediate connector task failure; 'all' changes the behavior to skip over problematic records.
- Type: string
- Default: none
- Valid Values: [none, all]
- Importance: medium
- errors.log.enable: If true, write each error and the details of the failed operation and problematic record to the Connect application log. This is 'false' by default, so that only errors that are not tolerated are reported.
- Type: boolean
- Default: false
- Valid Values:
- Importance: medium
- errors.log.include.messages: Whether to include in the log the Connect record that resulted in a failure. This is 'false' by default, which will prevent record keys, values, and headers from being written to log files, although some information such as topic and partition number will still be logged.
- Type: boolean
- Default: false
- Valid Values:
- Importance: medium
- errors.deadletterqueue.topic.name: The name of the topic to be used as the dead letter queue (DLQ) for messages that result in an error when processed by this sink connector, or its transformations or converters. The topic name is blank by default, which means that no messages are to be recorded in the DLQ.
- Type: string
- Default: ""
- Valid Values:
- Importance: medium
- errors.deadletterqueue.topic.replication.factor: Replication factor used to create the dead letter queue topic when it doesn't already exist.
- Type: short
- Default: 3
- Valid Values:
- Importance: medium
- errors.deadletterqueue.context.headers.enable: If true, add headers containing error context to the messages written to the dead letter queue. To avoid clashing with headers from the original record, all error context header keys will start with __connect.errors.
- Type: boolean
- Default: false
- Valid Values:
- Importance: medium
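Putting the dead letter queue settings together, a sketch with a hypothetical topic name; a replication factor of 1 is only sensible for single-broker test clusters.

    errors.tolerance=all
    errors.deadletterqueue.topic.name=dlq-file-sink
    errors.deadletterqueue.topic.replication.factor=1
    errors.deadletterqueue.context.headers.enable=true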