Interface SinkTaskContext
Context passed to SinkTasks, allowing them to access utilities in the Kafka Connect runtime.
Method Summary
- Set<TopicPartition> assignment(): Get the current set of assigned TopicPartitions for this task.
- Map<String,String> configs(): Get the Task configuration.
- default ErrantRecordReporter errantRecordReporter(): Get the reporter to which the sink task can report problematic or failed records passed to the SinkTask.put(java.util.Collection) method.
- void offset(Map<TopicPartition,Long> offsets): Reset the consumer offsets for the given topic partitions.
- void offset(TopicPartition tp, long offset): Reset the consumer offsets for the given topic partition.
- void pause(TopicPartition... partitions): Pause consumption of messages from the specified TopicPartitions.
- void requestCommit(): Request an offset commit.
- void resume(TopicPartition... partitions): Resume consumption of messages from previously paused TopicPartitions.
- void timeout(long timeoutMs): Set the timeout in milliseconds.
Method Details
configs
Map<String,String> configs()
Get the Task configuration. This is the latest configuration and may differ from that passed on startup. For example, this method can be used to obtain the latest configuration if an external secret has changed, and the configuration is using variable references such as those compatible with ConfigTransformer.
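For illustration, a minimal sketch of re-reading the latest configuration from the task's context; the connection.password key is hypothetical:

// Re-read the latest task configuration, e.g. after an external secret rotation.
// "connection.password" is a hypothetical config key used only for illustration.
Map<String, String> latest = context.configs();
String password = latest.get("connection.password");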
offset
void offset(Map<TopicPartition,Long> offsets)
Reset the consumer offsets for the given topic partitions. SinkTasks should use this if they manage offsets in the sink data store rather than using Kafka consumer offsets. For example, an HDFS connector might record offsets in HDFS to provide exactly once delivery. When the SinkTask is started or a rebalance occurs, the task would reload offsets from HDFS and use this method to reset the consumer to those offsets.
SinkTasks that do not manage their own offsets do not need to use this method.
- Parameters:
offsets
- map of offsets for topic partitions
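As a sketch, a task that tracks offsets in its own store might rewind the consumer when partitions are assigned; loadOffsetsFromStore is a hypothetical helper that returns the stored offsets:

// In SinkTask.open(), reload offsets recorded in the sink data store and
// rewind the consumer to them. loadOffsetsFromStore(...) is a hypothetical
// helper returning Map<TopicPartition, Long>.
@Override
public void open(Collection<TopicPartition> partitions) {
    Map<TopicPartition, Long> stored = loadOffsetsFromStore(partitions);
    if (!stored.isEmpty()) {
        context.offset(stored);
    }
}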
offset
void offset(TopicPartition tp, long offset)
Reset the consumer offsets for the given topic partition. SinkTasks should use this if they manage offsets in the sink data store rather than using Kafka consumer offsets. For example, an HDFS connector might record offsets in HDFS to provide exactly once delivery. When the topic partition is recovered, the task would reload offsets from HDFS and use this method to reset the consumer to that offset.
SinkTasks that do not manage their own offsets do not need to use this method.
- Parameters:
tp
- the topic partition to reset offset.
offset
- the offset to reset to.
timeout
void timeout(long timeoutMs)
Set the timeout in milliseconds. SinkTasks should use this to indicate that they need to retry certain operations after the timeout: operations on external systems may need to be retried in case of failures. For example, appending a record to an HDFS file may fail due to temporary network issues. SinkTasks can use this method to set how long to wait before retrying.
- Parameters:
timeoutMs
- the backoff timeout in milliseconds.
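A common pattern, sketched here with a hypothetical writeToExternalSystem helper, is to set a backoff and throw a RetriableException so the framework redelivers the same batch:

// On a transient failure, request a backoff and signal that the batch should
// be retried. writeToExternalSystem(...) is a hypothetical helper that may
// throw IOException.
@Override
public void put(Collection<SinkRecord> records) {
    try {
        writeToExternalSystem(records);
    } catch (IOException e) {
        context.timeout(5000);              // wait 5 seconds before retrying
        throw new RetriableException(e);    // the framework will redeliver the batch
    }
}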
assignment
Set<TopicPartition> assignment()
Get the current set of assigned TopicPartitions for this task.
- Returns:
- the set of currently assigned TopicPartitions
pause
void pause(TopicPartition... partitions)
Pause consumption of messages from the specified TopicPartitions.
- Parameters:
partitions
- the partitions which should be paused
resume
void resume(TopicPartition... partitions)
Resume consumption of messages from previously paused TopicPartitions.
- Parameters:
partitions
- the partitions to resume
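For example, a sketch of simple backpressure that pauses all assigned partitions while an internal buffer drains and resumes them afterwards; bufferIsFull and bufferHasDrained are hypothetical checks:

// Pause all assigned partitions while an internal buffer drains, and resume
// them once it has capacity again. bufferIsFull()/bufferHasDrained() are hypothetical.
TopicPartition[] assigned = context.assignment().toArray(new TopicPartition[0]);
if (bufferIsFull()) {
    context.pause(assigned);
} else if (bufferHasDrained()) {
    context.resume(assigned);
}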
requestCommit
void requestCommit()
Request an offset commit. Sink tasks can use this to minimize the potential for redelivery by requesting an offset commit as soon as they flush data to the destination system.
It is only a hint to the runtime and no timing guarantee should be assumed.
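For instance, a task that flushes eagerly might hint at a commit right after a successful flush; flushToDestination is a hypothetical helper:

// Hint that offsets may be committed now that the data is safely in the
// destination system. flushToDestination(...) is a hypothetical helper.
flushToDestination(records);
context.requestCommit();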
errantRecordReporter
default ErrantRecordReporter errantRecordReporter()
Get the reporter to which the sink task can report problematic or failed records passed to the SinkTask.put(java.util.Collection) method. When reporting a failed record, the sink task will receive a Future that the task can optionally use to wait until the failed record and exception have been written to Kafka. Note that the result of this method may be null if this connector has not been configured to use a reporter.
This method was added in Apache Kafka 2.6. Sink tasks that use this method but want to maintain backward compatibility so they can also be deployed to older Connect runtimes should guard the call to this method with a try-catch block, since calling this method will result in a NoSuchMethodError or NoClassDefFoundError when the sink connector is deployed to Connect runtimes older than Kafka 2.6. For example:

ErrantRecordReporter reporter;
try {
    reporter = context.errantRecordReporter();
} catch (NoSuchMethodError | NoClassDefFoundError e) {
    reporter = null;
}
- Returns:
- the reporter; null if no error reporter has been configured for the connector
- Since:
- 2.6
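Once obtained, the reporter can be used inside put() to hand bad records to the connector's configured error handling instead of failing the task; parseAndWrite is a hypothetical helper:

// Report a record that cannot be processed instead of failing the task.
// parseAndWrite(...) is a hypothetical helper; reporter may be null when no
// error reporter has been configured.
for (SinkRecord record : records) {
    try {
        parseAndWrite(record);
    } catch (Exception e) {
        if (reporter != null) {
            reporter.report(record, e);   // returns a Future the task may optionally wait on
        } else {
            throw new ConnectException("Failed to process record", e);
        }
    }
}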