@InterfaceStability.Unstable public class KafkaConsumer<K,V> extends java.lang.Object implements Consumer<K,V>
A Kafka client that consumes records from a Kafka cluster. It will transparently handle the failure of servers in the Kafka cluster, and transparently adapt as partitions of data it fetches migrate within the cluster. This client also interacts with the server to allow groups of consumers to load balance consumption using consumer groups (as described below).
The consumer maintains TCP connections to the necessary brokers to fetch data. Failure to close the consumer after use will leak these connections. The consumer is not thread-safe. See Multi-threaded Processing for more details.
The position of the consumer gives the offset of the next record that will be given out. It will be one larger than the highest offset the consumer has seen in that partition. It advances automatically every time the consumer receives messages from a call to poll(long).
The committed position is the last offset that has been stored securely. Should the process fail and restart, this is the offset that it will recover to. The consumer can either automatically commit offsets periodically, or it can choose to control this committed position manually by calling commitSync, which will block until the offsets have been successfully committed or a fatal error has happened during the commit process, or commitAsync, which is non-blocking and will trigger an OffsetCommitCallback upon either a successful commit or a fatal failure.
This distinction gives the consumer control over when a record is considered consumed. It is discussed in further detail below.
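As an illustrative sketch of the asynchronous variant, a callback can be passed to commitAsync to observe the outcome of a non-blocking commit; the error handling shown here is an assumption for the example, not mandated by the API:

    consumer.commitAsync(new OffsetCommitCallback() {
        @Override
        public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
            if (exception != null) {
                // The commit failed fatally; how to react is application-specific.
                System.err.println("Commit of " + offsets + " failed: " + exception);
            }
        }
    });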
Each Kafka consumer is able to configure a consumer group that it belongs to, and can dynamically set the list of topics it wants to subscribe to through subscribe(List, ConsumerRebalanceListener), or subscribe to all topics matching a certain pattern through subscribe(Pattern, ConsumerRebalanceListener).
Kafka will deliver each message in the
subscribed topics to one process in each consumer group. This is achieved by balancing the partitions in the topic
over the consumer processes in each group. So if there is a topic with four partitions, and a consumer group with two
processes, each process would consume from two partitions. This group membership is maintained dynamically: if a
process fails the partitions assigned to it will be reassigned to other processes in the same group, and if a new
process joins the group, partitions will be moved from existing consumers to this new process.
So if two processes subscribe to a topic both specifying different groups they will each get all the records in that topic; if they both specify the same group they will each get about half the records.
Conceptually you can think of a consumer group as being a single logical subscriber that happens to be made up of multiple processes. As a multi-subscriber system, Kafka naturally supports having any number of consumer groups for a given topic without duplicating data (additional consumers are actually quite cheap).
This is a slight generalization of the functionality that is common in messaging systems. To get semantics similar to a queue in a traditional messaging system all processes would be part of a single consumer group and hence record delivery would be balanced over the group like with a queue. Unlike a traditional messaging system, though, you can have multiple such groups. To get semantics similar to pub-sub in a traditional messaging system each process would have its own consumer group, so each process would subscribe to all the records published to the topic.
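As a minimal sketch of these two configurations (the group names are invented for illustration, and processId stands in for an assumed per-process identifier):

    // Queue-like semantics: every process shares one group, so records are
    // balanced across the processes.
    props.put("group.id", "shared-workers");          // illustrative group name

    // Pub-sub semantics: each process uses a unique group, so every process
    // receives all records published to the topic.
    props.put("group.id", "subscriber-" + processId); // processId is an assumed identifier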
In addition, when group reassignment happens automatically, consumers can be notified through ConsumerRebalanceListener, which allows them to finish necessary application-level logic such as state cleanup, manual offset commits (note that offsets are always committed for a given consumer group), etc. See Storing Offsets Outside Kafka for more details.
It is also possible for the consumer to manually specify the partitions that are assigned to it through assign(List), which disables this dynamic partition assignment.
This example demonstrates a simple usage of Kafka's consumer API that relies on automatic offset committing.

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "test");
    props.put("enable.auto.commit", "true");
    props.put("auto.commit.interval.ms", "1000");
    props.put("session.timeout.ms", "30000");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Arrays.asList("foo", "bar"));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records)
            System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
    }

Setting enable.auto.commit means that offsets are committed automatically with a frequency controlled by the config auto.commit.interval.ms.
The connection to the cluster is bootstrapped by specifying a list of one or more brokers to contact using the
configuration bootstrap.servers
. This list is just used to discover the rest of the brokers in the
cluster and need not be an exhaustive list of servers in the cluster (though you may want to specify more than one in
case there are servers down when the client is connecting).
In this example the client is subscribing to the topics foo and bar as part of a group of consumers called test as described above.
The broker will automatically detect failed processes in the test group by using a heartbeat mechanism. The
consumer will automatically ping the cluster periodically, which lets the cluster know that it is alive. As long as
the consumer is able to do this it is considered alive and retains the right to consume from the partitions assigned
to it. If it stops heartbeating for a period of time longer than session.timeout.ms
then it will be
considered dead and its partitions will be assigned to another process.
The deserializer settings specify how to turn bytes into objects. For example, by specifying string deserializers, we are saying that our record's key and value will just be simple strings.
Instead of relying on the consumer to periodically commit consumed offsets, users can also control when records should be considered as consumed and hence commit their offsets.

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "test");
    props.put("enable.auto.commit", "false");
    props.put("auto.commit.interval.ms", "1000");
    props.put("session.timeout.ms", "30000");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Arrays.asList("foo", "bar"));
    final int minBatchSize = 200;
    List<ConsumerRecord<String, String>> buffer = new ArrayList<>();
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            buffer.add(record);
        }
        if (buffer.size() >= minBatchSize) {
            insertIntoDb(buffer);
            consumer.commitSync();
            buffer.clear();
        }
    }

The above example uses commitSync to mark all received messages as committed. In some cases you may wish to have even finer control over which messages have been committed by specifying an offset explicitly. In the example below we commit the offset after we finish handling the messages in each partition.
    try {
        while (running) {
            ConsumerRecords<String, String> records = consumer.poll(Long.MAX_VALUE);
            for (TopicPartition partition : records.partitions()) {
                List<ConsumerRecord<String, String>> partitionRecords = records.records(partition);
                for (ConsumerRecord<String, String> record : partitionRecords) {
                    System.out.println(record.offset() + ": " + record.value());
                }
                long lastOffset = partitionRecords.get(partitionRecords.size() - 1).offset();
                consumer.commitSync(Collections.singletonMap(partition, new OffsetAndMetadata(lastOffset + 1)));
            }
        }
    } finally {
        consumer.close();
    }

Note: The committed offset should always be the offset of the next message that your application will read. Thus, when calling commitSync(offsets) you should add one to the offset of the last message processed.
In this mode the consumer will just get the partitions it subscribes to and if the consumer instance fails no attempt will be made to rebalance partitions to other instances.
There are several cases where this makes sense:
- The first case is if the process is maintaining some kind of local state associated with that partition (like a local on-disk key-value store) and hence it should only get records for the partition it is maintaining on disk.
- Another case is if the process itself is highly available and will be restarted if it fails (perhaps using a cluster management framework like YARN, Mesos, or AWS facilities, or as part of a stream processing framework). In this case there is no need for Kafka to detect the failure and reassign the partition, since the consuming process will simply be restarted on another machine.
This mode is easy to specify: rather than subscribing to the topic, the consumer simply assigns itself particular partitions:
    String topic = "foo";
    TopicPartition partition0 = new TopicPartition(topic, 0);
    TopicPartition partition1 = new TopicPartition(topic, 1);
    consumer.assign(Arrays.asList(partition0, partition1));

The group that the consumer specifies is still used for committing offsets, but now the set of partitions will only be changed if the consumer specifies new partitions, and no attempt at failure detection will be made.
It isn't possible to mix both subscription to specific partitions (with no load balancing) and to topics (with load balancing) using the same consumer instance.
The consumer application need not use Kafka's built-in offset storage; it can store offsets in a store of its own choosing. The primary use case for this is allowing the application to store both the offset and the results of the consumption in the same system in a way that both the results and offsets are stored atomically. When this is possible, it will make the consumption fully atomic and give "exactly once" semantics that are stronger than the default "at-least once" semantics you get with Kafka's offset commit functionality. Here are a couple of examples of this type of usage:
- If the results of the consumption are being stored in a relational database, storing the offset in the database as well can allow committing both the results and offset in a single transaction. Thus either the transaction will succeed and the offset will be updated based on what was consumed, or the result will not be stored and the offset won't be updated.
- If the results are being stored in a local store it may be possible to store the offset there as well. For example, a search index could be built by subscribing to a particular partition and storing both the offset and the indexed data together. If this is done atomically, then even if a crash causes unsynced data to be lost, whatever is left has the corresponding offset stored as well, and the indexing process simply resumes from the offset it has, ensuring that no updates are lost.
Each record comes with its own offset, so to manage your own offset you just need to do the following (a sketch follows this list):
- Configure enable.auto.commit=false
- Use the offset provided with each ConsumerRecord to save your position.
- On restart restore the position of the consumer using seek(TopicPartition, long).
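A minimal sketch of these steps, assuming a single manually assigned partition; readOffsetFromStore, saveOffsetInStore, and process are hypothetical helpers standing in for the application's own atomic store and processing logic:

    props.put("enable.auto.commit", "false");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    TopicPartition partition = new TopicPartition("foo", 0);
    consumer.assign(Arrays.asList(partition));

    // On restart, restore the position from the application's own store.
    consumer.seek(partition, readOffsetFromStore(partition));   // hypothetical helper

    while (running) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            process(record);                                    // hypothetical processing step
            // Save the position as the offset of the NEXT record to read.
            saveOffsetInStore(partition, record.offset() + 1);  // hypothetical helper
        }
    }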
This type of usage is simplest when the partition assignment is also done manually (this would be likely in the search index use case described above). If the partition assignment is done automatically, special care is needed to handle the case where partition assignments change. This can be done by providing a ConsumerRebalanceListener instance in the call to subscribe(List, ConsumerRebalanceListener) or subscribe(Pattern, ConsumerRebalanceListener). For example, when partitions are taken from a consumer, the consumer will want to commit its offset for those partitions by implementing ConsumerRebalanceListener.onPartitionsRevoked(Collection). When partitions are assigned to a consumer, the consumer will want to look up the offset for those new partitions and correctly initialize the consumer to that position by implementing ConsumerRebalanceListener.onPartitionsAssigned(Collection).
Another common use for ConsumerRebalanceListener is to flush any caches the application maintains for partitions that are moved elsewhere.
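A sketch of such a listener, where saveOffsetInStore and readOffsetFromStore are again hypothetical helpers for the application's own offset store:

    consumer.subscribe(Arrays.asList("foo"), new ConsumerRebalanceListener() {
        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            // Persist positions before the partitions are handed to other consumers.
            for (TopicPartition partition : partitions)
                saveOffsetInStore(partition, consumer.position(partition)); // hypothetical helper
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            // Initialize each newly assigned partition to its stored position.
            for (TopicPartition partition : partitions)
                consumer.seek(partition, readOffsetFromStore(partition));   // hypothetical helper
        }
    });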
There are several instances where manually controlling the consumer's position can be useful.
One case is time-sensitive record processing: for a consumer that falls far enough behind, it may make sense not to attempt to catch up by processing all records, but rather to skip ahead to the most recent records.
Another use case is for a system that maintains local state as described in the previous section. In such a system the consumer will want to initialize its position on start-up to whatever is contained in the local store. Likewise if the local state is destroyed (say because the disk is lost) the state may be recreated on a new machine by re-consuming all the data and recreating the state (assuming that Kafka is retaining sufficient history).
Kafka allows specifying the position using seek(TopicPartition, long). Special methods for seeking to the earliest and latest offset the server maintains are also available (seekToBeginning(TopicPartition...) and seekToEnd(TopicPartition...) respectively).
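For instance, a consumer that decides it has fallen too far behind might jump to the most recent records; in this sketch isTooFarBehind is an assumed application-level check, not part of the API:

    TopicPartition partition = new TopicPartition("foo", 0);
    if (isTooFarBehind(partition))    // hypothetical lag check
        consumer.seekToEnd(partition);
    // The new position takes effect on the next poll(long).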
If a consumer is assigned multiple partitions to fetch data from, it will try to consume from all of them at the same time, effectively giving these partitions the same priority for consumption. However, in some cases the consumer may want to first focus on fetching from some subset of the assigned partitions at full speed, and only start fetching other partitions when these partitions have few or no data to consume.
One such case is stream processing, where the processor fetches from two topics and performs a join on these two streams. When one of the topics lags far behind the other, the processor may want to pause fetching from the topic that is ahead in order to let the lagging stream catch up. Another example is bootstrapping upon consumer start-up, where there is a lot of historical data to catch up on; the application usually wants to get the latest data on some of the topics before it considers fetching the other topics.
Kafka supports dynamically controlling consumption flows by using pause(TopicPartition...) and resume(TopicPartition...) to pause consumption on the specified assigned partitions and resume consumption on the specified paused partitions, respectively, in future poll(long) calls.
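A sketch of the stream-join case above, assuming both partitions are already assigned; the topic names, isCaughtUp, and processRecords are illustrative assumptions:

    TopicPartition ahead = new TopicPartition("fast-topic", 0);   // illustrative names
    TopicPartition behind = new TopicPartition("slow-topic", 0);

    // Stop fetching from the partition that is ahead so the other can catch up.
    consumer.pause(ahead);
    while (!isCaughtUp(behind))                 // hypothetical application-level check
        processRecords(consumer.poll(100));     // hypothetical processing step
    consumer.resume(ahead);                     // later polls fetch from both again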
The Kafka consumer is not thread-safe. All network I/O happens in the thread of the application making the call. It is the responsibility of the user to ensure that multi-threaded access is properly synchronized. Un-synchronized access will result in ConcurrentModificationException.
The only exception to this rule is wakeup(), which can safely be used from an external thread to interrupt an active operation. In this case, a WakeupException will be thrown from the thread blocking on the operation. This can be used to shut down the consumer from another thread.
The following snippet shows the typical pattern:
    public class KafkaConsumerRunner implements Runnable {
        private final AtomicBoolean closed = new AtomicBoolean(false);
        private final KafkaConsumer consumer;

        public void run() {
            try {
                consumer.subscribe(Arrays.asList("topic"));
                while (!closed.get()) {
                    ConsumerRecords records = consumer.poll(10000);
                    // Handle new records
                }
            } catch (WakeupException e) {
                // Ignore exception if closing
                if (!closed.get()) throw e;
            } finally {
                consumer.close();
            }
        }

        // Shutdown hook which can be called from a separate thread
        public void shutdown() {
            closed.set(true);
            consumer.wakeup();
        }
    }

Then in a separate thread, the consumer can be shut down by setting the closed flag and waking up the consumer.

    closed.set(true);
    consumer.wakeup();
We have intentionally avoided implementing a particular threading model for processing. This leaves several options for implementing multi-threaded processing of records.

1. One Consumer Per Thread

A simple option is to give each thread its own consumer instance. Here are the pros and cons of this approach:
- PRO: It is the easiest to implement.
- PRO: It is often the fastest as no inter-thread co-ordination is needed.
- PRO: It makes in-order processing on a per-partition basis very easy to implement (each thread just processes messages in the order it receives them).
- CON: More consumers means more TCP connections to the cluster (one per thread). In general Kafka handles connections very efficiently so this is generally a small cost.
- CON: Multiple consumers means more requests being sent to the server and slightly less batching of data, which can cause some drop in I/O throughput.
- CON: The number of total threads across all processes will be limited by the total number of partitions.

2. Decouple Consumption and Processing

Another alternative is to have one or more consumer threads that do all data consumption and hand off ConsumerRecords instances to a blocking queue consumed by a pool of processor threads that actually handle the record processing. This option likewise has pros and cons (a sketch of the hand-off follows this list):
- PRO: This option allows independently scaling the number of consumers and processors. This makes it possible to have a single consumer that feeds many processor threads, avoiding any limitation on partitions.
- CON: Guaranteeing order across the processors requires particular care as the threads will execute independently. An earlier chunk of data may actually be processed after a later chunk of data just due to the luck of thread execution timing. For processing that has no ordering requirements this is not a problem.
- CON: Manually committing the position becomes harder as it requires that all threads co-ordinate to ensure that processing is complete for that partition.
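A minimal sketch of this hand-off, in which the queue capacity, thread count, and process helper are illustrative choices rather than recommendations:

    final BlockingQueue<ConsumerRecords<String, String>> queue = new ArrayBlockingQueue<>(10);
    ExecutorService workers = Executors.newFixedThreadPool(4);
    for (int i = 0; i < 4; i++) {
        workers.submit(new Runnable() {
            public void run() {
                try {
                    while (true)
                        process(queue.take());   // hypothetical processing step
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
    }

    // The consumer itself stays on a single thread, as KafkaConsumer requires.
    try {
        while (true)
            queue.put(consumer.poll(100));
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }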
| Constructor and Description |
|---|
| KafkaConsumer(java.util.Map<java.lang.String,java.lang.Object> configs) A consumer is instantiated by providing a set of key-value pairs as configuration. |
| KafkaConsumer(java.util.Map<java.lang.String,java.lang.Object> configs, Deserializer<K> keyDeserializer, Deserializer<V> valueDeserializer) A consumer is instantiated by providing a set of key-value pairs as configuration, and a key and a value Deserializer. |
| KafkaConsumer(java.util.Properties properties) A consumer is instantiated by providing a Properties object as configuration. |
| KafkaConsumer(java.util.Properties properties, Deserializer<K> keyDeserializer, Deserializer<V> valueDeserializer) A consumer is instantiated by providing a Properties object as configuration, and a key and a value Deserializer. |
| Modifier and Type | Method and Description |
|---|---|
| void | assign(java.util.List<TopicPartition> partitions) Manually assign a list of partitions to this consumer. |
| java.util.Set<TopicPartition> | assignment() Get the set of partitions currently assigned to this consumer. |
| void | close() Close the consumer, waiting indefinitely for any needed cleanup. |
| void | commitAsync() Commit offsets returned on the last poll() for all the subscribed list of topics and partitions. |
| void | commitAsync(java.util.Map<TopicPartition,OffsetAndMetadata> offsets, OffsetCommitCallback callback) Commit the specified offsets for the specified list of topics and partitions to Kafka. |
| void | commitAsync(OffsetCommitCallback callback) Commit offsets returned on the last poll() for the subscribed list of topics and partitions. |
| void | commitSync() Commit offsets returned on the last poll() for all the subscribed list of topics and partitions. |
| void | commitSync(java.util.Map<TopicPartition,OffsetAndMetadata> offsets) Commit the specified offsets for the specified list of topics and partitions. |
| OffsetAndMetadata | committed(TopicPartition partition) Get the last committed offset for the given partition (whether the commit happened by this process or another). |
| java.util.Map<java.lang.String,java.util.List<PartitionInfo>> | listTopics() Get metadata about partitions for all topics that the user is authorized to view. |
| java.util.Map<MetricName,? extends Metric> | metrics() Get the metrics kept by the consumer. |
| java.util.List<PartitionInfo> | partitionsFor(java.lang.String topic) Get metadata about the partitions for a given topic. |
| void | pause(TopicPartition... partitions) Suspend fetching from the requested partitions. |
| ConsumerRecords<K,V> | poll(long timeout) Fetch data for the topics or partitions specified using one of the subscribe/assign APIs. |
| long | position(TopicPartition partition) Get the offset of the next record that will be fetched (if a record with that offset exists). |
| void | resume(TopicPartition... partitions) Resume specified partitions which have been paused with pause(TopicPartition...). |
| void | seek(TopicPartition partition, long offset) Overrides the fetch offsets that the consumer will use on the next poll(timeout). |
| void | seekToBeginning(TopicPartition... partitions) Seek to the first offset for each of the given partitions. |
| void | seekToEnd(TopicPartition... partitions) Seek to the last offset for each of the given partitions. |
| void | subscribe(java.util.List<java.lang.String> topics) Subscribe to the given list of topics to get dynamically assigned partitions. |
| void | subscribe(java.util.List<java.lang.String> topics, ConsumerRebalanceListener listener) Subscribe to the given list of topics to get dynamically assigned partitions. |
| void | subscribe(java.util.regex.Pattern pattern, ConsumerRebalanceListener listener) Subscribe to all topics matching specified pattern to get dynamically assigned partitions. |
| java.util.Set<java.lang.String> | subscription() Get the current subscription. |
| void | unsubscribe() Unsubscribe from topics currently subscribed with subscribe(List). |
| void | wakeup() Wakeup the consumer. |
public KafkaConsumer(java.util.Map<java.lang.String,java.lang.Object> configs)
A consumer is instantiated by providing a set of key-value pairs as configuration. Valid configuration strings are documented at ConsumerConfig.
Parameters:
configs - The consumer configs

public KafkaConsumer(java.util.Map<java.lang.String,java.lang.Object> configs, Deserializer<K> keyDeserializer, Deserializer<V> valueDeserializer)
A consumer is instantiated by providing a set of key-value pairs as configuration, and a key and a value Deserializer. Valid configuration strings are documented at ConsumerConfig.
Parameters:
configs - The consumer configs
keyDeserializer - The deserializer for key that implements Deserializer. The configure() method won't be called in the consumer when the deserializer is passed in directly.
valueDeserializer - The deserializer for value that implements Deserializer. The configure() method won't be called in the consumer when the deserializer is passed in directly.

public KafkaConsumer(java.util.Properties properties)
A consumer is instantiated by providing a Properties object as configuration. Valid configuration strings are documented at ConsumerConfig.
Parameters:
properties - The consumer configuration properties

public KafkaConsumer(java.util.Properties properties, Deserializer<K> keyDeserializer, Deserializer<V> valueDeserializer)
A consumer is instantiated by providing a Properties object as configuration, and a key and a value Deserializer. Valid configuration strings are documented at ConsumerConfig.
Parameters:
properties - The consumer configuration properties
keyDeserializer - The deserializer for key that implements Deserializer. The configure() method won't be called in the consumer when the deserializer is passed in directly.
valueDeserializer - The deserializer for value that implements Deserializer. The configure() method won't be called in the consumer when the deserializer is passed in directly.

public java.util.Set<TopicPartition> assignment()
Get the set of partitions currently assigned to this consumer. If partitions were directly assigned using assign(List) then this will simply return the same partitions that were assigned. If topic subscription was used, then this will give the set of topic partitions currently assigned to the consumer (which may be none if the assignment hasn't happened yet, or the partitions are in the process of getting reassigned).
Specified by:
assignment in interface Consumer<K,V>
See Also:
assignment()
public java.util.Set<java.lang.String> subscription()
Get the current subscription. Will return the same topics used in the most recent call to subscribe(List, ConsumerRebalanceListener), or an empty set if no such call has been made.
Specified by:
subscription in interface Consumer<K,V>
See Also:
subscription()
public void subscribe(java.util.List<java.lang.String> topics, ConsumerRebalanceListener listener)
Subscribe to the given list of topics to get dynamically assigned partitions. Topic subscriptions are not incremental; this list will replace the current assignment (if there is one). Note that it is not possible to combine topic subscription with group management with manual partition assignment through assign(List).
If the given list of topics is empty, it is treated the same as unsubscribe().
As part of group management, the consumer will keep track of the list of consumers that belong to a particular group and will trigger a rebalance operation if one of the following events trigger:
- Number of partitions change for any of the subscribed list of topics
- Topic is created or deleted
- An existing member of the consumer group dies
- A new member is added to an existing consumer group via the join API
When any of these events are triggered, the provided listener will be invoked first to indicate that the consumer's assignment has been revoked, and then again when the new assignment has been received. Note that this listener will immediately override any listener set in a previous call to subscribe. It is guaranteed, however, that the partitions revoked/assigned through this interface are from topics subscribed in this call. See ConsumerRebalanceListener for more details.
Specified by:
subscribe in interface Consumer<K,V>
Parameters:
topics - The list of topics to subscribe to
listener - Non-null listener instance to get notifications on partition assignment/revocation for the subscribed topics
See Also:
subscribe(List, ConsumerRebalanceListener)
public void subscribe(java.util.List<java.lang.String> topics)
Subscribe to the given list of topics to get dynamically assigned partitions. Topic subscriptions are not incremental; this list will replace the current assignment (if there is one). Note that it is not possible to combine topic subscription with group management with manual partition assignment through assign(List).
If the given list of topics is empty, it is treated the same as unsubscribe().
This is a short-hand for subscribe(List, ConsumerRebalanceListener), which uses a no-op listener. If you need the ability to seek to particular offsets, you should prefer subscribe(List, ConsumerRebalanceListener), since group rebalances will cause partition offsets to be reset. You should also prefer to provide your own listener if you are doing your own offset management since the listener gives you an opportunity to commit offsets before a rebalance finishes.
Specified by:
subscribe in interface Consumer<K,V>
Parameters:
topics - The list of topics to subscribe to
See Also:
subscribe(List)
public void subscribe(java.util.regex.Pattern pattern, ConsumerRebalanceListener listener)
Subscribe to all topics matching the specified pattern to get dynamically assigned partitions.
As part of group management, the consumer will keep track of the list of consumers that belong to a particular group and will trigger a rebalance operation if one of the following events trigger:
- Number of partitions change for any of the subscribed list of topics
- Topic is created or deleted
- An existing member of the consumer group dies
- A new member is added to an existing consumer group via the join API
Specified by:
subscribe in interface Consumer<K,V>
Parameters:
pattern - Pattern to subscribe to
See Also:
subscribe(Pattern, ConsumerRebalanceListener)
public void unsubscribe()
Unsubscribe from topics currently subscribed with subscribe(List). This also clears any partitions directly assigned through assign(List).
Specified by:
unsubscribe in interface Consumer<K,V>
See Also:
unsubscribe()
public void assign(java.util.List<TopicPartition> partitions)
Manually assign a list of partitions to this consumer.
Manual topic assignment through this method does not use the consumer's group management functionality. As such, there will be no rebalance operation triggered when group membership or cluster and topic metadata change. Note that it is not possible to use both manual partition assignment with assign(List) and group assignment with subscribe(List, ConsumerRebalanceListener).
Specified by:
assign in interface Consumer<K,V>
Parameters:
partitions - The list of partitions to assign to this consumer
See Also:
assign(List)
public ConsumerRecords<K,V> poll(long timeout)
Fetch data for the topics or partitions specified using one of the subscribe/assign APIs.
On each poll, the consumer will try to use the last consumed offset as the starting offset and fetch sequentially. The last consumed offset can be manually set through seek(TopicPartition, long) or automatically set as the last committed offset for the subscribed list of partitions.
Specified by:
poll in interface Consumer<K,V>
Parameters:
timeout - The time, in milliseconds, spent waiting in poll if data is not available. If 0, returns immediately with any records that are available now. Must not be negative.
Throws:
InvalidOffsetException - if the offset for a partition or set of partitions is undefined or out of range and no offset reset policy has been configured
WakeupException - if wakeup() is called before or while this function is called
AuthorizationException - if the caller does not have Read access to any of the subscribed topics or to the configured groupId
KafkaException - for any other unrecoverable errors (e.g. invalid groupId or session timeout, errors deserializing key/value pairs, or any new error cases in future versions)
See Also:
poll(long)
public void commitSync()
Commit offsets returned on the last poll() for all the subscribed list of topics and partitions.
This commits offsets only to Kafka. The offsets committed using this API will be used on the first fetch after every rebalance and also on startup. As such, if you need to store offsets in anything other than Kafka, this API should not be used.
This is a synchronous commit and will block until either the commit succeeds or an unrecoverable error is encountered (in which case it is thrown to the caller).
Specified by:
commitSync in interface Consumer<K,V>
Throws:
CommitFailedException - if the commit failed and cannot be retried. This can only occur if you are using automatic group management with subscribe(List), or if there is an active group with the same groupId which is using group management.
WakeupException - if wakeup() is called before or while this function is called
AuthorizationException - if not authorized to the topic or to the configured groupId
KafkaException - for any other unrecoverable errors (e.g. if offset metadata is too large or if the committed offset is invalid).
See Also:
commitSync()
public void commitSync(java.util.Map<TopicPartition,OffsetAndMetadata> offsets)
Commit the specified offsets for the specified list of topics and partitions.
This commits offsets to Kafka. The offsets committed using this API will be used on the first fetch after every rebalance and also on startup. As such, if you need to store offsets in anything other than Kafka, this API should not be used. The committed offset should be the next message your application will consume, i.e. lastProcessedMessageOffset + 1.
This is a synchronous commit and will block until either the commit succeeds or an unrecoverable error is encountered (in which case it is thrown to the caller).
Specified by:
commitSync in interface Consumer<K,V>
Parameters:
offsets - A map of offsets by partition with associated metadata
Throws:
CommitFailedException - if the commit failed and cannot be retried. This can only occur if you are using automatic group management with subscribe(List), or if there is an active group with the same groupId which is using group management.
WakeupException - if wakeup() is called before or while this function is called
AuthorizationException - if not authorized to the topic or to the configured groupId
KafkaException - for any other unrecoverable errors (e.g. if offset metadata is too large or if the committed offset is invalid).
See Also:
commitSync(Map)
public void commitAsync()
Commit offsets returned on the last poll() for all the subscribed list of topics and partitions. Same as commitAsync(null).
Specified by:
commitAsync in interface Consumer<K,V>
See Also:
commitAsync()
public void commitAsync(OffsetCommitCallback callback)
Commit offsets returned on the last poll() for the subscribed list of topics and partitions.
This commits offsets only to Kafka. The offsets committed using this API will be used on the first fetch after every rebalance and also on startup. As such, if you need to store offsets in anything other than Kafka, this API should not be used.
This is an asynchronous call and will not block. Any errors encountered are either passed to the callback (if provided) or discarded.
Specified by:
commitAsync in interface Consumer<K,V>
Parameters:
callback - Callback to invoke when the commit completes
See Also:
commitAsync(OffsetCommitCallback)
public void commitAsync(java.util.Map<TopicPartition,OffsetAndMetadata> offsets, OffsetCommitCallback callback)
Commit the specified offsets for the specified list of topics and partitions to Kafka.
This commits offsets to Kafka. The offsets committed using this API will be used on the first fetch after every rebalance and also on startup. As such, if you need to store offsets in anything other than Kafka, this API should not be used. The committed offset should be the next message your application will consume, i.e. lastProcessedMessageOffset + 1.
This is an asynchronous call and will not block. Any errors encountered are either passed to the callback (if provided) or discarded.
Specified by:
commitAsync in interface Consumer<K,V>
Parameters:
offsets - A map of offsets by partition with associated metadata. This map will be copied internally, so it is safe to mutate the map after returning.
callback - Callback to invoke when the commit completes
See Also:
commitAsync(Map, OffsetCommitCallback)
public void seek(TopicPartition partition, long offset)
Overrides the fetch offsets that the consumer will use on the next poll(timeout). If this API is invoked for the same partition more than once, the latest offset will be used on the next poll(). Note that you may lose data if this API is arbitrarily used in the middle of consumption to reset the fetch offsets.
Specified by:
seek in interface Consumer<K,V>
See Also:
seek(TopicPartition, long)
public void seekToBeginning(TopicPartition... partitions)
Seek to the first offset for each of the given partitions. This function evaluates lazily, seeking to the first offset in all partitions only when poll(long) or position(TopicPartition) are called.
Specified by:
seekToBeginning in interface Consumer<K,V>
See Also:
seekToBeginning(TopicPartition...)
public void seekToEnd(TopicPartition... partitions)
Seek to the last offset for each of the given partitions. This function evaluates lazily, seeking to the last offset in all partitions only when poll(long) or position(TopicPartition) are called.
Specified by:
seekToEnd in interface Consumer<K,V>
See Also:
seekToEnd(TopicPartition...)
public long position(TopicPartition partition)
Get the offset of the next record that will be fetched (if a record with that offset exists).
Specified by:
position in interface Consumer<K,V>
Parameters:
partition - The partition to get the position for
Throws:
InvalidOffsetException - if no offset is currently defined for the partition
WakeupException - if wakeup() is called before or while this function is called
AuthorizationException - if not authorized to the topic or to the configured groupId
KafkaException - for any other unrecoverable errors
See Also:
position(TopicPartition)
public OffsetAndMetadata committed(TopicPartition partition)
Get the last committed offset for the given partition (whether the commit happened by this process or another).
This call may block to do a remote call if the partition in question isn't assigned to this consumer or if the consumer hasn't yet initialized its cache of committed offsets.
Specified by:
committed in interface Consumer<K,V>
Parameters:
partition - The partition to check
Throws:
WakeupException - if wakeup() is called before or while this function is called
AuthorizationException - if not authorized to the topic or to the configured groupId
KafkaException - for any other unrecoverable errors
See Also:
committed(TopicPartition)
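For example, the committed position can be compared against the current position to see how many records have been consumed since the last commit; the partition here is illustrative:

    TopicPartition partition = new TopicPartition("foo", 0);
    OffsetAndMetadata lastCommit = consumer.committed(partition);
    // committed(...) may return null if no offset has ever been committed
    // for this group and partition.
    if (lastCommit != null)
        System.out.println("Consumed since last commit: "
                + (consumer.position(partition) - lastCommit.offset()));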
public java.util.Map<MetricName,? extends Metric> metrics()
Get the metrics kept by the consumer.
Specified by:
metrics in interface Consumer<K,V>
public java.util.List<PartitionInfo> partitionsFor(java.lang.String topic)
Get metadata about the partitions for a given topic. This method will issue a remote call to the server if it does not already have any metadata about the given topic.
Specified by:
partitionsFor in interface Consumer<K,V>
Parameters:
topic - The topic to get partition metadata for
Throws:
WakeupException - if wakeup() is called before or while this function is called
AuthorizationException - if not authorized to the specified topic
TimeoutException - if the topic metadata could not be fetched before expiration of the configured request timeout
KafkaException - for any other unrecoverable errors
See Also:
partitionsFor(String)
public java.util.Map<java.lang.String,java.util.List<PartitionInfo>> listTopics()
Get metadata about partitions for all topics that the user is authorized to view. This method will issue a remote call to the server.
Specified by:
listTopics in interface Consumer<K,V>
Throws:
WakeupException - if wakeup() is called before or while this function is called
TimeoutException - if the topic metadata could not be fetched before expiration of the configured request timeout
KafkaException - for any other unrecoverable errors
See Also:
listTopics()
public void pause(TopicPartition... partitions)
Suspend fetching from the requested partitions. Future calls to poll(long) will not return any records from these partitions until they have been resumed using resume(TopicPartition...).
Note that this method does not affect partition subscription. In particular, it does not cause a group rebalance when automatic assignment is used.
Specified by:
pause in interface Consumer<K,V>
Parameters:
partitions - The partitions which should be paused
See Also:
pause(TopicPartition...)
public void resume(TopicPartition... partitions)
Resume specified partitions which have been paused with pause(TopicPartition...). New calls to poll(long) will return records from these partitions if there are any to be fetched. If the partitions were not previously paused, this method is a no-op.
Specified by:
resume in interface Consumer<K,V>
Parameters:
partitions - The partitions which should be resumed
See Also:
resume(TopicPartition...)
public void close()
Close the consumer, waiting indefinitely for any needed cleanup. If auto-commit is enabled, this will commit the current offsets. Note that wakeup() cannot be used to interrupt close.