public class KStreamBuilder extends TopologyBuilder
KStreamBuilder provides the high-level Kafka Streams DSL to specify a Kafka Streams topology.

See Also: TopologyBuilder, KStream, KTable, GlobalKTable

Nested classes/interfaces inherited from class TopologyBuilder: TopologyBuilder.AutoOffsetReset, TopologyBuilder.TopicsInfo

| Constructor and Description |
|---|
| KStreamBuilder() |
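For orientation, here is a minimal end-to-end sketch (not part of the original Javadoc) of how a KStreamBuilder topology is typically wired into a running application. The topic names, application id, and bootstrap server are hypothetical placeholders.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class UppercaseApp {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();

        // Read "input-topic" with explicit String serdes and write the transformed stream out.
        KStream<String, String> source =
            builder.stream(Serdes.String(), Serdes.String(), "input-topic");
        source.mapValues(value -> value.toUpperCase())
              .to(Serdes.String(), Serdes.String(), "output-topic");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kstream-builder-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // KStreamBuilder extends TopologyBuilder, so it can be handed to KafkaStreams directly.
        KafkaStreams streams = new KafkaStreams(builder, props);
        streams.start();
    }
}
```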
| Modifier and Type | Method and Description |
|---|---|
| <K,V> GlobalKTable<K,V> | globalTable(Serde<K> keySerde, Serde<V> valSerde, String topic, String storeName) Create a GlobalKTable for the specified topic. |
| <K,V> GlobalKTable<K,V> | globalTable(String topic, String storeName) Create a GlobalKTable for the specified topic. |
| <K,V> KStream<K,V> | merge(KStream<K,V>... streams) |
| String | newName(String prefix) This function is for internal usage only and should not be called. |
| <K,V> KStream<K,V> | stream(Pattern topicPattern) Create a KStream from the specified topic pattern. |
| <K,V> KStream<K,V> | stream(Serde<K> keySerde, Serde<V> valSerde, Pattern topicPattern) Create a KStream from the specified topic pattern. |
| <K,V> KStream<K,V> | stream(Serde<K> keySerde, Serde<V> valSerde, String... topics) Create a KStream from the specified topics. |
| <K,V> KStream<K,V> | stream(String... topics) Create a KStream from the specified topics. |
| <K,V> KStream<K,V> | stream(TopologyBuilder.AutoOffsetReset offsetReset, Pattern topicPattern) Create a KStream from the specified topic pattern. |
| <K,V> KStream<K,V> | stream(TopologyBuilder.AutoOffsetReset offsetReset, Serde<K> keySerde, Serde<V> valSerde, Pattern topicPattern) Create a KStream from the specified topic pattern. |
| <K,V> KStream<K,V> | stream(TopologyBuilder.AutoOffsetReset offsetReset, Serde<K> keySerde, Serde<V> valSerde, String... topics) Create a KStream from the specified topics. |
| <K,V> KStream<K,V> | stream(TopologyBuilder.AutoOffsetReset offsetReset, String... topics) Create a KStream from the specified topics. |
| <K,V> KTable<K,V> | table(Serde<K> keySerde, Serde<V> valSerde, String topic, String storeName) Create a KTable for the specified topic. |
| <K,V> KTable<K,V> | table(String topic, String storeName) Create a KTable for the specified topic. |
| <K,V> KTable<K,V> | table(TopologyBuilder.AutoOffsetReset offsetReset, Serde<K> keySerde, Serde<V> valSerde, String topic, String storeName) Create a KTable for the specified topic. |
| <K,V> KTable<K,V> | table(TopologyBuilder.AutoOffsetReset offsetReset, String topic, String storeName) Create a KTable for the specified topic. |
Methods inherited from class TopologyBuilder: addGlobalStore, addInternalTopic, addProcessor, addSink, addSource, addStateStore, build, buildGlobalStateTopology, connectProcessorAndStateStores, connectProcessors, connectSourceStoreAndTopic, copartitionGroups, copartitionSources, earliestResetTopicsPattern, globalStateStores, latestResetTopicsPattern, nodeGroups, setApplicationId, sourceTopicPattern, stateStoreNameToSourceTopics, topicGroups, updateSubscriptions

public <K,V> KStream<K,V> stream(String... topics)
Create a KStream from the specified topics.
The default "auto.offset.reset" strategy and default key and value deserializers as specified in the config are used.
If multiple topics are specified there is no ordering guarantee for records from different topics.
Note that the specified input topics must be partitioned by key.
If this is not the case, it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
Parameters:
topics - the topic names; must contain at least one topic name
Returns:
a KStream for the specified topics
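As a sketch, reading two hypothetical topics that are partitioned on the same key into a single stream:

```java
KStreamBuilder builder = new KStreamBuilder();

// Records from "orders" and "order-updates" are interleaved with no cross-topic ordering guarantee.
KStream<String, String> orders = builder.stream("orders", "order-updates");
```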
public <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, String... topics)

Create a KStream from the specified topics.
The default key and value deserializers as specified in the config are used.
If multiple topics are specified there is no ordering guarantee for records from different topics.
Note that the specified input topics must be partitioned by key.
If this is not the case, it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
Parameters:
offsetReset - the "auto.offset.reset" policy to use for the specified topics if no valid committed offsets are available
topics - the topic names; must contain at least one topic name
Returns:
a KStream for the specified topics
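For example, to start from the beginning of a hypothetical "clicks" topic when the application has no committed offsets yet (builder as above, TopologyBuilder imported from org.apache.kafka.streams.processor):

```java
// Applies only when no valid committed offsets exist for the consumer group.
KStream<String, String> clicks =
    builder.stream(TopologyBuilder.AutoOffsetReset.EARLIEST, "clicks");
```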
public <K,V> KStream<K,V> stream(Pattern topicPattern)

Create a KStream from the specified topic pattern.
The default "auto.offset.reset" strategy and default key and value deserializers as specified in the config are used.
If multiple topics are matched by the specified pattern, the created KStream will read data from all of them and there is no ordering guarantee between records from different topics.
Note that the specified input topics must be partitioned by key.
If this is not the case, it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
Parameters:
topicPattern - the pattern to match for topic names
Returns:
a KStream for topics matching the regex pattern
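A sketch subscribing to every topic that matches a regex, here a hypothetical "metrics-" prefix:

```java
import java.util.regex.Pattern;

// All matching topics feed into one stream.
KStream<String, String> metrics = builder.stream(Pattern.compile("metrics-.*"));
```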
public <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, Pattern topicPattern)

Create a KStream from the specified topic pattern.
The default key and value deserializers as specified in the config are used.
If multiple topics are matched by the specified pattern, the created KStream will read data from all of them and there is no ordering guarantee between records from different topics.
Note that the specified input topics must be partitioned by key.
If this is not the case, it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
Parameters:
offsetReset - the "auto.offset.reset" policy to use for the matched topics if no valid committed offsets are available
topicPattern - the pattern to match for topic names
Returns:
a KStream for topics matching the regex pattern
public <K,V> KStream<K,V> stream(Serde<K> keySerde, Serde<V> valSerde, String... topics)

Create a KStream from the specified topics.
The default "auto.offset.reset" strategy as specified in the config is used.
If multiple topics are specified there is no ordering guarantee for records from different topics.
Note that the specified input topics must be partitioned by key.
If this is not the case, it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
Parameters:
keySerde - key serde used to read this source KStream; if not specified the default serde defined in the configs will be used
valSerde - value serde used to read this source KStream; if not specified the default serde defined in the configs will be used
topics - the topic names; must contain at least one topic name
Returns:
a KStream for the specified topics
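For example, with explicit serdes instead of the configured defaults (topic name hypothetical):

```java
import org.apache.kafka.common.serialization.Serdes;

// Keys are strings, values are longs; no reliance on the default serdes from the config.
KStream<String, Long> counts =
    builder.stream(Serdes.String(), Serdes.Long(), "page-counts");
```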
public <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, Serde<K> keySerde, Serde<V> valSerde, String... topics)

Create a KStream from the specified topics.
If multiple topics are specified there is no ordering guarantee for records from different topics.
Note that the specified input topics must be partitioned by key.
If this is not the case, it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
Parameters:
offsetReset - the "auto.offset.reset" policy to use for the specified topics if no valid committed offsets are available
keySerde - key serde used to read this source KStream; if not specified the default serde defined in the configs will be used
valSerde - value serde used to read this source KStream; if not specified the default serde defined in the configs will be used
topics - the topic names; must contain at least one topic name
Returns:
a KStream for the specified topics
public <K,V> KStream<K,V> stream(Serde<K> keySerde, Serde<V> valSerde, Pattern topicPattern)

Create a KStream from the specified topic pattern.
The default "auto.offset.reset" strategy as specified in the config is used.
If multiple topics are matched by the specified pattern, the created KStream will read data from all of them and there is no ordering guarantee between records from different topics.
Note that the specified input topics must be partitioned by key.
If this is not the case, it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
Parameters:
keySerde - key serde used to read this source KStream; if not specified the default serde defined in the configs will be used
valSerde - value serde used to read this source KStream; if not specified the default serde defined in the configs will be used
topicPattern - the pattern to match for topic names
Returns:
a KStream for topics matching the regex pattern
public <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, Serde<K> keySerde, Serde<V> valSerde, Pattern topicPattern)

Create a KStream from the specified topic pattern.
If multiple topics are matched by the specified pattern, the created KStream will read data from all of them and there is no ordering guarantee between records from different topics.
Note that the specified input topics must be partitioned by key.
If this is not the case, it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
Parameters:
offsetReset - the "auto.offset.reset" policy to use for the matched topics if no valid committed offsets are available
keySerde - key serde used to read this source KStream; if not specified the default serde defined in the configs will be used
valSerde - value serde used to read this source KStream; if not specified the default serde defined in the configs will be used
topicPattern - the pattern to match for topic names
Returns:
a KStream for topics matching the regex pattern
public <K,V> KTable<K,V> table(String topic, String storeName)

Create a KTable for the specified topic.
The default "auto.offset.reset" strategy and default key and value deserializers as specified in the config are used.
Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key.
If this is not the case, the returned KTable will be corrupted.

The resulting KTable will be materialized in a local KeyValueStore with the given storeName.
However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

```java
KafkaStreams streams = ...
ReadOnlyKeyValueStore<String, Long> localStore =
    streams.store(storeName, QueryableStoreTypes.<String, Long>keyValueStore());
String key = "some-key";
Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
```

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
Parameters:
topic - the topic name; cannot be null
storeName - the state store name; cannot be null
Returns:
a KTable for the specified topic
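A sketch of this overload; the topic and store names are placeholders, and the store name is what you would later pass to KafkaStreams#store(...) for the interactive query shown above:

```java
// Changelog semantics: each record for a key replaces the previous value for that key.
KTable<String, Long> latestPrices = builder.table("prices", "prices-store");

// The table can be converted back to a stream of its change records
// (assumes serdes compatible with the output topic are configured).
latestPrices.toStream().to("price-changes");
```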
public <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, String topic, String storeName)

Create a KTable for the specified topic.
The default key and value deserializers as specified in the config are used.
Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key.
If this is not the case, the returned KTable will be corrupted.

The resulting KTable will be materialized in a local KeyValueStore with the given storeName.
However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

```java
KafkaStreams streams = ...
ReadOnlyKeyValueStore<String, Long> localStore =
    streams.store(storeName, QueryableStoreTypes.<String, Long>keyValueStore());
String key = "some-key";
Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
```

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.

Parameters:
offsetReset - the "auto.offset.reset" policy to use for the specified topic if no valid committed offsets are available
topic - the topic name; cannot be null
storeName - the state store name; cannot be null
Returns:
a KTable for the specified topic
public <K,V> KTable<K,V> table(Serde<K> keySerde, Serde<V> valSerde, String topic, String storeName)

Create a KTable for the specified topic.
The default "auto.offset.reset" strategy as specified in the config is used.
Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key.
If this is not the case, the returned KTable will be corrupted.

The resulting KTable will be materialized in a local KeyValueStore with the given storeName.
However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

```java
KafkaStreams streams = ...
ReadOnlyKeyValueStore<String, Long> localStore =
    streams.store(storeName, QueryableStoreTypes.<String, Long>keyValueStore());
String key = "some-key";
Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
```

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
Parameters:
keySerde - key serde used to read this source KTable; if not specified the default key serde defined in the configuration will be used
valSerde - value serde used to read this source KTable; if not specified the default value serde defined in the configuration will be used
topic - the topic name; cannot be null
storeName - the state store name; cannot be null
Returns:
a KTable for the specified topic
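For example, overriding the default serdes for a hypothetical compacted topic of user profiles:

```java
// Explicit String serdes for both key and value; the store name backs interactive queries.
KTable<String, String> profiles =
    builder.table(Serdes.String(), Serdes.String(), "user-profiles", "user-profiles-store");
```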
public <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, Serde<K> keySerde, Serde<V> valSerde, String topic, String storeName)

Create a KTable for the specified topic.
Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key.
If this is not the case, the returned KTable will be corrupted.

The resulting KTable will be materialized in a local KeyValueStore with the given storeName.
However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

```java
KafkaStreams streams = ...
ReadOnlyKeyValueStore<String, Long> localStore =
    streams.store(storeName, QueryableStoreTypes.<String, Long>keyValueStore());
String key = "some-key";
Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
```

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
Parameters:
offsetReset - the "auto.offset.reset" policy to use for the specified topic if no valid committed offsets are available
keySerde - key serde used to read this source KTable; if not specified the default key serde defined in the configuration will be used
valSerde - value serde used to read this source KTable; if not specified the default value serde defined in the configuration will be used
topic - the topic name; cannot be null
storeName - the state store name; cannot be null
Returns:
a KTable for the specified topic
public <K,V> GlobalKTable<K,V> globalTable(String topic, String storeName)

Create a GlobalKTable for the specified topic.
The default key and value deserializers as specified in the config are used.
Input records with null key will be dropped.

The resulting GlobalKTable will be materialized in a local KeyValueStore with the given storeName.
However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

```java
KafkaStreams streams = ...
ReadOnlyKeyValueStore<String, Long> localStore =
    streams.store(storeName, QueryableStoreTypes.<String, Long>keyValueStore());
String key = "some-key";
Long valueForKey = localStore.get(key);
```

Note that GlobalKTable always applies "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig.
Parameters:
topic - the topic name; cannot be null
storeName - the state store name; cannot be null
Returns:
a GlobalKTable for the specified topic
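A common use of a GlobalKTable is enriching a stream by key without co-partitioning the inputs. A sketch with hypothetical topic names, using the KStream#join(GlobalKTable, ...) overload; extractCustomerId is a hypothetical helper:

```java
GlobalKTable<String, String> customers = builder.globalTable("customers", "customers-store");
KStream<String, String> orders = builder.stream("orders");

// Map each order record to the customer id to join on, then combine order and customer data.
KStream<String, String> enriched = orders.join(
    customers,
    (orderId, order) -> extractCustomerId(order),   // hypothetical helper
    (order, customer) -> order + " -> " + customer);
```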
public <K,V> GlobalKTable<K,V> globalTable(Serde<K> keySerde, Serde<V> valSerde, String topic, String storeName)

Create a GlobalKTable for the specified topic.
If keySerde or valSerde is not specified, the default deserializers as specified in the config are used.
Input records with null key will be dropped.

The resulting GlobalKTable will be materialized in a local KeyValueStore with the given storeName.
However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

```java
KafkaStreams streams = ...
ReadOnlyKeyValueStore<String, Long> localStore =
    streams.store(storeName, QueryableStoreTypes.<String, Long>keyValueStore());
String key = "some-key";
Long valueForKey = localStore.get(key);
```

Note that GlobalKTable always applies "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig.
Parameters:
keySerde - key serde used to read this source GlobalKTable; if not specified the default key serde defined in the configuration will be used
valSerde - value serde used to read this source GlobalKTable; if not specified the default value serde defined in the configuration will be used
topic - the topic name; cannot be null
storeName - the state store name; cannot be null
Returns:
a GlobalKTable for the specified topic
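And with explicit serdes, for a hypothetical topic keyed by String with Long values:

```java
// Explicit serdes override the configured defaults for this source only.
GlobalKTable<String, Long> inventory =
    builder.globalTable(Serdes.String(), Serdes.Long(), "inventory", "inventory-store");
```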