Interface ProcessorContext
- All Known Implementing Classes:
MockProcessorContext
-
Method Summary
Modifier and Type / Method / Description
- Map<String,Object> appConfigs() - Return all the application config properties as key/value pairs.
- Map<String,Object> appConfigsWithPrefix(String prefix) - Return all the application config properties with the given key prefix, as key/value pairs stripping the prefix.
- String applicationId() - Return the application id.
- void commit() - Request a commit.
- long currentStreamTimeMs() - Return the current stream-time in milliseconds.
- long currentSystemTimeMs() - Return the current system timestamp (also called wall-clock time) in milliseconds.
- <K,V> void forward(K key, V value) - Forward a key/value pair to all downstream processors.
- <K,V> void forward(K key, V value, To to) - Forward a key/value pair to the specified downstream processors.
- <S extends StateStore> S getStateStore(String name) - Get the state store given the store name.
- Headers headers() - Return the headers of the current input record; could be an empty header if it is not available.
- Serde<?> keySerde() - Return the default key serde.
- StreamsMetrics metrics() - Return the StreamsMetrics instance.
- long offset() - Return the offset of the current input record; could be -1 if it is not available.
- int partition() - Return the partition id of the current input record; could be -1 if it is not available.
- void register(StateStore store, StateRestoreCallback stateRestoreCallback) - Register and possibly restore the specified storage engine.
- Cancellable schedule(Duration interval, PunctuationType type, Punctuator callback) - Schedule a periodic operation for processors.
- File stateDir() - Return the state directory for the partition.
- TaskId taskId() - Return the task id.
- long timestamp() - Return the current timestamp.
- String topic() - Return the topic name of the current input record; could be null if it is not available.
- Serde<?> valueSerde() - Return the default value serde.
-
Method Details
-
applicationId
String applicationId()
Return the application id.
- Returns:
- the application id
-
taskId
TaskId taskId()
Return the task id.
- Returns:
- the task id
-
keySerde
Serde<?> keySerde()
Return the default key serde.
- Returns:
- the default key serde
-
valueSerde
Serde<?> valueSerde()
Return the default value serde.
- Returns:
- the default value serde
-
stateDir
File stateDir()
Return the state directory for the partition.
- Returns:
- the state directory
-
metrics
StreamsMetrics metrics()
Return the StreamsMetrics instance.
- Returns:
- StreamsMetrics
-
register
void register(StateStore store, StateRestoreCallback stateRestoreCallback)
Register and possibly restore the specified storage engine.
- Parameters:
- store - the storage engine
- stateRestoreCallback - the restoration callback logic for log-backed state stores upon restart
- Throws:
- IllegalStateException - if the store gets registered after initialization is already finished
- StreamsException - if the store's change log does not contain the partition
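In practice, register is usually called by a StateStore implementation from its own init, rather than from user processor code. A minimal sketch under that assumption follows; the class name MyCountingStore and the record-counting restore callback are illustrative, not part of this API's documentation:

```java
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.StateStore;

// Hypothetical in-memory store; only the registration/restore wiring is of interest here.
public class MyCountingStore implements StateStore {
    private final String name;
    private long restoredRecords = 0L;
    private boolean open = false;

    public MyCountingStore(final String name) {
        this.name = name;
    }

    @Override
    public String name() {
        return name;
    }

    @Override
    public void init(final ProcessorContext context, final StateStore root) {
        // Register the root store so its changelog is replayed on restart; the callback
        // receives the raw serialized key/value bytes of each changelog record.
        context.register(root, (key, value) -> restoredRecords++);
        open = true;
    }

    @Override
    public void flush() { }

    @Override
    public void close() {
        open = false;
    }

    @Override
    public boolean persistent() {
        return false;
    }

    @Override
    public boolean isOpen() {
        return open;
    }
}
```

A real store would deserialize the changelog bytes with the appropriate serdes instead of merely counting them.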
-
getStateStore
<S extends StateStore> S getStateStore(String name)
Get the state store given the store name.
- Type Parameters:
- S - The type or interface of the store to return
- Parameters:
- name - The store name
- Returns:
- The state store instance
- Throws:
- ClassCastException - if the return type isn't a type or interface of the actual returned store.
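A minimal sketch of looking up a typed store during processor initialization; the class name CountingProcessor and the store name "counts-store" are illustrative assumptions, and the store must already be attached to this processor in the Topology:

```java
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;

public class CountingProcessor implements Processor<String, String> {
    private KeyValueStore<String, Long> counts;

    @Override
    public void init(final ProcessorContext context) {
        // The type parameter S is inferred from the assignment target; a mismatch with
        // the actual store type surfaces as a ClassCastException at runtime.
        counts = context.getStateStore("counts-store");
    }

    @Override
    public void process(final String key, final String value) {
        final Long previous = counts.get(key);
        counts.put(key, previous == null ? 1L : previous + 1L);
    }

    @Override
    public void close() { }
}
```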
-
schedule
Cancellable schedule(Duration interval, PunctuationType type, Punctuator callback)
Schedule a periodic operation for processors. A processor may call this method during initialization or processing to schedule a periodic callback — called a punctuation — to Punctuator.punctuate(long). The type parameter controls what notion of time is used for punctuation:
- PunctuationType.STREAM_TIME — uses "stream time", which is advanced by the processing of messages in accordance with the timestamp as extracted by the TimestampExtractor in use. The first punctuation will be triggered by the first record that is processed. NOTE: Only advanced if messages arrive.
- PunctuationType.WALL_CLOCK_TIME — uses system time (the wall-clock time), which is advanced independent of whether new messages arrive. The first punctuation will be triggered after interval has elapsed. NOTE: This is best effort only, as its granularity is limited by how long an iteration of the processing loop takes to complete.
Punctuations can be skipped ("missed") in the following cases:
- with PunctuationType.STREAM_TIME, when stream time advances more than interval
- with PunctuationType.WALL_CLOCK_TIME, on GC pause, too short interval, ...
- Parameters:
- interval - the time interval between punctuations (supported minimum is 1 millisecond)
- type - one of: PunctuationType.STREAM_TIME, PunctuationType.WALL_CLOCK_TIME
- callback - a function consuming timestamps representing the current stream or system time
- Returns:
- a handle allowing cancellation of the punctuation schedule established by this method
- Throws:
- IllegalArgumentException - if the interval is not representable in milliseconds
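A hedged sketch of scheduling a wall-clock punctuation from init() that periodically forwards the contents of a state store downstream; the store name "counts-store", the 10-second interval, and the class name are illustrative assumptions:

```java
import java.time.Duration;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.Cancellable;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

public class PeriodicEmitProcessor implements Processor<String, String> {
    private ProcessorContext context;
    private KeyValueStore<String, Long> counts;
    private Cancellable punctuation;

    @Override
    public void init(final ProcessorContext context) {
        this.context = context;
        this.counts = context.getStateStore("counts-store");
        // Emit the current contents of the store every 10 seconds of wall-clock time.
        this.punctuation = context.schedule(
            Duration.ofSeconds(10),
            PunctuationType.WALL_CLOCK_TIME,
            timestamp -> {
                try (final KeyValueIterator<String, Long> iter = counts.all()) {
                    while (iter.hasNext()) {
                        final KeyValue<String, Long> entry = iter.next();
                        context.forward(entry.key, entry.value);
                    }
                }
            });
    }

    @Override
    public void process(final String key, final String value) {
        final Long previous = counts.get(key);
        counts.put(key, previous == null ? 1L : previous + 1L);
    }

    @Override
    public void close() {
        // The returned handle can be used to stop the punctuation once it is no longer needed.
        punctuation.cancel();
    }
}
```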
-
forward
<K,V> void forward(K key, V value)
Forward a key/value pair to all downstream processors. Uses the input record's timestamp as the timestamp for the output record.
If this method is called from within Punctuator.punctuate(long), the record that is sent downstream won't have any associated record metadata like topic, partition, or offset.
- Parameters:
- key - key
- value - value
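A minimal sketch of forwarding a transformed key/value pair from within process(); the uppercasing step and class name are illustrative assumptions:

```java
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class UppercaseProcessor implements Processor<String, String> {
    private ProcessorContext context;

    @Override
    public void init(final ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(final String key, final String value) {
        // The forwarded record keeps the input record's timestamp and metadata.
        context.forward(key, value == null ? null : value.toUpperCase());
    }

    @Override
    public void close() { }
}
```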
-
forward
<K,V> void forward(K key, V value, To to)
Forward a key/value pair to the specified downstream processors. Can be used to set the timestamp of the output record.
If this method is called from within Punctuator.punctuate(long), the record that is sent downstream won't have any associated record metadata like topic, partition, or offset.
- Parameters:
- key - key
- value - value
- to - the options to use when forwarding
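A hedged sketch of forwarding to a single named child processor while overriding the output record's timestamp via To; the child name "alerts-sink" is an illustrative assumption and would have to match a child node added in the Topology:

```java
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.To;

public class RoutingProcessor implements Processor<String, String> {
    private ProcessorContext context;

    @Override
    public void init(final ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(final String key, final String value) {
        // Route only to the "alerts-sink" child and stamp the record with the
        // context's current wall-clock time instead of the input timestamp.
        context.forward(key, value, To.child("alerts-sink").withTimestamp(context.currentSystemTimeMs()));
    }

    @Override
    public void close() { }
}
```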
-
commit
void commit()
Request a commit.
-
topic
String topic()
Return the topic name of the current input record; could be null if it is not available.
For example, if this method is invoked within a punctuation callback, or while processing a record that was forwarded by a punctuation callback, the record won't have an associated topic. Another example is KTable.transformValues(ValueTransformerWithKeySupplier, String...) (and siblings), which do not always guarantee to provide a valid topic name, as they might be executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL.
- Returns:
- the topic name
-
partition
int partition()
Return the partition id of the current input record; could be -1 if it is not available.
For example, if this method is invoked within a punctuation callback, or while processing a record that was forwarded by a punctuation callback, the record won't have an associated partition id. Another example is KTable.transformValues(ValueTransformerWithKeySupplier, String...) (and siblings), which do not always guarantee to provide a valid partition id, as they might be executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL.
- Returns:
- the partition id
-
offset
long offset()
Return the offset of the current input record; could be -1 if it is not available.
For example, if this method is invoked within a punctuation callback, or while processing a record that was forwarded by a punctuation callback, the record won't have an associated offset. Another example is KTable.transformValues(ValueTransformerWithKeySupplier, String...) (and siblings), which do not always guarantee to provide a valid offset, as they might be executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL.
- Returns:
- the offset
-
headers
Headers headers()
Return the headers of the current input record; could be an empty header if it is not available.
For example, if this method is invoked within a punctuation callback, or while processing a record that was forwarded by a punctuation callback, the record might not have any associated headers. Another example is KTable.transformValues(ValueTransformerWithKeySupplier, String...) (and siblings), which do not always guarantee to provide valid headers, as they might be executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL.
- Returns:
- the headers
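Because topic(), partition(), offset(), and headers() may all be unavailable (for example inside a punctuation), code that inspects record metadata should guard for the null / -1 / empty cases. A minimal sketch; the class name and log format are illustrative assumptions:

```java
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class MetadataLoggingProcessor implements Processor<String, String> {
    private ProcessorContext context;

    @Override
    public void init(final ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(final String key, final String value) {
        // topic() may be null and partition()/offset() may be -1 if no record metadata is available.
        if (context.topic() != null && context.partition() >= 0 && context.offset() >= 0) {
            System.out.printf("record from %s-%d@%d with %d headers%n",
                context.topic(), context.partition(), context.offset(),
                context.headers().toArray().length);
        } else {
            System.out.println("record without source metadata (e.g. forwarded by a punctuator)");
        }
        context.forward(key, value);
    }

    @Override
    public void close() { }
}
```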
-
timestamp
long timestamp()
Return the current timestamp.
If it is triggered while processing a record streamed from the source processor, the timestamp is defined as the timestamp of the current input record; the timestamp is extracted from ConsumerRecord by TimestampExtractor. Note that an upstream Processor might have set a new timestamp by calling forward(..., To.all().withTimestamp(...)). In particular, some Kafka Streams DSL operators set result record timestamps explicitly, to guarantee deterministic results.
If it is triggered while processing a record generated not from the source processor (for example, if this method is invoked from the punctuate call), the timestamp is defined as the current task's stream time, which is defined as the largest timestamp of any record processed by the task.
- Returns:
- the timestamp
-
appConfigs
Map<String,Object> appConfigs()
Return all the application config properties as key/value pairs.
The config properties are defined in the StreamsConfig object and associated to the ProcessorContext.
The type of the values is dependent on the type of the property (e.g. the value of DEFAULT_KEY_SERDE_CLASS_CONFIG will be of type Class, even if it was specified as a String to StreamsConfig(Map)).
- Returns:
- all the key/values from the StreamsConfig properties
-
appConfigsWithPrefix
Map<String,Object> appConfigsWithPrefix(String prefix)
Return all the application config properties with the given key prefix, as key/value pairs stripping the prefix.
The config properties are defined in the StreamsConfig object and associated to the ProcessorContext.
- Parameters:
- prefix - the properties prefix
- Returns:
- the key/values matching the given prefix from the StreamsConfig properties.
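A hedged sketch of reading both the full config map (appConfigs()) and a prefixed subset (appConfigsWithPrefix(...)) during initialization; the custom property "my.custom.threshold" is a hypothetical application-supplied setting, not a Kafka Streams config:

```java
import java.util.Map;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class ConfigAwareProcessor implements Processor<String, String> {

    @Override
    public void init(final ProcessorContext context) {
        final Map<String, Object> configs = context.appConfigs();
        // Values are typed according to the config definition, e.g. the default key
        // serde config resolves to a Class even if it was supplied as a String.
        final Object keySerdeClass = configs.get(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG);

        // Only the producer-prefixed properties, with the "producer." prefix stripped.
        final Map<String, Object> producerOverrides =
            context.appConfigsWithPrefix(StreamsConfig.PRODUCER_PREFIX);

        // Hypothetical application-specific property passed through StreamsConfig.
        final Object threshold = configs.get("my.custom.threshold");
        System.out.println("key serde: " + keySerdeClass
            + ", producer overrides: " + producerOverrides
            + ", threshold: " + threshold);
    }

    @Override
    public void process(final String key, final String value) { }

    @Override
    public void close() { }
}
```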
-
currentSystemTimeMs
long currentSystemTimeMs()
Return the current system timestamp (also called wall-clock time) in milliseconds.
Note: this method returns the internally cached system timestamp from the Kafka Streams runtime. Thus, it may return a different value compared to System.currentTimeMillis().
- Returns:
- the current system timestamp in milliseconds
-
currentStreamTimeMs
long currentStreamTimeMs()
Return the current stream-time in milliseconds.
Stream-time is the maximum observed record timestamp so far (including the currently processed record), i.e., it can be considered a high-watermark. Stream-time is tracked on a per-task basis and is preserved across restarts and during task migration.
Note: this method is not supported for global processors (cf. Topology.addGlobalStore(org.apache.kafka.streams.state.StoreBuilder<?>, java.lang.String, org.apache.kafka.common.serialization.Deserializer<K>, org.apache.kafka.common.serialization.Deserializer<V>, java.lang.String, java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier<K, V>) and StreamsBuilder.addGlobalStore(org.apache.kafka.streams.state.StoreBuilder<?>, java.lang.String, org.apache.kafka.streams.kstream.Consumed<K, V>, org.apache.kafka.streams.processor.ProcessorSupplier<K, V>)), because there is no concept of stream-time for this case. Calling this method in a global processor will result in an UnsupportedOperationException.
- Returns:
- the current stream-time in milliseconds
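A minimal sketch that compares stream-time with the cached system time inside a wall-clock punctuation to get a rough indication of how far the task lags behind; the one-minute interval and the log message are illustrative assumptions:

```java
import java.time.Duration;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;

public class LagReportingProcessor implements Processor<String, String> {

    @Override
    public void init(final ProcessorContext context) {
        context.schedule(Duration.ofMinutes(1), PunctuationType.WALL_CLOCK_TIME, timestamp -> {
            // Stream-time only advances when records are processed, so its distance from
            // wall-clock time is a rough indicator of how far this task lags behind.
            final long streamTimeMs = context.currentStreamTimeMs();
            final long systemTimeMs = context.currentSystemTimeMs();
            System.out.println("approximate lag in ms: " + (systemTimeMs - streamTimeMs));
        });
    }

    @Override
    public void process(final String key, final String value) { }

    @Override
    public void close() { }
}
```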
-