Class MockProcessorContext
- All Implemented Interfaces:
org.apache.kafka.streams.processor.internals.RecordCollector.Supplier, ProcessorContext
public class MockProcessorContext extends Object implements ProcessorContext, org.apache.kafka.streams.processor.internals.RecordCollector.Supplier
MockProcessorContext is a mock of ProcessorContext for users to test their Processor, Transformer, and ValueTransformer implementations.
The tests for this class (org.apache.kafka.streams.MockProcessorContextTest) include several behavioral tests that serve as example usage.
Note that this class does not take any automated actions (such as firing scheduled punctuators).
It simply captures any data it witnesses.
If you require more automated tests, we recommend wrapping your Processor in a minimal source-processor-sink Topology and using the TopologyTestDriver.
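For example, a minimal test sketch (not part of this Javadoc; the WordLengthProcessor class, keys, and values are illustrative) drives a Processor directly and inspects what the mock captured:

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.MockProcessorContext;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class WordLengthProcessorTest {

    // Hypothetical processor under test: forwards the length of each value it sees.
    static class WordLengthProcessor implements Processor<String, String> {
        private ProcessorContext context;
        @Override public void init(final ProcessorContext context) { this.context = context; }
        @Override public void process(final String key, final String value) { context.forward(key, value.length()); }
        @Override public void close() { }
    }

    public void shouldForwardValueLength() {
        final MockProcessorContext context = new MockProcessorContext();
        final Processor<String, String> processor = new WordLengthProcessor();
        processor.init(context);

        processor.process("a", "hello");

        // The mock captured the forward; a real test would use JUnit assertions here.
        final MockProcessorContext.CapturedForward forward = context.forwarded().get(0);
        assert forward.keyValue().equals(new KeyValue<>("a", 5));
    }
}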
-
Nested Class Summary
Nested Classes
static class MockProcessorContext.CapturedForward
static class MockProcessorContext.CapturedPunctuator
MockProcessorContext.CapturedPunctuator holds captured punctuators, along with their scheduling information.
-
Constructor Summary
Constructors
MockProcessorContext()
MockProcessorContext(Properties config)
MockProcessorContext(Properties config, TaskId taskId, File stateDir)
Create a MockProcessorContext with a specified taskId and stateDir.
-
Method Summary
Modifier and Type / Method / Description

Map<String,Object> appConfigs()
Return all the application config properties as key/value pairs.

Map<String,Object> appConfigsWithPrefix(String prefix)
Return all the application config properties with the given key prefix, as key/value pairs stripping the prefix.

String applicationId()
Return the application id.

void commit()
Request a commit.

boolean committed()
Whether ProcessorContext.commit() has been called in this context.

long currentStreamTimeMs()
Return the current stream-time in milliseconds.

long currentSystemTimeMs()
Return the current system timestamp (also called wall-clock time) in milliseconds.

<K, V> void forward(K key, V value)
Forward a key/value pair to all downstream processors.

<K, V> void forward(K key, V value, To to)
Forward a key/value pair to the specified downstream processors.

List<MockProcessorContext.CapturedForward> forwarded()
Get all the forwarded data this context has observed.

List<MockProcessorContext.CapturedForward> forwarded(String childName)
Get all the forwarded data this context has observed for a specific child by name.

<S extends StateStore> S getStateStore(String name)
Get the state store given the store name.

Headers headers()
Returns the headers of the current input record; could be null if it is not available.

Serde<?> keySerde()
Return the default key serde.

StreamsMetrics metrics()
Return Metrics instance.

long offset()
Return the offset of the current input record; could be -1 if it is not available.

int partition()
Return the partition id of the current input record; could be -1 if it is not available.

org.apache.kafka.streams.processor.internals.RecordCollector recordCollector()

void register(StateStore store, StateRestoreCallback stateRestoreCallbackIsIgnoredInMock)
Register and possibly restore the specified storage engine.

void resetCommit()
Reset the commit capture to false (whether or not it was previously true).

void resetForwards()
Clear the captured forwarded data.

Cancellable schedule(Duration interval, PunctuationType type, Punctuator callback)
Schedule a periodic operation for processors.

List<MockProcessorContext.CapturedPunctuator> scheduledPunctuators()
Get the punctuators scheduled so far.

void setCurrentStreamTimeMs(long currentStreamTimeMs)

void setCurrentSystemTimeMs(long currentSystemTimeMs)

void setHeaders(Headers headers)
The context exposes this metadata for use in the processor.

void setOffset(long offset)
The context exposes this metadata for use in the processor.

void setPartition(int partition)
The context exposes this metadata for use in the processor.

void setRecordMetadata(String topic, int partition, long offset, Headers headers, long timestamp)
The context exposes this metadata for use in the processor.

void setRecordTimestamp(long recordTimestamp)
The context exposes this metadata for use in the processor.

void setTimestamp(long timestamp)
Deprecated.

void setTopic(String topic)
The context exposes this metadata for use in the processor.

File stateDir()
Return the state directory for the partition.

TaskId taskId()
Return the task id.

long timestamp()
Return the current timestamp.

String topic()
Return the topic name of the current input record; could be null if it is not available.

Serde<?> valueSerde()
Return the default value serde.
-
Constructor Details
-
MockProcessorContext
public MockProcessorContext()
Create a MockProcessorContext with dummy config and taskId and null stateDir. Most unit tests using this mock won't need to know the taskId, and most unit tests should be able to get by with the InMemoryKeyValueStore, so the stateDir won't matter.
-
MockProcessorContext
public MockProcessorContext(Properties config)
Create a MockProcessorContext with dummy taskId and null stateDir. Most unit tests using this mock won't need to know the taskId, and most unit tests should be able to get by with the InMemoryKeyValueStore, so the stateDir won't matter.
- Parameters:
config - a Properties object, used to configure the context and the processor.
-
MockProcessorContext
public MockProcessorContext(Properties config, TaskId taskId, File stateDir)
Create a MockProcessorContext with a specified config, taskId, and stateDir.
- Parameters:
config - a Properties object, used to configure the context and the processor.
taskId - a TaskId, which the context makes available via taskId().
stateDir - a File, which the context makes available via stateDir().
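A sketch of constructing the mock with a config (the property values are illustrative); supplying a config populates appConfigs(), applicationId(), and the default serdes:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.MockProcessorContext;

final Properties config = new Properties();
config.put(StreamsConfig.APPLICATION_ID_CONFIG, "test-app");
config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
config.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
config.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

final MockProcessorContext context = new MockProcessorContext(config);
// applicationId() now returns "test-app", and keySerde()/valueSerde() return String serdes.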
-
-
Method Details
-
applicationId
Description copied from interface: ProcessorContext
Return the application id.
- Specified by: applicationId in interface ProcessorContext
- Returns:
- the application id
-
taskId
Description copied from interface: ProcessorContext
Return the task id.
- Specified by: taskId in interface ProcessorContext
- Returns:
- the task id
-
appConfigs
Description copied from interface: ProcessorContext
Return all the application config properties as key/value pairs.
The config properties are defined in the StreamsConfig object and associated with the ProcessorContext. The type of the values is dependent on the type of the property (e.g. the value of DEFAULT_KEY_SERDE_CLASS_CONFIG will be of type Class, even if it was specified as a String to StreamsConfig(Map)).
- Specified by: appConfigs in interface ProcessorContext
- Returns:
- all the key/values from the StreamsConfig properties
-
appConfigsWithPrefix
Description copied from interface: ProcessorContext
Return all the application config properties with the given key prefix, as key/value pairs stripping the prefix.
The config properties are defined in the StreamsConfig object and associated with the ProcessorContext.
- Specified by: appConfigsWithPrefix in interface ProcessorContext
- Parameters:
prefix - the properties prefix
- Returns:
- the key/values matching the given prefix from the StreamsConfig properties.
-
currentSystemTimeMs
public long currentSystemTimeMs()
Description copied from interface: ProcessorContext
Return the current system timestamp (also called wall-clock time) in milliseconds.
Note: this method returns the internally cached system timestamp from the Kafka Streams runtime. Thus, it may return a different value compared to System.currentTimeMillis().
- Specified by: currentSystemTimeMs in interface ProcessorContext
- Returns:
- the current system timestamp in milliseconds
-
currentStreamTimeMs
public long currentStreamTimeMs()
Description copied from interface: ProcessorContext
Return the current stream-time in milliseconds.
Stream-time is the maximum observed record timestamp so far (including the currently processed record), i.e., it can be considered a high-watermark. Stream-time is tracked on a per-task basis and is preserved across restarts and during task migration.
Note: this method is not supported for global processors (cf. Topology.addGlobalStore(...) and StreamsBuilder.addGlobalStore(...)), because there is no concept of stream-time for this case. Calling this method in a global processor will result in an UnsupportedOperationException.
- Specified by: currentStreamTimeMs in interface ProcessorContext
- Returns:
- the current stream-time in milliseconds
-
keySerde
Description copied from interface: ProcessorContext
Return the default key serde.
- Specified by: keySerde in interface ProcessorContext
- Returns:
- the key serializer
-
valueSerde
Description copied from interface: ProcessorContext
Return the default value serde.
- Specified by: valueSerde in interface ProcessorContext
- Returns:
- the value serializer
-
stateDir
Description copied from interface: ProcessorContext
Return the state directory for the partition.
- Specified by: stateDir in interface ProcessorContext
- Returns:
- the state directory
-
metrics
Description copied from interface: ProcessorContext
Return Metrics instance.
- Specified by: metrics in interface ProcessorContext
- Returns:
- StreamsMetrics
-
setRecordMetadata
public void setRecordMetadata(String topic, int partition, long offset, Headers headers, long timestamp)
The context exposes this metadata for use in the processor. Normally, it is set by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly.
- Parameters:
topic - A topic name
partition - A partition number
offset - A record offset
headers - Record headers
timestamp - A record timestamp
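A sketch of injecting record metadata before calling the processor, assuming a context and processor set up as in the earlier examples (the topic name and values are illustrative):

import org.apache.kafka.common.header.internals.RecordHeaders;

// The values set here become visible to the processor via topic(), partition(),
// offset(), headers(), and timestamp().
context.setRecordMetadata("input-topic", 0, 42L, new RecordHeaders(), 1_000L);
processor.process("key", "value");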
-
setTopic
public void setTopic(String topic)
The context exposes this metadata for use in the processor. Normally, it is set by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly. Setting this attribute doesn't affect the others.
- Parameters:
topic - A topic name
-
setPartition
public void setPartition(int partition)
The context exposes this metadata for use in the processor. Normally, it is set by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly. Setting this attribute doesn't affect the others.
- Parameters:
partition - A partition number
-
setOffset
public void setOffset(long offset)
The context exposes this metadata for use in the processor. Normally, it is set by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly. Setting this attribute doesn't affect the others.
- Parameters:
offset - A record offset
-
setHeaders
public void setHeaders(Headers headers)
The context exposes this metadata for use in the processor. Normally, it is set by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly. Setting this attribute doesn't affect the others.
- Parameters:
headers - Record headers
-
setTimestamp
public void setTimestamp(long timestamp)
Deprecated. Since 3.0.0; use setRecordTimestamp(long) instead.
The context exposes this metadata for use in the processor. Normally, it is set by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly. Setting this attribute doesn't affect the others.
- Parameters:
timestamp - A record timestamp
-
setRecordTimestamp
public void setRecordTimestamp(long recordTimestamp)
The context exposes this metadata for use in the processor. Normally, it is set by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly. Setting this attribute doesn't affect the others.
- Parameters:
recordTimestamp - A record timestamp
-
setCurrentSystemTimeMs
public void setCurrentSystemTimeMs(long currentSystemTimeMs) -
setCurrentStreamTimeMs
public void setCurrentStreamTimeMs(long currentStreamTimeMs) -
topic
Description copied from interface: ProcessorContext
Return the topic name of the current input record; could be null if it is not available.
For example, if this method is invoked within a punctuation callback, or while processing a record that was forwarded by a punctuation callback, the record won't have an associated topic. Another example is KTable.transformValues(ValueTransformerWithKeySupplier, String...) (and siblings), that do not always guarantee to provide a valid topic name, as they might be executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL.
- Specified by: topic in interface ProcessorContext
- Returns:
- the topic name
-
partition
public int partition()
Description copied from interface: ProcessorContext
Return the partition id of the current input record; could be -1 if it is not available.
For example, if this method is invoked within a punctuation callback, or while processing a record that was forwarded by a punctuation callback, the record won't have an associated partition id. Another example is KTable.transformValues(ValueTransformerWithKeySupplier, String...) (and siblings), that do not always guarantee to provide a valid partition id, as they might be executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL.
- Specified by: partition in interface ProcessorContext
- Returns:
- the partition id
-
offset
public long offset()
Description copied from interface: ProcessorContext
Return the offset of the current input record; could be -1 if it is not available.
For example, if this method is invoked within a punctuation callback, or while processing a record that was forwarded by a punctuation callback, the record won't have an associated offset. Another example is KTable.transformValues(ValueTransformerWithKeySupplier, String...) (and siblings), that do not always guarantee to provide a valid offset, as they might be executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL.
- Specified by: offset in interface ProcessorContext
- Returns:
- the offset
-
headers
public Headers headers()
Returns the headers of the current input record; could be null if it is not available.
Note that headers should never be null in the actual Kafka Streams runtime, even if they could be empty. However, this mock does not guarantee non-null headers. Thus, to use this mock for testing you either need to add a null check to your production code, or you need to always set headers manually via setHeaders(Headers), to avoid a NullPointerException from your Processor implementation.
- Specified by: headers in interface ProcessorContext
- Returns:
- the headers
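A sketch of handling the non-null-headers caveat in a test, either by setting headers explicitly on the mock or by guarding in the code under test (the header key and value are illustrative):

import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeaders;

// Option 1: set headers on the mock so headers() is never null.
context.setHeaders(new RecordHeaders());

// Option 2: have the processor code tolerate null headers from the mock.
final Headers headers = context.headers();
if (headers != null) {
    headers.add("origin", "unit-test".getBytes(StandardCharsets.UTF_8));
}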
-
timestamp
public long timestamp()
Description copied from interface: ProcessorContext
Return the current timestamp.
If it is triggered while processing a record streamed from the source processor, the timestamp is defined as the timestamp of the current input record; the timestamp is extracted from the ConsumerRecord by the TimestampExtractor. Note that an upstream Processor might have set a new timestamp by calling forward(..., To.all().withTimestamp(...)). In particular, some Kafka Streams DSL operators set result record timestamps explicitly, to guarantee deterministic results.
If it is triggered while processing a record not generated by the source processor (for example, if this method is invoked from the punctuate call), the timestamp is defined as the current task's stream time, which is defined as the largest timestamp of any record processed by the task.
- Specified by: timestamp in interface ProcessorContext
- Returns:
- the timestamp
-
register
public void register(StateStore store, StateRestoreCallback stateRestoreCallbackIsIgnoredInMock)
Description copied from interface: ProcessorContext
Register and possibly restore the specified storage engine.
- Specified by: register in interface ProcessorContext
- Parameters:
store - the storage engine
stateRestoreCallbackIsIgnoredInMock - the restoration callback logic for log-backed state stores upon restart
-
getStateStore
public <S extends StateStore> S getStateStore(String name)
Description copied from interface: ProcessorContext
Get the state store given the store name.
- Specified by: getStateStore in interface ProcessorContext
- Type Parameters:
S - The type or interface of the store to return
- Parameters:
name - The store name
- Returns:
- The state store instance
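A sketch of supplying a real in-memory store to the mock (the store name "counts" is illustrative); initializing the store registers it with the context, after which the processor can retrieve it via getStateStore(String):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

final KeyValueStore<String, Long> store =
    Stores.keyValueStoreBuilder(Stores.inMemoryKeyValueStore("counts"), Serdes.String(), Serdes.Long())
        .withLoggingDisabled() // changelog restoration is not simulated by the mock
        .build();
store.init(context, store); // registers the store with the MockProcessorContext

final KeyValueStore<String, Long> fromContext = context.getStateStore("counts");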
-
schedule
public Cancellable schedule(Duration interval, PunctuationType type, Punctuator callback) throws IllegalArgumentException
Description copied from interface: ProcessorContext
Schedule a periodic operation for processors. A processor may call this method during initialization or processing to schedule a periodic callback, called a punctuation, to Punctuator.punctuate(long). The type parameter controls what notion of time is used for punctuation:
- PunctuationType.STREAM_TIME: uses "stream time", which is advanced by the processing of messages in accordance with the timestamp as extracted by the TimestampExtractor in use. The first punctuation will be triggered by the first record that is processed. NOTE: Only advanced if messages arrive.
- PunctuationType.WALL_CLOCK_TIME: uses system time (the wall-clock time), which is advanced independent of whether new messages arrive. The first punctuation will be triggered after the interval has elapsed. NOTE: This is best effort only, as its granularity is limited by how long an iteration of the processing loop takes to complete.
Punctuations may be skipped if the current punctuation would be triggered more than the interval after the previous one:
- with PunctuationType.STREAM_TIME, when stream time advances more than the interval
- with PunctuationType.WALL_CLOCK_TIME, on GC pause, too short an interval, ...
- Specified by: schedule in interface ProcessorContext
- Parameters:
interval - the time interval between punctuations (supported minimum is 1 millisecond)
type - one of: PunctuationType.STREAM_TIME, PunctuationType.WALL_CLOCK_TIME
callback - a function consuming timestamps representing the current stream or system time
- Returns:
- a handle allowing cancellation of the punctuation schedule established by this method
- Throws:
IllegalArgumentException - if the interval is not representable in milliseconds
-
scheduledPunctuators
Get the punctuators scheduled so far. The returned list is not affected by subsequent calls to schedule(...).
- Returns:
- A list of captured punctuators.
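A sketch of testing a punctuator with the mock, assuming the processor under test schedules one in init() (the 30-second interval and the flush behavior are illustrative); since the mock never fires punctuators itself, the test invokes the captured callback directly:

import java.time.Duration;
import org.apache.kafka.streams.processor.MockProcessorContext.CapturedPunctuator;
import org.apache.kafka.streams.processor.PunctuationType;

// Assumed: the processor's init() calls
// context.schedule(Duration.ofSeconds(30), PunctuationType.STREAM_TIME, ...);
processor.init(context);

final CapturedPunctuator punctuator = context.scheduledPunctuators().get(0);
assert punctuator.getIntervalMs() == Duration.ofSeconds(30).toMillis();
assert punctuator.getType() == PunctuationType.STREAM_TIME;

// Fire the punctuation manually at a chosen stream time.
punctuator.getPunctuator().punctuate(60_000L);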
-
forward
public <K, V> void forward(K key, V value)
Description copied from interface: ProcessorContext
Forward a key/value pair to all downstream processors. Uses the input record's timestamp as the timestamp for the output record.
If this method is called within Punctuator.punctuate(long), the record that is sent downstream won't have any associated record metadata like topic, partition, or offset.
- Specified by: forward in interface ProcessorContext
- Parameters:
key - key
value - value
-
forward
public <K, V> void forward(K key, V value, To to)
Description copied from interface: ProcessorContext
Forward a key/value pair to the specified downstream processors. Can be used to set the timestamp of the output record.
If this method is called within Punctuator.punctuate(long), the record that is sent downstream won't have any associated record metadata like topic, partition, or offset.
- Specified by: forward in interface ProcessorContext
- Parameters:
key - key
value - value
to - the options to use when forwarding
-
forwarded
Get all the forwarded data this context has observed. The returned list will not be affected by subsequent interactions with the context. The data in the list is in the same order as the calls to forward(...).
- A list of key/value pairs that were previously passed to the context.
-
forwarded
Get all the forwarded data this context has observed for a specific child by name. The returned list will not be affected by subsequent interactions with the context. The data in the list is in the same order as the calls to forward(...).
- Parameters:
childName - The child name to retrieve forwards for
- Returns:
- A list of key/value pairs that were previously passed to the context.
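A sketch of inspecting captured forwards, assuming a processor that forwards one record per input (as in the earlier example; keys and values are illustrative):

processor.process("a", "apple");
processor.process("b", "banana");

for (final MockProcessorContext.CapturedForward forward : context.forwarded()) {
    // keyValue() is the forwarded pair; childName() is null when no specific child was targeted.
    System.out.println(forward.keyValue() + " -> " + forward.childName());
}

context.resetForwards();
assert context.forwarded().isEmpty();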
-
resetForwards
public void resetForwards()
Clear the captured forwarded data.
-
commit
public void commit()
Description copied from interface: ProcessorContext
Request a commit.
- Specified by: commit in interface ProcessorContext
-
committed
public boolean committed()
Whether ProcessorContext.commit() has been called in this context.
- Returns:
- true iff ProcessorContext.commit() has been called in this context since construction or reset.
-
resetCommit
public void resetCommit()
Reset the commit capture to false (whether or not it was previously true).
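A sketch of verifying commit requests, assuming a processor that calls context.commit() after processing a record (hypothetical behavior, continuing the earlier examples):

processor.process("a", "apple");
assert context.committed();

// Reset the flag so a later phase of the test can check commits independently.
context.resetCommit();
assert !context.committed();
-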
recordCollector
public org.apache.kafka.streams.processor.internals.RecordCollector recordCollector()
- Specified by: recordCollector in interface org.apache.kafka.streams.processor.internals.RecordCollector.Supplier