Class MockProcessorContext

java.lang.Object
org.apache.kafka.streams.processor.MockProcessorContext
All Implemented Interfaces:
org.apache.kafka.streams.processor.internals.RecordCollector.Supplier, ProcessorContext

public class MockProcessorContext extends Object implements ProcessorContext, org.apache.kafka.streams.processor.internals.RecordCollector.Supplier
MockProcessorContext is a mock of ProcessorContext for users to test their Processor, Transformer, and ValueTransformer implementations.

The tests for this class (org.apache.kafka.streams.MockProcessorContextTest) include several behavioral tests that serve as example usage.

Note that this class does not take any automated actions (such as firing scheduled punctuators). It simply captures any data it witnesses. If you require more automated tests, we recommend wrapping your Processor in a minimal source-processor-sink Topology and using the TopologyTestDriver.
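For illustration, here is a minimal sketch of that usage pattern: initialize a processor with the mock context, drive it by calling process(...) directly, and inspect what it forwarded. The processor class ValueLengthProcessor and all literal values are hypothetical and exist only for this example.

    import org.apache.kafka.streams.processor.MockProcessorContext;
    import org.apache.kafka.streams.processor.Processor;
    import org.apache.kafka.streams.processor.ProcessorContext;

    // Hypothetical processor under test: forwards the length of each value.
    class ValueLengthProcessor implements Processor<String, String> {
        private ProcessorContext context;

        @Override
        public void init(final ProcessorContext context) {
            this.context = context;
        }

        @Override
        public void process(final String key, final String value) {
            context.forward(key, value.length());
        }

        @Override
        public void close() { }
    }

    public class ValueLengthProcessorTest {
        public static void main(final String[] args) {
            final MockProcessorContext context = new MockProcessorContext();
            final ValueLengthProcessor processor = new ValueLengthProcessor();
            processor.init(context);

            processor.process("key", "hello");

            // forwarded() captures everything the processor sent downstream.
            final MockProcessorContext.CapturedForward forward = context.forwarded().get(0);
            System.out.println(forward.keyValue()); // expected: KeyValue(key, 5)
        }
    }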

  • Constructor Details

    • MockProcessorContext

      public MockProcessorContext()
      Create a MockProcessorContext with a dummy config and taskId, and a null stateDir. Most unit tests using this mock won't need to know the taskId, and most unit tests should be able to get by with the InMemoryKeyValueStore, so the stateDir won't matter.
    • MockProcessorContext

      public MockProcessorContext(Properties config)
      Create a MockProcessorContext with dummy taskId and null stateDir. Most unit tests using this mock won't need to know the taskId, and most unit tests should be able to get by with the InMemoryKeyValueStore, so the stateDir won't matter.
      Parameters:
      config - a Properties object, used to configure the context and the processor.
    • MockProcessorContext

      public MockProcessorContext(Properties config, TaskId taskId, File stateDir)
      Create a MockProcessorContext with a specified taskId and stateDir.
      Parameters:
      config - a Properties object, used to configure the context and the processor.
      taskId - a TaskId, which the context makes available via taskId().
      stateDir - a File, which the context makes available via stateDir().
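      For illustration, a construction sketch follows; the property values, taskId, and state directory path are arbitrary test-only placeholders.

      import java.io.File;
      import java.util.Properties;
      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.StreamsConfig;
      import org.apache.kafka.streams.processor.MockProcessorContext;
      import org.apache.kafka.streams.processor.TaskId;

      public class MockContextConstructionExample {
          public static void main(final String[] args) {
              final Properties config = new Properties();
              config.put(StreamsConfig.APPLICATION_ID_CONFIG, "unit-test-app");
              config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
              config.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
              config.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

              // Config-only constructor: dummy taskId, null stateDir.
              final MockProcessorContext context = new MockProcessorContext(config);

              // Fully specified constructor: explicit taskId and state directory.
              final MockProcessorContext fullContext =
                  new MockProcessorContext(config, new TaskId(0, 0), new File("/tmp/kafka-streams-test"));

              System.out.println(context.applicationId()); // unit-test-app
              System.out.println(fullContext.taskId());    // 0_0
          }
      }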
  • Method Details

    • applicationId

      public String applicationId()
      Description copied from interface: ProcessorContext
      Return the application id.
      Specified by:
      applicationId in interface ProcessorContext
      Returns:
      the application id
    • taskId

      public TaskId taskId()
      Description copied from interface: ProcessorContext
      Return the task id.
      Specified by:
      taskId in interface ProcessorContext
      Returns:
      the task id
    • appConfigs

      public Map<String,Object> appConfigs()
      Description copied from interface: ProcessorContext
      Return all the application config properties as key/value pairs.

      The config properties are defined in the StreamsConfig object and associated to the ProcessorContext.

      The type of the values is dependent on the type of the property (e.g. the value of DEFAULT_KEY_SERDE_CLASS_CONFIG will be of type Class, even if it was specified as a String to StreamsConfig(Map)).

      Specified by:
      appConfigs in interface ProcessorContext
      Returns:
      all the key/values from the StreamsConfig properties
    • appConfigsWithPrefix

      public Map<String,Object> appConfigsWithPrefix(String prefix)
      Description copied from interface: ProcessorContext
      Return all the application config properties with the given key prefix, as key/value pairs stripping the prefix.

      The config properties are defined in the StreamsConfig object and associated to the ProcessorContext.

      Specified by:
      appConfigsWithPrefix in interface ProcessorContext
      Parameters:
      prefix - the properties prefix
      Returns:
      the key/values matching the given prefix from the StreamsConfig properties.
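      As a small sketch of the prefix-stripping behavior described above (the property values are arbitrary; StreamsConfig.consumerPrefix(...) simply prepends "consumer." to the given key):

      import java.util.Properties;
      import org.apache.kafka.streams.StreamsConfig;
      import org.apache.kafka.streams.processor.MockProcessorContext;

      public class AppConfigsExample {
          public static void main(final String[] args) {
              final Properties config = new Properties();
              config.put(StreamsConfig.APPLICATION_ID_CONFIG, "prefix-test-app");
              config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
              // A consumer-scoped property, i.e. "consumer.max.poll.records".
              config.put(StreamsConfig.consumerPrefix("max.poll.records"), "100");

              final MockProcessorContext context = new MockProcessorContext(config);

              // appConfigs() exposes all properties; appConfigsWithPrefix(...) strips the prefix from matching keys.
              System.out.println(context.appConfigs().get(StreamsConfig.APPLICATION_ID_CONFIG));     // prefix-test-app
              System.out.println(context.appConfigsWithPrefix("consumer.").get("max.poll.records")); // 100
          }
      }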
    • currentSystemTimeMs

      public long currentSystemTimeMs()
      Description copied from interface: ProcessorContext
      Return the current system timestamp (also called wall-clock time) in milliseconds.

      Note: this method returns the internally cached system timestamp from the Kafka Streams runtime. Thus, it may return a different value compared to System.currentTimeMillis().

      Specified by:
      currentSystemTimeMs in interface ProcessorContext
      Returns:
      the current system timestamp in milliseconds
    • currentStreamTimeMs

      public long currentStreamTimeMs()
      Description copied from interface: ProcessorContext
      Return the current stream-time in milliseconds.

      Stream-time is the maximum observed record timestamp so far (including the currently processed record), i.e., it can be considered a high-watermark. Stream-time is tracked on a per-task basis and is preserved across restarts and during task migration.

      Note: this method is not supported for global processors (cf. Topology.addGlobalStore(...) and StreamsBuilder.addGlobalStore(...)), because there is no concept of stream-time for this case. Calling this method in a global processor will result in an UnsupportedOperationException.

      Specified by:
      currentStreamTimeMs in interface ProcessorContext
      Returns:
      the current stream-time in milliseconds
    • keySerde

      public Serde<?> keySerde()
      Description copied from interface: ProcessorContext
      Return the default key serde.
      Specified by:
      keySerde in interface ProcessorContext
      Returns:
      the default key serde
    • valueSerde

      public Serde<?> valueSerde()
      Description copied from interface: ProcessorContext
      Return the default value serde.
      Specified by:
      valueSerde in interface ProcessorContext
      Returns:
      the default value serde
    • stateDir

      public File stateDir()
      Description copied from interface: ProcessorContext
      Return the state directory for the partition.
      Specified by:
      stateDir in interface ProcessorContext
      Returns:
      the state directory
    • metrics

      public StreamsMetrics metrics()
      Description copied from interface: ProcessorContext
      Return the StreamsMetrics instance.
      Specified by:
      metrics in interface ProcessorContext
      Returns:
      StreamsMetrics
    • setRecordMetadata

      public void setRecordMetadata(String topic, int partition, long offset, Headers headers, long timestamp)
      The context exposes this metadata for use in the processor. Normally, it is set by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly.
      Parameters:
      topic - A topic name
      partition - A partition number
      offset - A record offset
      headers - Record headers
      timestamp - A record timestamp
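      For illustration, a short sketch of injecting record metadata before driving a processor; the topic name, partition, offset, and timestamp below are arbitrary test values.

      import org.apache.kafka.common.header.internals.RecordHeaders;
      import org.apache.kafka.streams.processor.MockProcessorContext;

      public class RecordMetadataExample {
          public static void main(final String[] args) {
              final MockProcessorContext context = new MockProcessorContext();

              // Pretend the next record came from topic "input", partition 0, offset 42, with timestamp 1000.
              context.setRecordMetadata("input", 0, 42L, new RecordHeaders(), 1_000L);

              // A processor initialized with this context would now observe the injected metadata.
              System.out.println(context.topic());     // input
              System.out.println(context.partition()); // 0
              System.out.println(context.offset());    // 42
              System.out.println(context.timestamp()); // 1000
          }
      }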
    • setTopic

      public void setTopic(String topic)
      The context exposes this metadata for use in the processor. Normally, it is set by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly. Setting this attribute doesn't affect the others.
      Parameters:
      topic - A topic name
    • setPartition

      public void setPartition(int partition)
      The context exposes this metadata for use in the processor. Normally, it is set by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly. Setting this attribute doesn't affect the others.
      Parameters:
      partition - A partition number
    • setOffset

      public void setOffset(long offset)
      The context exposes this metadata for use in the processor. Normally, it is set by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly. Setting this attribute doesn't affect the others.
      Parameters:
      offset - A record offset
    • setHeaders

      public void setHeaders(Headers headers)
      The context exposes this metadata for use in the processor. Normally, it is set by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly. Setting this attribute doesn't affect the others.
      Parameters:
      headers - Record headers
    • setTimestamp

      @Deprecated public void setTimestamp(long timestamp)
      Deprecated.
      Since 3.0.0; use setRecordTimestamp(long) instead.
      The context exposes this metadata for use in the processor. Normally, it is set by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly. Setting this attribute doesn't affect the others.
      Parameters:
      timestamp - A record timestamp
    • setRecordTimestamp

      public void setRecordTimestamp(long recordTimestamp)
      The context exposes this metadata for use in the processor. Normally, it is set by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly. Setting this attribute doesn't affect the others.
      Parameters:
      recordTimestamp - A record timestamp
    • setCurrentSystemTimeMs

      public void setCurrentSystemTimeMs(long currentSystemTimeMs)
      Set the system time (wall-clock time) returned by currentSystemTimeMs(). Normally, it is tracked by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly.
    • setCurrentStreamTimeMs

      public void setCurrentStreamTimeMs(long currentStreamTimeMs)
      Set the stream-time returned by currentStreamTimeMs(). Normally, it is tracked by the Kafka Streams framework, but for the purpose of driving unit tests, you can set it directly.
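      As a brief sketch, assuming these setters simply back the corresponding getters (the timestamps below are arbitrary):

      import org.apache.kafka.streams.processor.MockProcessorContext;

      public class MockTimeExample {
          public static void main(final String[] args) {
              final MockProcessorContext context = new MockProcessorContext();

              // Pin wall-clock time and stream-time to fixed values for deterministic tests.
              context.setCurrentSystemTimeMs(10_000L);
              context.setCurrentStreamTimeMs(5_000L);

              System.out.println(context.currentSystemTimeMs()); // 10000
              System.out.println(context.currentStreamTimeMs()); // 5000
          }
      }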
    • topic

      public String topic()
      Description copied from interface: ProcessorContext
      Return the topic name of the current input record; could be null if it is not available.

      For example, if this method is invoked within a punctuation callback, or while processing a record that was forwarded by a punctuation callback, the record won't have an associated topic. Another example is KTable.transformValues(ValueTransformerWithKeySupplier, String...) (and siblings), which do not always guarantee to provide a valid topic name, as they might be executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL.

      Specified by:
      topic in interface ProcessorContext
      Returns:
      the topic name
    • partition

      public int partition()
      Description copied from interface: ProcessorContext
      Return the partition id of the current input record; could be -1 if it is not available.

      For example, if this method is invoked within a punctuation callback, or while processing a record that was forwarded by a punctuation callback, the record won't have an associated partition id. Another example is KTable.transformValues(ValueTransformerWithKeySupplier, String...) (and siblings), which do not always guarantee to provide a valid partition id, as they might be executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL.

      Specified by:
      partition in interface ProcessorContext
      Returns:
      the partition id
    • offset

      public long offset()
      Description copied from interface: ProcessorContext
      Return the offset of the current input record; could be -1 if it is not available.

      For example, if this method is invoked within a punctuation callback, or while processing a record that was forwarded by a punctuation callback, the record won't have an associated offset. Another example is KTable.transformValues(ValueTransformerWithKeySupplier, String...) (and siblings), which do not always guarantee to provide a valid offset, as they might be executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL.

      Specified by:
      offset in interface ProcessorContext
      Returns:
      the offset
    • headers

      public Headers headers()
      Returns the headers of the current input record; could be null if it is not available.

      Note that headers should never be null in the actual Kafka Streams runtime, even if they could be empty. However, this mock does not guarantee non-null headers. Thus, you either need to add a null check to your production code to use this mock for testing, or you always need to set headers manually via setHeaders(Headers) to avoid a NullPointerException from your Processor implementation.

      Specified by:
      headers in interface ProcessorContext
      Returns:
      the headers
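      A small sketch of the null-headers caveat described above; the header key and value are arbitrary.

      import java.nio.charset.StandardCharsets;
      import org.apache.kafka.common.header.Headers;
      import org.apache.kafka.common.header.internals.RecordHeaders;
      import org.apache.kafka.streams.processor.MockProcessorContext;

      public class HeadersExample {
          public static void main(final String[] args) {
              final MockProcessorContext context = new MockProcessorContext();

              // Without explicit metadata, this mock may return null here.
              System.out.println(context.headers());

              // Setting headers explicitly avoids NullPointerExceptions in processors that read them.
              final Headers headers = new RecordHeaders();
              headers.add("source", "unit-test".getBytes(StandardCharsets.UTF_8));
              context.setHeaders(headers);
              System.out.println(context.headers().lastHeader("source"));
          }
      }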
    • timestamp

      public long timestamp()
      Description copied from interface: ProcessorContext
      Return the current timestamp.

      If it is triggered while processing a record streamed from the source processor, timestamp is defined as the timestamp of the current input record; the timestamp is extracted from ConsumerRecord by TimestampExtractor. Note that an upstream Processor might have set a new timestamp by calling forward(..., To.all().withTimestamp(...)). In particular, some Kafka Streams DSL operators set result record timestamps explicitly, to guarantee deterministic results.

      If it is triggered while processing a record generated not from the source processor (for example, if this method is invoked from the punctuate call), timestamp is defined as the current task's stream time, which is defined as the largest timestamp of any record processed by the task.

      Specified by:
      timestamp in interface ProcessorContext
      Returns:
      the timestamp
    • register

      public void register(StateStore store, StateRestoreCallback stateRestoreCallbackIsIgnoredInMock)
      Description copied from interface: ProcessorContext
      Register and possibly restore the specified storage engine.
      Specified by:
      register in interface ProcessorContext
      Parameters:
      store - the storage engine
      stateRestoreCallbackIsIgnoredInMock - the restoration callback logic for log-backed state stores upon restart
    • getStateStore

      public <S extends StateStore> S getStateStore(String name)
      Description copied from interface: ProcessorContext
      Get the state store given the store name.
      Specified by:
      getStateStore in interface ProcessorContext
      Type Parameters:
      S - The type or interface of the store to return
      Parameters:
      name - The store name
      Returns:
      The state store instance
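      A sketch of wiring a real in-memory store into the mock context; the store name "counts", the serdes, and the key/value are arbitrary, and changelogging is typically disabled when testing with this mock since there is no real Kafka to write a changelog to.

      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.processor.MockProcessorContext;
      import org.apache.kafka.streams.state.KeyValueStore;
      import org.apache.kafka.streams.state.Stores;

      public class StateStoreExample {
          public static void main(final String[] args) {
              final MockProcessorContext context = new MockProcessorContext();

              final KeyValueStore<String, Long> store =
                  Stores.keyValueStoreBuilder(
                          Stores.inMemoryKeyValueStore("counts"),
                          Serdes.String(),
                          Serdes.Long())
                      .withLoggingDisabled() // changelogging is typically disabled when testing with this mock
                      .build();

              // init(...) registers the store with the context; the restore callback is ignored by the mock.
              store.init(context, store);

              final KeyValueStore<String, Long> retrieved = context.getStateStore("counts");
              retrieved.put("key", 1L);
              System.out.println(retrieved.get("key")); // 1
          }
      }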
    • schedule

      public Cancellable schedule(Duration interval, PunctuationType type, Punctuator callback) throws IllegalArgumentException
      Description copied from interface: ProcessorContext
      Schedule a periodic operation for processors. A processor may call this method during initialization or processing to schedule a periodic callback — called a punctuation — to Punctuator.punctuate(long). The type parameter controls what notion of time is used for punctuation:
      • PunctuationType.STREAM_TIME — uses "stream time", which is advanced by the processing of messages in accordance with the timestamp as extracted by the TimestampExtractor in use. The first punctuation will be triggered by the first record that is processed. NOTE: Only advanced if messages arrive
      • PunctuationType.WALL_CLOCK_TIME — uses system time (the wall-clock time), which is advanced independent of whether new messages arrive. The first punctuation will be triggered after interval has elapsed. NOTE: This is best effort only as its granularity is limited by how long an iteration of the processing loop takes to complete
      Skipping punctuations: Punctuations will not be triggered more than once at any given timestamp. This means that "missed" punctuations will be skipped. It's possible to "miss" a punctuation if:
      • with PunctuationType.STREAM_TIME, when stream time advances more than interval
      • with PunctuationType.WALL_CLOCK_TIME, on GC pause, too short interval, ...
      Specified by:
      schedule in interface ProcessorContext
      Parameters:
      interval - the time interval between punctuations (supported minimum is 1 millisecond)
      type - one of: PunctuationType.STREAM_TIME, PunctuationType.WALL_CLOCK_TIME
      callback - a function consuming timestamps representing the current stream or system time
      Returns:
      a handle allowing cancellation of the punctuation schedule established by this method
      Throws:
      IllegalArgumentException - if the interval is not representable in milliseconds
    • scheduledPunctuators

      public List<MockProcessorContext.CapturedPunctuator> scheduledPunctuators()
      Get the punctuators scheduled so far. The returned list is not affected by subsequent calls to schedule(...).
      Returns:
      A list of captured punctuators.
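      A sketch of capturing and manually firing a punctuation; the interval and the lambda body are arbitrary, and note again that this mock never fires punctuators on its own.

      import java.time.Duration;
      import org.apache.kafka.streams.processor.MockProcessorContext;
      import org.apache.kafka.streams.processor.PunctuationType;

      public class PunctuatorExample {
          public static void main(final String[] args) {
              final MockProcessorContext context = new MockProcessorContext();

              // A processor under test would normally call schedule(...) from init(); here it is called directly.
              context.schedule(Duration.ofSeconds(30), PunctuationType.WALL_CLOCK_TIME,
                  timestamp -> System.out.println("punctuate at " + timestamp));

              // The mock only records the schedule; the test decides when (and whether) to fire it.
              final MockProcessorContext.CapturedPunctuator captured = context.scheduledPunctuators().get(0);
              System.out.println(captured.getIntervalMs()); // 30000
              captured.getPunctuator().punctuate(System.currentTimeMillis());
          }
      }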
    • forward

      public <K, V> void forward(K key, V value)
      Description copied from interface: ProcessorContext
      Forward a key/value pair to all downstream processors. Uses the input record's timestamp as the timestamp for the output record.

      If this method is called within Punctuator.punctuate(long), the record that is sent downstream won't have any associated record metadata like topic, partition, or offset.

      Specified by:
      forward in interface ProcessorContext
      Parameters:
      key - key
      value - value
    • forward

      public <K, V> void forward(K key, V value, To to)
      Description copied from interface: ProcessorContext
      Forward a key/value pair to the specified downstream processors. Can be used to set the timestamp of the output record.

      If this method is called within Punctuator.punctuate(long), the record that is sent downstream won't have any associated record metadata like topic, partition, or offset.

      Specified by:
      forward in interface ProcessorContext
      Parameters:
      key - key
      value - value
      to - the options to use when forwarding
    • forwarded

      public List<MockProcessorContext.CapturedForward> forwarded()
      Get all the forwarded data this context has observed. The returned list will not be affected by subsequent interactions with the context. The data in the list is in the same order as the calls to forward(...).
      Returns:
      A list of key/value pairs that were previously passed to the context.
    • forwarded

      public List<MockProcessorContext.CapturedForward> forwarded(String childName)
      Get all the forwarded data this context has observed for a specific child by name. The returned list will not be affected by subsequent interactions with the context. The data in the list is in the same order as the calls to forward(...).
      Parameters:
      childName - The child name to retrieve forwards for
      Returns:
      A list of key/value pairs that were previously passed to the context.
    • resetForwards

      public void resetForwards()
      Clear the captured forwarded data.
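      A sketch of inspecting and clearing captured forwards; the child name "sink-a" and the key/value pairs are hypothetical.

      import org.apache.kafka.streams.processor.MockProcessorContext;
      import org.apache.kafka.streams.processor.To;

      public class ForwardCaptureExample {
          public static void main(final String[] args) {
              final MockProcessorContext context = new MockProcessorContext();

              context.forward("a", 1);                     // broadcast to all children
              context.forward("b", 2, To.child("sink-a")); // addressed to a specific child

              // All captured forwards, in call order.
              for (final MockProcessorContext.CapturedForward forward : context.forwarded()) {
                  System.out.println(forward.keyValue() + " -> child " + forward.childName());
              }

              // Forwards visible to the named child.
              System.out.println(context.forwarded("sink-a").size());

              context.resetForwards();
              System.out.println(context.forwarded().isEmpty()); // true
          }
      }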
    • commit

      public void commit()
      Description copied from interface: ProcessorContext
      Request a commit.
      Specified by:
      commit in interface ProcessorContext
    • committed

      public boolean committed()
      Whether ProcessorContext.commit() has been called in this context.
      Returns:
      true iff ProcessorContext.commit() has been called in this context since construction or reset.
    • resetCommit

      public void resetCommit()
      Reset the commit capture to false (whether or not it was previously true).
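      A short sketch of the commit-capture cycle:

      import org.apache.kafka.streams.processor.MockProcessorContext;

      public class CommitCaptureExample {
          public static void main(final String[] args) {
              final MockProcessorContext context = new MockProcessorContext();

              System.out.println(context.committed()); // false

              // A processor under test might request a commit; the mock only records the request.
              context.commit();
              System.out.println(context.committed()); // true

              context.resetCommit();
              System.out.println(context.committed()); // false
          }
      }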
    • recordCollector

      public org.apache.kafka.streams.processor.internals.RecordCollector recordCollector()
      Specified by:
      recordCollector in interface org.apache.kafka.streams.processor.internals.RecordCollector.Supplier