All Classes and Interfaces

Class
Description
 
 
A convenient base class for configurations to extend.
 
Deprecated.
Since 3.0.
Provides the current status for a connector or a task, along with an identifier for its Connect worker
Represents an access control entry.
Represents a filter which matches access control entries.
Represents a binding between a resource pattern and an access control entry.
A filter which can match AclBinding objects.
 
 
Delete result for each ACL binding that matched a delete filter.
Represents an operation which an ACL grants or denies permission to perform.
Represents whether an ACL grants or denies permissions.
 
The administrative client for Kafka, which supports managing and inspecting topics, brokers, configurations and ACLs.
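A minimal sketch of typical Admin usage (not part of the original index; the bootstrap address and topic name are illustrative placeholders):

```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Properties;

public class AdminExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // "localhost:9092" is a placeholder bootstrap address.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Create a topic with 3 partitions and replication factor 1.
            admin.createTopics(List.of(new NewTopic("demo-topic", 3, (short) 1))).all().get();
            // Inspect the cluster's topics.
            admin.listTopics().names().get().forEach(System.out::println);
        }
    }
}
```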
The base class for in-built admin clients.
The AdminClient configuration class, which also contains constants for configuration entry names.
The Aggregator interface for aggregating values of the given key.
Indicates the operation tried to create an entity that already exists.
A class representing an alter configuration entry containing a name, value, and operation type.
 
An interface for enforcing a policy on alter configs requests.
Class containing the create request parameters.
The result of the Admin.alterConfigs(Map) call.
Any API exception that is part of the public protocol should be a subclass of this class and part of this package.
A read-only metadata class representing the state of the application and the current rebalance.
Assignment related configs for the Kafka Streams TaskAssignor.
 
An object representing contextual information from the authentication session.
This exception indicates that SASL authentication has failed.
Request context interface that provides data from request header as well as connection and authentication information to plugins.
 
 
Pluggable authorizer interface for Kafka brokers.
An exception that indicates that the authorizer is not ready to receive the request yet.
Runtime broker configuration metadata provided to authorizers during start up.
A SampledStat that maintains a simple average over its samples.
Interface for batching restoration of a StateStore. It is expected that implementations of this class will not call the StateRestoreCallback.restore(byte[], byte[]) method.
 
 
The Branched class is used to define the optional parameters when building branches with BranchedKStream.
Branches the records in the original stream based on the predicates supplied for the branch definitions.
 
 
Indicates that none of the specified brokers could be found.
This exception is thrown if the producer cannot allocate memory for a record within max.block.ms due to the buffer being too full.
Collection of builtin DslStoreSuppliers for Kafka Streams.
A DslStoreSuppliers that supplies all stores backed by an in-memory map
A DslStoreSuppliers that supplies all stores backed by RocksDB
 
 
 
ByteBufferSerializer always rewinds the position of the input buffer to zero for serialization.
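The rewind behavior above is a common gotcha; a small sketch (topic name is a placeholder) showing that flip() is unnecessary but the whole allocated buffer is serialized:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.serialization.ByteBufferSerializer;

public class ByteBufferSerializerExample {
    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocate(16);
        buffer.put("hello".getBytes(StandardCharsets.UTF_8)); // position is now 5
        ByteBufferSerializer serializer = new ByteBufferSerializer();
        // Because the serializer rewinds the position to zero, there is no
        // need to flip() the buffer first; note, however, that all 16
        // allocated bytes are serialized, not just the 5 that were written.
        byte[] bytes = serializer.serialize("demo-topic", buffer);
        System.out.println(bytes.length); // expected: 16
    }
}
```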
 
 
A callback interface that the user can implement to allow code to execute when the request is complete.
Checkpoint records emitted from MirrorCheckpointConnector.
Encapsulates the client instance id used for metrics collection by producers, consumers, and the admin client used by Kafka Streams.
 
Describes a configuration alteration to be made to a client quota entity.
 
Quota callback interface for brokers that enables customization of client quota computation.
Describes a client quota entity, which is a mapping of entity types to their names.
The metadata for an entity for which quota is configured.
Interface representing a quota configuration entity.
Describes a client quota entity filter.
Describes a component for applying a client quota filter.
Types of quotas that may be configured on brokers for client requests.
A MetricsReporter may implement this interface to indicate support for collecting client telemetry on the server side.
A client telemetry payload as sent by the client to the telemetry receiver.
ClientTelemetryReceiver defines the behaviour for telemetry receiver on the broker side which receives client telemetry metrics.
An immutable representation of a subset of the nodes, topics, and partitions in the Kafka cluster.
 
The ClusterResource class encapsulates metadata for a Kafka cluster.
A callback interface that users can implement when they wish to get notified about changes in the Cluster metadata.
CogroupedKStream is an abstraction of multiple grouped record streams of KeyValue pairs.
Stores can register this callback to be notified upon successful commit.
This exception is raised when an offset commit with KafkaConsumer.commitSync() fails with an unrecoverable error.
A compound stat is a stat where a single measurement and associated data structure feeds many metrics.
 
 
A configuration object containing the configuration entries for a resource.
 
A callback passed to ConfigProvider for subscribing to changes.
Configuration data from a ConfigProvider.
This class is used for specifying the set of expected configurations.
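A brief sketch of defining expected configurations with ConfigDef and AbstractConfig (the class and key names here are hypothetical):

```java
import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;

import java.util.Map;

public class MyPluginConfig extends AbstractConfig {
    // Declares the expected keys, their types, defaults, validators,
    // importance levels, and documentation.
    public static final ConfigDef CONFIG_DEF = new ConfigDef()
            .define("batch.size", ConfigDef.Type.INT, 100,
                    ConfigDef.Range.atLeast(1), ConfigDef.Importance.HIGH,
                    "Maximum number of records per batch.")
            .define("endpoint", ConfigDef.Type.STRING, ConfigDef.Importance.MEDIUM,
                    "Target endpoint URL.");

    public MyPluginConfig(Map<String, ?> originals) {
        super(CONFIG_DEF, originals);
    }
}
```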
 
 
 
The importance level for a configuration
 
 
 
 
 
Validation logic for numeric ranges
This is used by ConfigDef.validate(Map) to get valid values for a configuration given the current configuration values, in order to perform full configuration validation and visibility modification.
The type for a configuration value
Validation logic the user may provide to perform single configuration validation.
 
 
The width of a configuration value
A class representing a configuration entry containing name, value and additional metadata.
Source of configuration entries.
Class representing a configuration synonym of a ConfigEntry.
Data type of configuration entry.
Thrown if the user supplies an invalid configuration
A provider of configuration data, which may optionally support subscriptions to configuration changes.
A class representing resources that have configs.
Type of resource.
This class wraps a set of ConfigProvider instances and uses them to perform transformations.
The result of a transformation from ConfigTransformer.
A Mix-in style interface for classes that are instantiated by reflection and need to take configuration parameters
 
Provides immutable Connect cluster information, such as the ID of the backing Kafka cluster.
Provides the ability to lookup connector metadata, including status and configurations, as well as immutable cluster information such as Kafka cluster ID.
Provides a set of StoreBuilders that will be automatically added to the topology and connected to the associated processor.
ConnectException is the top-level exception type generated by Kafka Connect and connector implementations.
A basic Headers implementation.
Connectors manage integration of Kafka Connect with another system, either as an input that ingests data into Kafka or an output that passes data to an external system.
An interface for enforcing a policy on overriding of Kafka client configs via the connector configs.
 
 
ConnectorContext allows Connectors to proactively interact with the Kafka Connect runtime.
Provides basic health information about the connector and its tasks.
Describes the status, worker ID, and any errors associated with a connector.
An enum to represent the level of support for connector-defined transaction boundaries.
Enum definition that identifies the type of the connector.
Utilities that connector implementations might find useful.
Base class for records containing data to be copied to/from Kafka.
A plugin interface to allow registration of new JAX-RS resources like Filters, REST endpoints, providers, etc.
The interface provides the ability for ConnectRestExtension implementations to access the JAX-RS Configurable and cluster state ConnectClusterState.
 
The Consumed class is used to define the optional parameters when using StreamsBuilder to build instances of KStream, KTable, and GlobalKTable.
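A short sketch of supplying Serdes through Consumed when defining a source (topic name is a placeholder):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;

public class ConsumedExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // Consumed supplies the optional key/value Serdes (and, if desired,
        // a timestamp extractor and offset reset policy) for the source topic.
        KStream<String, Long> stream = builder.stream(
                "input-topic",
                Consumed.with(Serdes.String(), Serdes.Long()));
        stream.foreach((k, v) -> System.out.println(k + " -> " + v));
    }
}
```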
 
The consumer configuration keys
A detailed description of a single consumer group in the cluster.
A listing of a consumer group in the cluster.
A metadata struct containing the consumer group information.
Server-side partition assignor for consumer groups used by the GroupCoordinator.
The consumer group state.
A plugin interface that allows you to intercept (and possibly mutate) records received by the consumer.
This interface is used to define custom partition assignment for use in KafkaConsumer.
 
 
 
The rebalance protocol defines partition assignment and revocation semantics.
 
A callback interface that the user can implement to trigger custom actions when the set of partitions assigned to the consumer changes.
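A minimal listener sketch (the class name is hypothetical); it would be passed to KafkaConsumer.subscribe(topics, listener):

```java
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

import java.util.Collection;

// Logs partition movements; a real listener might commit offsets in
// onPartitionsRevoked or initialize per-partition state in onPartitionsAssigned.
public class LoggingRebalanceListener implements ConsumerRebalanceListener {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        System.out.println("Revoked: " + partitions);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        System.out.println("Assigned: " + partitions);
    }
}
```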
A key/value pair to be received from Kafka.
A container that holds the list ConsumerRecord per partition for a particular topic.
An abstract implementation of FixedKeyProcessor that manages the FixedKeyProcessorContext instance and provides default no-op implementation of FixedKeyProcessor.close().
An abstract implementation of Processor that manages the ProcessorContext instance and provides default no-op implementation of Processor.close().
 
The Converter interface provides support for translating between Kafka Connect's runtime data format and byte[].
Abstract class that defines the configuration options for Converter and HeaderConverter instances.
The type of Converter and HeaderConverter.
A cooperative version of the AbstractStickyAssignor.
In the context of the group coordinator, the broker returns this error code for any coordinator request if it is still loading the group metadata.
In the context of the group coordinator, the broker returns this error code for metadata or offset commit requests if the group metadata topic has not been created yet.
This exception indicates a record has failed its internal CRC check, this generally indicates network or disk corruption.
The result of the Admin.createAcls(Collection) call.
The result of the Admin.createPartitions(Map) call.
An interface for enforcing a policy on create topics requests.
Class containing the create request parameters.
 
A non-sampled version of WindowedCount maintained over all time.
A non-sampled cumulative total maintained over all time.
Base class for all Kafka Connect data API exceptions.
A date representing a calendar day with no time of day or timezone.
An arbitrary-precision signed decimal number.
A decoder is a method of turning byte arrays into objects.
The default implementation does nothing, just returns the same byte array it takes in.
ProductionExceptionHandler that always instructs streams to fail when an exception happens while attempting to produce result records.
Defines remote topics like "us-west.topic1".
A class representing a delegation token.
 
 
 
 
 
Options for the Admin.deleteAcls(Collection) call.
The result of the Admin.deleteAcls(Collection) call.
A class containing either the deleted ACL binding or an exception if the delete failed.
A class containing the results of the delete ACLs operation.
Represents information about deleted records. The API for this class is still evolving and we may break compatibility in minor releases, if necessary.
The result of the Admin.deleteRecords(Map) call.
The result of the Admin.deleteTopics(Collection) call.
The result of the Admin.describeAcls(AclBindingFilter) call.
The result of the Admin.describeCluster() call.
The result of the Admin.describeConfigs(Collection) call.
Options for Admin.describeLogDirs(Collection). The API of this class is evolving; see Admin for details.
The result of the Admin.describeLogDirs(Collection) call.
 
 
 
The result of the Admin.describeTopics(Collection) call.
 
The result of the Admin.describeUserScramCredentials() call.
Interface that specifies how an exception from source node deserialization (e.g., reading from Kafka) should be handled.
Enumeration that describes the response from the exception handler.
An interface for converting bytes to objects.
An implementation of ConfigProvider based on a directory of files.
Server disconnected before a request could be completed.
 
 
DslKeyValueParams is a wrapper class for all parameters that function as inputs to DslStoreSuppliers.keyValueStore(DslKeyValueParams).
DslSessionParams is a wrapper class for all parameters that function as inputs to DslStoreSuppliers.sessionStore(DslSessionParams).
DslStoreSuppliers defines a grouping of factories to construct stores for each of the types of state store implementations in Kafka Streams.
DslWindowParams is a wrapper class for all parameters that function as inputs to DslStoreSuppliers.windowStore(DslWindowParams).
 
Exception thrown due to a request that illegally refers to the same resource twice (for example, trying to both create and delete the same SCRAM credential for a particular user in a single request).
 
 
The result of Admin.electLeaders(ElectionType, Set, ElectLeadersOptions). The API of this class is evolving; see Admin for details.
 
This interface controls the strategy for how results are emitted in a processor.
 
Represents a broker endpoint.
Identifies the endpoint type, as specified by KIP-919.
An implementation of ConfigProvider based on environment variables.
Component that a SinkTask can use to report problematic records (and their corresponding problems) as it writes them through SinkTask.put(java.util.Collection).
An enum to represent the level of support for exactly-once semantics from a source connector.
Retrieves embedded metadata timestamps from Kafka messages.
This enumeration type captures the various top-level reasons that a particular partition of a store would fail to execute a query.
Encapsulates details about finalized as well as supported features.
Encapsulates details about an update to a finalized feature.
 
 
 
The request contained a leader epoch which is smaller than that on the broker that received the request.
 
Options for Admin.fenceProducers(Collection, FenceProducersOptions). The API of this class is evolving.
The result of the Admin.fenceProducers(Collection) call.
 
 
A field in a Struct, consisting of a field name, index, and Schema for the field value.
An implementation of ConfigProvider that represents a Properties file.
Represents a range of version levels supported by every broker in a cluster for some feature.
A processor of key-value pair records where keys are immutable.
Processor context interface for FixedKeyRecord.
A processor supplier that can create one or more FixedKeyProcessor instances.
A data class representing an incoming record with fixed key for processing in a FixedKeyProcessor or a record to forward to downstream processors via FixedKeyProcessorContext.
 
 
The ForeachAction interface for performing an action on a key-value pair.
 
ForwardingAdmin is the default value of forwarding.admin.class in MirrorMaker.
A CompoundStat that represents a normalized distribution with a Frequency metric for each bucketed value.
Definition of a frequency metric used in a Frequencies compound statistic.
A gauge metric is an instantaneous reading of a particular value.
GlobalKTable is an abstraction of a changelog stream from a primary-keyed table.
The partition assignment for a consumer group.
 
The class that is used to capture the key and value Serdes and set the part of name used for repartition topics when performing KStream.groupBy(KeyValueMapper, Grouped), KStream.groupByKey(Grouped), or KTable.groupBy(KeyValueMapper, Grouped) operations.
 
Indicates that a consumer group is already at its configured maximum capacity and cannot accommodate more members.
 
 
The group metadata specifications required to compute the target assignment.
 
 
 
A Header is a key-value pair, and multiple headers can be included with the key, value, and timestamp in each Kafka message.
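A small sketch of attaching and reading per-record headers (topic, key, and header names are placeholders):

```java
import java.nio.charset.StandardCharsets;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Header;

public class HeaderExample {
    public static void main(String[] args) {
        ProducerRecord<String, String> record =
                new ProducerRecord<>("demo-topic", "key", "value");
        // Headers carry per-record metadata alongside the key, value, and timestamp.
        record.headers().add("trace-id", "abc123".getBytes(StandardCharsets.UTF_8));
        for (Header header : record.headers()) {
            System.out.println(header.key() + " = "
                    + new String(header.value(), StandardCharsets.UTF_8));
        }
    }
}
```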
The HeaderConverter interface provides support for translating between Kafka Connect's runtime data format and byte[].
 
A mutable ordered collection of Header objects.
A function to transform the supplied Header.
Heartbeat message sent from MirrorHeartbeatTask to target cluster.
 
An algorithm for determining the bin in which a value is to be placed as well as calculating the upper end of each bin.
A scheme for calculating the bins where the width of each bin is a constant determined by the range of values and the number of bins.
A scheme for calculating the bins where the width of each bin is one more than the previous bin, and therefore the bin widths are increasing at a linear rate.
Represents a user defined endpoint in a KafkaStreams application.
IdentityReplicationPolicy does not rename remote topics.
 
This exception indicates unexpected requests prior to SASL authentication.
Indicates that a method has been invoked illegally or at an invalid time by a connector or task.
 
 
 
 
 
The Initializer interface for creating an initial value in aggregations.
The integer decoder translates bytes into integers.
 
 
Annotation to inform users of how much to rely on a particular package, class or method not changing over time.
Compatibility may be broken at minor releases.
Compatibility is maintained in major, minor and patch releases with one exception: compatibility may be broken in a major release.
No guarantee is provided as to reliability or stability across any level of release granularity.
 
An unchecked wrapper for InterruptedException
 
 
 
 
 
An exception that may indicate the client's metadata is out of date
Thrown when the offset for a set of partitions is invalid (either undefined or out of range), and no reset policy has been configured.
Thrown when the offset for a set of partitions is invalid (either undefined or out of range), and no reset policy has been configured.
 
 
 
This exception indicates that the produce request sent to the partition leader contains a non-matching producer epoch.
 
Thrown when a broker registration request is considered invalid by the controller.
 
 
Thrown when a request breaks basic wire protocol rules.
 
 
Indicates that there was a problem when trying to access a StateStore.
Indicates that the specific state store being queried via StoreQueryParameters used a partitioning that is not assigned to this instance.
Indicates the timestamp of a record is invalid.
The client has attempted to perform an operation on an invalid topic.
 
The transaction coordinator returns this error code if the timeout received via the InitProducerIdRequest is larger than the `transaction.max.timeout.ms` config value.
 
 
Register metrics in JMX as dynamic mbeans based on the metric names
The Joined class represents optional parameters that can be passed to KStream#join(KTable,...) and KStream#leftJoin(KTable,...) operations.
The window specifications used for joins.
The default implementation of Admin.
KafkaClientSupplier can be used to provide custom Kafka clients to a KafkaStreams instance.
A client that consumes records from a Kafka cluster.
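A minimal poll-loop sketch (bootstrap address, group ID, and topic are placeholders):

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s-%d@%d: %s%n",
                            record.topic(), record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```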
The base class of all other Kafka exceptions
A flexible future which supports call chaining and other asynchronous programming patterns.
A function which takes objects of type A and returns objects of type B.
A consumer of two different types of object.
Deprecated.
Since Kafka 3.0.
 
An implementation of MetricsContext that encapsulates required metrics context properties for Kafka services and clients.
Principals in Kafka are defined by a type and a name.
Pluggable principal builder interface which supports both SSL authentication through SslAuthenticationContext and SASL through SaslAuthenticationContext.
Serializer/Deserializer interface for KafkaPrincipal for the purpose of inter-broker forwarding.
A Kafka client that publishes records to the Kafka cluster.
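A minimal producing sketch, which also illustrates the Callback interface listed earlier (bootstrap address and topic are placeholders):

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The Callback lambda runs when the broker acknowledges the write
            // (or when the send fails).
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.println("stored at offset " + metadata.offset());
                        }
                    });
        }
    }
}
```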
Miscellaneous disk-related IOException occurred when handling a request.
A Kafka client that allows for performing continuous computation on input coming from one or more input topics and sends output to zero, one, or more output topics.
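A minimal end-to-end Kafka Streams application sketch (application ID, bootstrap address, and topic names are placeholders):

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsAppExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "demo-app");           // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Copy records from one topic to another, upper-casing the values.
        builder.<String, String>stream("input-topic")
                .mapValues(v -> v.toUpperCase())
                .to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        streams.start();
    }
}
```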
Class that handles options passed when scaling down a KafkaStreams instance.
Kafka Streams states are the possible states that a Kafka Streams instance can be in.
Listen to KafkaStreams.State change events.
A simple container class for the assignor to return the desired placement of active and standby tasks on KafkaStreams clients.
 
 
A read-only metadata class representing the current state of each KafkaStreams client with at least one StreamThread participating in this rebalance.
Interactive query for retrieving a single record based on its key.
Represents all the metadata related to a key, where a particular key resides in a KafkaStreams application.
A key-value pair defined for a single Kafka Streams record.
A store supplier that can be used to create one or more KeyValueStore<Bytes, byte[]> instances.
Iterator interface of KeyValue.
The KeyValueMapper interface for mapping a key-value pair to a new value of arbitrary type.
A key-value store that supports put/get/delete and range queries.
KGroupedStream is an abstraction of a grouped record stream of KeyValue pairs.
KGroupedTable is an abstraction of a re-grouped changelog stream from a primary-keyed table, usually on a different grouping key than the original primary key.
KStream is an abstraction of a record stream of KeyValue pairs, i.e., each record is an independent entity/event in the real world.
KTable is an abstraction of a changelog stream from a primary-keyed table.
Encapsulates information about lag, at a store partition replica (active or standby).
There is no currently available leader for the given partition (either because a leadership election is in progress or because all replicas are down).
The result of the Admin.listClientMetricsResources() call.
Specification of consumer group offsets to list using Admin.listConsumerGroupOffsets(java.util.Map).
The result of the Admin.listConsumerGroups() call.
 
The leader does not have an endpoint corresponding to the listener on which metadata was requested.
The result of the Admin.listOffsets(Map) call.
 
Options for Admin.listPartitionReassignments(ListPartitionReassignmentsOptions). The API of this class is evolving.
 
Options for Admin.listTopics().
The result of the Admin.listTopics() call.
The result of the Admin.listTransactions() call.
Indicates that the state store directory lock could not be acquired because another thread holds the lock.
Deserialization handler that logs a deserialization exception and then signals the processing pipeline to continue processing more records.
Deserialization handler that logs a deserialization exception and then signals the processing pipeline to stop processing more records and fail.
Retrieves embedded metadata timestamps from Kafka messages.
A description of a log directory on a particular broker.
Thrown when a request is made for a log directory that is not present on the broker
Login interface for authentication.
This class holds definitions for log level configurations related to Kafka's application logging.
This represents all the required data and indexes for a specific log segment that needs to be stored in the remote storage.
In the event of an unclean leader election, the log will be truncated, previously committed data will be lost, and new data will be written over these offsets.
The long decoder translates bytes into longs.
 
 
Used to describe how a StateStore should be materialized.
 
A SampledStat that gives the max over its samples.
A measurable quantity that can be registered as a metric
A MeasurableStat is a Stat that is also Measurable.
A description of the assignments of a specific group member.
The partition assignment for a consumer group member.
A detailed description of a single group instance in the cluster.
 
Interface representing the subscription metadata for a group member.
A struct containing information about the member to be removed.
The interface for merging aggregate values for SessionWindows with the given key.
This interface allows users to define Formatters that can be used to parse and format records read by a Consumer instance for display.
A compound stat that includes a rate metric and a cumulative total metric.
A metric tracked for monitoring purposes.
Configuration values for metrics
The MetricName class encapsulates a metric's name, logical group and its related attributes.
A template for a MetricName.
A registry of sensors and metrics.
MetricsContext encapsulates additional contextLabels about metrics exposed via a MetricsReporter.
A plugin interface that allows implementations to listen as new metrics are created so they can be reported.
Super-interface for Measurable or Gauge that provides metric values.
A SampledStat that gives the min over its samples.
Interprets MM2's internal topics (checkpoints, heartbeats) on a given cluster.
Configuration required for MirrorClient to talk to a given target cluster.
 
 
This connector provides support for mocking certain connector behaviors.
A mock of the Consumer interface you can use for testing code that uses Kafka.
MockProcessorContext is a mock of ProcessorContext for users to test their Processor, Transformer, and ValueTransformer implementations.
MockProcessorContext is a mock of ProcessorContext for users to test their Processor, Transformer, and ValueTransformer implementations.
 
 
MockProcessorContext.CapturedPunctuator holds captured punctuators, along with their scheduling information.
MockProcessorContext.CapturedPunctuator holds captured punctuators, along with their scheduling information.
A mock of the producer interface you can use for testing code that uses Kafka.
Mock sink implementation which delegates to MockConnector.
 
Mock source implementation which delegates to MockConnector.
 
Interactive query for retrieving a set of records with the same specified key and different timestamps within the specified time range.
 
A misc.
 
A new partition reassignment, which can be applied via Admin.alterPartitionReassignments(Map, AlterPartitionReassignmentsOptions).
Describes new partitions for a particular topic in a call to Admin.createPartitions(Map).
A new topic to be created via Admin.createTopics(Collection).
Information about a Kafka node
Indicates that there is no stored offset for a partition and no defined offset reset policy.
Thrown if a reassignment cannot be cancelled because none is in progress.
 
In the context of the group coordinator, the broker returns this error code if it receives an offset fetch or commit request for a group it's not the coordinator of.
The number of in-sync replicas for the partition is lower than min.insync.replicas. This exception is raised when the low ISR size is discovered *after* the message was already appended to the log.
The number of in-sync replicas for the partition is lower than min.insync.replicas.
Indicates that an operation attempted to modify or delete a connector or task that is not present on the worker.
Deprecated.
Since 2.6.
Broker returns this error if a request could not be processed because the broker is not the leader or follower for a topic partition.
A Callback for use by the SaslServer implementation when it needs to validate the SASL extensions for the OAUTHBEARER mechanism Callback handlers should use the OAuthBearerExtensionsValidatorCallback.valid(String) method to communicate valid extensions back to the SASL server.
OAuthBearerLoginCallbackHandler is an AuthenticateCallbackHandler that accepts OAuthBearerTokenCallback and SaslExtensionsCallback callbacks to perform the steps to request a JWT from an OAuth/OIDC provider using the client_credentials grant type.
Deprecated.
See org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginCallbackHandler
The LoginModule for the SASL/OAUTHBEARER mechanism.
The b64token value as defined in RFC 6750 Section 2.1 along with the token's specific scope and lifetime and principal name.
A Callback for use by the SaslClient and Login implementations when they require an OAuth 2 bearer token.
A Callback for use by the SaslServer implementation when it needs to provide an OAuth 2 bearer token compact serialization for validation.
OAuthBearerValidatorCallbackHandler is an AuthenticateCallbackHandler that accepts OAuthBearerValidatorCallback and OAuthBearerExtensionsValidatorCallback callbacks to implement OAuth/OIDC validation.
Deprecated.
See org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallbackHandler
The Kafka offset commit API allows users to provide additional metadata (in the form of a string) when an offset is committed.
A container class for offset and timestamp.
A callback interface that the user can implement to trigger custom actions when a commit request completes.
The client has tried to save its offset with associated metadata larger than the maximum size allowed by the server.
 
Indicates that the leader is not able to guarantee monotonically increasing offsets due to the high watermark lagging behind the epoch start offset after a recent leader election
No reset policy has been defined, and the offsets for these partitions are either larger or smaller than the range of offsets the server has for the given partition.
No reset policy has been defined, and the offsets for these partitions are either larger or smaller than the range of offsets the server has for the given partition.
 
This class allows users to specify the desired offsets when using KafkaAdminClient.listOffsets(Map, ListOffsetsOptions).
 
 
 
 
OffsetStorageReader provides access to the offset storage used by sources.
Indicates that the broker did not attempt to execute this operation.
This exception indicates that the broker received an unexpected sequence number from the producer, which means that data may have been lost.
Server-side partition assignor used by the GroupCoordinator.
Partitioner Interface
This is used to describe per-partition state in the MetadataResponse.
A partition reassignment, which has been listed via Admin.listPartitionReassignments().
Resource pattern type.
 
A compound stat that reports one or more percentiles
 
 
 
 
Exception thrown if a create topics request does not satisfy the configured policy for a topic.
A representation of a position vector with respect to a set of topic partitions.
A class bounding the processing state Position during queries.
 
A predicate on records.
The Predicate interface represents a predicate (boolean-valued function) of a KeyValue pair.
 
Exception used to indicate a Kafka principal deserialization failure during request forwarding.
An object to define the options used when printing a KStream.
A simple wrapper around UUID that abstracts a Process ID
Processor context interface.
A processor of key-value pair records.
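A sketch of a low-level Processor (the class name and threshold logic are hypothetical); it also illustrates the Punctuator and PunctuationType entries later in this list:

```java
import java.time.Duration;

import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

// Forwards only records whose value exceeds a threshold, and logs a
// heartbeat once a minute via a wall-clock Punctuator.
public class ThresholdProcessor implements Processor<String, Long, String, Long> {
    private ProcessorContext<String, Long> context;

    @Override
    public void init(ProcessorContext<String, Long> context) {
        this.context = context;
        context.schedule(Duration.ofMinutes(1), PunctuationType.WALL_CLOCK_TIME,
                timestamp -> System.out.println("heartbeat at " + timestamp));
    }

    @Override
    public void process(Record<String, Long> record) {
        if (record.value() != null && record.value() > 100L) {
            context.forward(record); // pass the record downstream unchanged
        }
    }
}
```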
Deprecated.
Since 3.0.
Processor context interface for Record.
Processor context interface.
Indicates a processor state operation (e.g., put or get) has failed.
A processor supplier that can create one or more Processor instances.
Deprecated.
Since 3.0.
This class is used to provide the optional parameters when producing to new topics using KStream.to(String, Produced).
The interface for the KafkaProducer
Configuration for the Kafka Producer.
This fatal exception indicates that another producer with the same transactional.id has been started.
A plugin interface that allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster.
A key/value pair to be sent to Kafka.
 
Interface that specifies how an exception when attempting to produce a result to Kafka should be handled.
 
Controls what notion of time is used for punctuation scheduled via ProcessorContext.schedule(Duration, PunctuationType, Punctuator): STREAM_TIME uses "stream time", which is advanced by the processing of messages in accordance with the timestamp as extracted by the TimestampExtractor in use.
A functional interface used as an argument to ProcessorContext.schedule(Duration, PunctuationType, Punctuator).
Marker interface that all interactive queries must implement (see KafkaStreams.query(StateQueryRequest)).
Used to enable querying of custom StateStore types via the KafkaStreams API.
Provides access to the QueryableStoreTypes provided with KafkaStreams.
 
 
 
Runtime configuration parameters
Container for a single partition's result when executing a StateQueryRequest.
This class is used to describe the state of the quorum received in DescribeQuorumResponse.
 
An upper or lower bound for metrics
Thrown when a sensor records a value that causes a metric to go outside the bounds configured as its quota
The range assignor works on a per-topic basis.
Interactive query for issuing range queries and scans over KeyValue stores.
The rate of the given quantity.
A key-value store that only supports read operations.
A session store that only supports read operations.
A window store that only supports read operations.
Thrown if a request cannot be completed because a partition reassignment is in progress.
 
Interface for reconfigurable classes that support dynamic configuration.
A data class representing an incoming record for processing in a Processor or a record to forward to downstream processors via ProcessorContext.
This record batch is larger than the maximum allowable size
The context associated with the current record being processed by a Processor
This exception is raised for any error that occurs while deserializing records received by the consumer using the configured Deserializer.
 
The metadata for a record that has been acknowledged by the server
 
Typical implementations of this interface convert data from an `InputStream` received via `readRecords` into an iterator of `ProducerRecord` instances.
Describes records to delete in a call to Admin.deleteRecords(Map). The API of this class is evolving; see Admin for details.
This record is larger than the maximum allowable size
The Reducer interface for combining two values of the same type into a new value.
Convenience methods for multi-cluster environments.
Base class for remote log metadata objects like RemoteLogSegmentMetadata, RemoteLogSegmentMetadataUpdate, and RemotePartitionDeleteMetadata.
This interface provides storing and fetching remote log segment metadata with strongly consistent semantics.
This class represents a universally unique identifier associated to a topic partition's log segment.
It describes the metadata about a topic partition's remote log segment in the remote storage.
Custom metadata from a RemoteStorageManager plugin.
It describes the metadata update about the log segment in the remote storage.
This enum indicates the state of the remote log segment.
This class represents the metadata about the remote partition.
This enum indicates the deletion state of the remote topic partition.
Exception thrown when a resource is not found on the remote storage.
Exception thrown when there is a remote storage error.
This interface provides the lifecycle of remote log segments that includes copy, fetch, and delete from remote storage.
Type of the index file.
This class contains the metrics related to the tiered storage feature, providing a centralized place to store them so that they can all be verified easily.
This class is used to provide the optional parameters for internal repartition topics.
A description of a replica on a particular broker.
The replica is not available for the requested topic partition.
Defines which topics are "remote topics".
Represents a cluster resource with a tuple of (type, name).
Exception thrown due to a request for a resource that does not exist.
Represents a pattern that is used by ACLs to match zero or more Resources.
Represents a filter that can match ResourcePattern.
Represents a type of resource which an ACL can be applied to.
 
 
A retriable exception is a transient exception that if retried may succeed.
An exception that indicates the operation can be reattempted.
An interface that allows developers to customize the RocksDB settings for a given Store.
The round robin assignor lays out all the available partitions and all the available consumers.
The "Round-Robin" partitioner This partitioning strategy can be used when user wants to distribute the writes to all partitions equally.
A SampledStat records a single scalar value measured over one or more samples.
 
This exception indicates that SASL authentication has failed.
 
A simple immutable value object class holding customizable SASL extensions.
Optional callback used for SASL mechanisms if any extensions need to be set in the SASL exchange.
Definition of an abstract data type.
The type of a schema.
A composite containing a Schema and associated value
SchemaBuilder provides a fluent API for constructing Schema objects.
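A short sketch of the fluent SchemaBuilder API together with a conforming Struct (the schema name and fields are hypothetical):

```java
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class SchemaBuilderExample {
    public static void main(String[] args) {
        // Build a schema for a simple user record.
        Schema userSchema = SchemaBuilder.struct().name("com.example.User")
                .field("name", Schema.STRING_SCHEMA)
                .field("age", Schema.OPTIONAL_INT32_SCHEMA)
                .build();
        // Populate a Struct that conforms to the schema.
        Struct user = new Struct(userSchema)
                .put("name", "alice")
                .put("age", 30);
        System.out.println(user);
    }
}
```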
Indicates an error while building a schema via SchemaBuilder
SchemaProjector is a utility to project a value between compatible schemas and throw exceptions when incompatible schemas are provided.
Indicates an error while projecting a schema via SchemaProjector
A simple source connector that is capable of producing static data with Struct schemas.
 
SCRAM credential class that encapsulates the credential data persisted for each user that is accessible to the server.
Callback used for SCRAM mechanisms.
Mechanism and iterations for a SASL/SCRAM credential associated with a user.
Optional callback used for SCRAM mechanisms if any extensions need to be set in the SASL/SCRAM exchange.
 
Representation of a SASL/SCRAM Mechanism.
Contains the common security config for SSL and SASL
An error indicating that security is disabled on the broker.
 
An interface for generating security providers.
A sensor applies a continuous sequence of numerical values to a set of associated metrics.
 
The interface for wrapping a serializer and deserializer for the given data type.
Factory for creating serializers / deserializers.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
Any exception during serialization in the producer
An interface for converting objects to bytes.
A store supplier that can be used to create one or more SessionStore<Byte, byte[]> instances.
Interface for storing the aggregated values of sessions.
SessionWindowedCogroupKStream is an abstraction of a windowed record stream of KeyValue pairs.
 
SessionWindowedKStream is an abstraction of a windowed record stream of KeyValue pairs.
 
A session based window specification used for aggregating events into sessions.
 
 
A HeaderConverter that serializes header values as strings and that deserializes header values to the most appropriate numeric, boolean, array, or map representation.
A simple rate; the rate is incrementally calculated based on the elapsed time between the earliest reading and now.
SinkConnectors implement the Connector interface to send Kafka data to another system.
A context to allow a SinkConnector to interact with the Kafka Connect runtime.
SinkRecord is a ConnectRecord that has been read from Kafka and includes the original Kafka record's topic, partition and offset (before any transformations have been applied) in addition to the standard fields.
SinkTask is a Task that takes records loaded from Kafka and sends them to another system.
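A minimal SinkTask sketch (the class name is hypothetical; a real task would deliver records to the external system rather than stdout):

```java
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

import java.util.Collection;
import java.util.Map;

// Writes each record's value to stdout.
public class StdoutSinkTask extends SinkTask {
    @Override
    public String version() {
        return "1.0";
    }

    @Override
    public void start(Map<String, String> props) {
        // Read task configuration here.
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            System.out.println(record.value());
        }
    }

    @Override
    public void stop() {
        // Release resources here.
    }
}
```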
Context passed to SinkTasks, allowing them to access utilities in the Kafka Connect runtime.
A sliding window used for aggregating events.
 
Directional pair of clusters, where source is replicated to target.
SourceConnectors implement the connector interface to pull data from another system and send it to Kafka.
A context to allow a SourceConnector to interact with the Kafka Connect runtime.
SourceRecords are generated by SourceTasks and passed to Kafka Connect for storage in Kafka.
SourceTask is a Task that pulls records from another system for storage in Kafka.
Represents the permitted values for the SourceTask.TRANSACTION_BOUNDARY_CONFIG property.
SourceTaskContext is provided to SourceTasks to allow them to interact with the underlying runtime.
 
This exception indicates that SSL handshake has failed.
Describes whether the server should require or request client authentication.
 
Plugin interface for allowing creation of an SSLEngine object in a custom way.
 
The StaleMemberEpochException is used in the context of the new consumer group protocol (KIP-848).
 
 
A Stat is a quantity, such as an average or a max, that is computed off the stream of updates to a sensor.
The request object for Interactive Queries.
A progressive builder interface for creating StoreQueryRequests.
The response object for interactive queries.
Restoration logic for log-backed state stores upon restart; it takes one record at a time from the logs to apply to the restoring state.
Class for listening to various states of the restoration process of a StateStore.
Factory for creating serializers / deserializers for state stores in Kafka Streams.
A storage engine for managing state maintained by a stream processor.
State store context interface.
Indicates that the state store being queried is closed although the Kafka Streams state is RUNNING or REBALANCING.
Indicates that the state store being queried is already closed.
The sticky assignor serves two purposes.
 
Build a StateStore wrapped with optional caching and logging.
StoreQueryParameters allows you to pass a variety of parameters when fetching a store for interactive query.
Factory for creating state stores in Kafka Streams.
A state store supplier which can create one or more StateStore instances.
Class used to configure the name of the join processor, the repartition topic name, state stores or state store names in Stream-Stream join.
Determine how records are distributed among the partitions in a Kafka topic.
StreamsBuilder provides the high-level Kafka Streams DSL to specify a Kafka Streams topology.
Configuration for a KafkaStreams instance.
 
StreamsException is the top-level exception type generated by Kafka Streams, and indicates errors have occurred during a StreamThread's processing.
Deprecated.
Since 3.0.0, use StreamsMetadata instead.
Metadata of a Kafka Streams client.
The Kafka Streams metrics interface for adding metric sensors and collecting metric values.
Indicates that Kafka Streams is in state CREATED and thus state stores cannot be queried yet.
Indicates that Kafka Streams is in state REBALANCING and thus cannot be queried by default.
Indicates that Kafka Streams is in a terminating or terminal state, such as KafkaStreams.State.PENDING_SHUTDOWN, KafkaStreams.State.PENDING_ERROR, KafkaStreams.State.NOT_RUNNING, or KafkaStreams.State.ERROR.
 
Enumeration that describes the response from the exception handler.
Converter and HeaderConverter implementation that only supports serializing to strings.
Configuration options for StringConverter instances.
The string decoder translates bytes into strings.
String encoding defaults to UTF8 and can be customized by setting the property key.deserializer.encoding, value.deserializer.encoding or deserializer.encoding.
String encoding defaults to UTF8 and can be customized by setting the property key.serializer.encoding, value.serializer.encoding or serializer.encoding.
A structured record containing a set of named fields with values, each field using an independent Schema.
The subscribed topic describer is used by the PartitionAssignor to obtain topic and partition metadata of the subscribed topics.
The subscription type followed by a consumer group.
Represents a range of versions that a particular broker supports for some feature.
 
 
Marker interface for a buffer configuration that will strictly enforce size constraints (bytes and/or number of records) on the buffer, so it is suitable for reducing duplicate results downstream, but does not promise to eliminate them entirely.
Marker interface for a buffer configuration that is "strict" in the sense that it will strictly enforce the time bound and never emit early.
The TableJoined class represents optional parameters that can be passed to KTable#join(KTable,Function,...) and KTable#leftJoin(KTable,Function,...) operations, for foreign key joins.
Tasks contain the code that actually copies data to/from another system.
Indicates a runtime error incurred while trying to assign stream tasks to threads.
A set of utilities to help implement task assignment via the TaskAssignor
 
A simple config container for necessary parameters and optional overrides to apply when running the active or standby task rack-aware optimizations.
A TaskAssignor is responsible for creating a TaskAssignment from a given ApplicationState.
Assignment error codes: NONE (no error detected); ACTIVE_TASK_ASSIGNED_MULTIPLE_TIMES (multiple KafkaStreams clients assigned the same active task); INVALID_STANDBY_TASK (stateless task assigned as a standby task); MISSING_PROCESS_ID (a ProcessId present in the input ApplicationState was not present in the output TaskAssignment); UNKNOWN_PROCESS_ID (unrecognized ProcessId not matching any of the participating consumers); UNKNOWN_TASK_ID (unrecognized TaskId not matching any of the tasks to be assigned).
Wrapper class for the final assignment of active and standby tasks to individual KafkaStreams clients.
Indicates a specific task is corrupted and needs to be re-initialized.
The task ID representation composed as subtopology (aka topicGroupId) plus the assigned partition ID.
Indicates a runtime error incurred while trying to parse the task ID from the read string.
A simple container class corresponding to a given TaskId.
Deprecated.
Since 3.0, use TaskMetadata instead.
Metadata of a task.
Indicates that all tasks belonging to the thread have migrated to another thread.
Describes the state, IDs, and any errors of a connector task.
This is a simple container class used during the assignment process to distinguish between types of TopicPartitions.
This exception indicates that the size of the telemetry metrics data is too large.
TestInputTopic is used to pipe records to a topic in TopologyTestDriver.
TestOutputTopic is used to read records from a topic in TopologyTestDriver.
A key/value pair, including timestamp and record headers, to be sent to or received from TopologyTestDriver.
Deprecated.
Since 3.0, use ThreadMetadata instead.
Metadata of a stream thread.
Exception thrown if an operation on a resource exceeds the throttling quota.
A time representing a specific point in a day, not tied to any specific date.
Indicates that a request timed out.
A timestamp representing an absolute time, without timezone information.
 
Interactive query for retrieving a single record based on its key from TimestampedKeyValueStore
A key-(value/timestamp) store that supports put/get/delete and range queries.
Interactive query for issuing range queries and scans over TimestampedKeyValueStore
Interface for storing the aggregated values of fixed-size time windows.
An interface that allows the Kafka Streams framework to extract a timestamp from an instance of ConsumerRecord.
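A sketch of a custom extractor (the OrderEvent payload type and its field are hypothetical):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Reads event time from a hypothetical payload type named OrderEvent and
// falls back to the current partition time otherwise.
public class OrderTimestampExtractor implements TimestampExtractor {

    // Hypothetical event payload type for illustration.
    public static class OrderEvent {
        public final long eventTimeMs;
        public OrderEvent(long eventTimeMs) { this.eventTimeMs = eventTimeMs; }
    }

    @Override
    public long extract(ConsumerRecord<Object, Object> record, long partitionTime) {
        Object value = record.value();
        if (value instanceof OrderEvent) {
            return ((OrderEvent) value).eventTimeMs;
        }
        return partitionTime; // fallback when the payload carries no timestamp
    }
}
```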
TimeWindowedCogroupKStream is an abstraction of a windowed record stream of KeyValue pairs.
 
TimeWindowedKStream is an abstraction of a windowed record stream of KeyValue pairs.
 
The fixed-size time-based window specifications used for aggregations.
This class is used to provide the optional parameters when sending output records to a downstream processor using ProcessorContext.forward(Object, Object, To).
The TokenBucket is a MeasurableStat implementing a token bucket algorithm that is usable within a Sensor.
A class representing delegation token details.
 
A class used to represent a collection of topics.
A class used to represent a collection of topics defined by their topic ID.
A class used to represent a collection of topics defined by their topic name.
Keys that can be used to configure a topic.
 
A detailed description of a single topic in the cluster.
 
This represents a universally unique identifier with a topic ID for a topic partition.
A listing of a topic in the cluster.
An interface that allows dynamically determining the name of the Kafka topic to send to at the sink node of the topology.
A topic name and partition number
A class containing leadership, replicas and ISR information for a topic partition.
The topic name, partition number and the brokerId of the replica
A logical representation of a ProcessorTopology.
Sets the auto.offset.reset configuration when adding a source processor or when creating KStream or KTable via StreamsBuilder.
Streams configs that apply at the topology level.
 
A meta representation of a topology.
Represents a global store.
A node of a topology.
A processor node of a topology.
A sink node of a topology.
A source node of a topology.
A connected sub-graph of a Topology.
Indicates a pre-runtime error occurred while parsing the logical topology to construct the physical processor topology.
This class makes it easier to write tests to verify the behavior of topologies created with Topology or StreamsBuilder.
 
The exception thrown when aborting any undrained batches during a transaction that is aborted without any underlying cause, which likely means that the user chose to abort.
 
 
Provided to source tasks to allow them to define their own producer transaction boundaries when exactly-once support is enabled.
 
 
 
 
Single message transformation for Kafka Connect record types.
The Transformer interface is for stateful mapping of an input record to zero, one, or multiple new output records (both key and value type can be altered arbitrarily).
A TransformerSupplier interface which can create one or more Transformer instances.
Exception thrown when attempting to define a credential that does not meet the criteria for acceptability (for example, attempting to create a SCRAM credential with an empty username or password or too few/many iterations).
Deprecated.
Since 3.3.0, in order to use the default partitioning logic, remove the partitioner.class configuration setting and set partitioner.ignore.keys=true.
 
The request contained a leader epoch which is larger than that on the broker that received the request.
 
This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question.
An error occurred on the server for which the client doesn't have a corresponding error code.
Indicates that the state store being queried is unknown, i.e., the state store does either not exist in your topology or it is not queryable.
This exception indicates that the client sent an invalid or outdated SubscriptionId
 
This topic/partition doesn't exist.
Indicates that the NamedTopology being looked up does not exist in this application
The unlimited window specifications used for aggregations.
 
Exception thrown when there are unstable offsets for the requested topic partitions.
 
Authentication mechanism does not support the requested function.
The requesting client does not support the compression type of given partition.
 
The message format version does not support the requested function.
This exception indicates that the SASL mechanism requested by the client is not enabled on the broker.
Indicates that a request API or version needed by the client is not supported by the broker.
Retrieves embedded metadata timestamps from Kafka messages.
A request to alter a user's SASL/SCRAM credentials.
A request to delete a SASL/SCRAM credential for a user.
Representation of all SASL/SCRAM credentials associated with a user that can be retrieved, or an exception indicating why credentials could not be retrieved.
A request to update/insert a SASL/SCRAM credential for a user.
This class defines an immutable universally unique identifier (UUID).
We are converting the byte array to String before deserializing to UUID.
We are converting UUID to String before serializing.
An instantaneous value.
Combines a value from a KeyValue with a timestamp.
The ValueJoiner interface for joining two values into a new value of arbitrary type.
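A short stream-table join sketch using a ValueJoiner (topic names are placeholders; default serdes are assumed to be configured):

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.ValueJoiner;

public class JoinExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders");
        KTable<String, String> customers = builder.table("customers");
        // The ValueJoiner combines each order with the matching customer
        // record into a single output value.
        ValueJoiner<String, String, String> joiner =
                (order, customer) -> order + " placed by " + customer;
        orders.join(customers, joiner).to("enriched-orders");
    }
}
```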
The ValueJoinerWithKey interface for joining two values into a new value of arbitrary type.
The ValueMapper interface for mapping a value to a new value of arbitrary type.
The ValueMapperWithKey interface for mapping a value to a new value of arbitrary type.
Utility for converting from one Connect value to a different form.
The ValueTransformer interface for stateful mapping of a value to a new value (with possible new type).
A ValueTransformerSupplier interface which can create one or more ValueTransformer instances.
The ValueTransformerWithKey interface for stateful mapping of a value to a new value (with possible new type).
A ValueTransformerWithKeySupplier interface which can create one or more ValueTransformerWithKey instances.
 
Counterpart to VerifiableSourceTask that consumes records and logs information about each to stdout.
 
A connector primarily intended for system tests.
Connect requires some components to implement this interface to define a version string.
A representation of a versioned key-value store as a KeyValueStore of type <Bytes, byte[]>.
A store supplier that can be used to create one or more versioned key-value stores, specifically, VersionedBytesStore instances.
Interactive query for retrieving a single record from a versioned state store based on its key and timestamp.
A key-value store that stores multiple record versions per key, and supports timestamp-based retrieval operations to return the latest record (per key) as of a specified timestamp.
Combines a value (from a key-value record) with a timestamp, for use as the return type from VersionedKeyValueStore.get(Object, long) and related methods.
Iterator interface of VersionedRecord.
 
 
Exception used to indicate preemption of a blocking operation by an external thread.
Retrieves current wall clock timestamps as System.currentTimeMillis().
A single window instance, defined by its start and end timestamp.
A store supplier that can be used to create one or more WindowStore<Byte, byte[]> instances.
The result key type of a windowed stream aggregation.
A SampledStat that maintains a simple count of what it has seen.
 
 
 
A SampledStat that maintains the sum of what it has seen.
 
 
The window specification for fixed size windows that is used to define window boundaries and grace period.
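A windowed-aggregation sketch over tumbling time windows (topic name is a placeholder; the window size and grace period are illustrative):

```java
import java.time.Duration;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class WindowedCountExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // Count events per key in tumbling 5-minute windows with a
        // 1-minute grace period for late-arriving records.
        KTable<Windowed<String>, Long> counts = builder
                .stream("events", Consumed.with(Serdes.String(), Serdes.String()))
                .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
                .windowedBy(TimeWindows.ofSizeAndGrace(Duration.ofMinutes(5), Duration.ofMinutes(1)))
                .count();
        counts.toStream().foreach((windowedKey, count) ->
                System.out.println(windowedKey + " -> " + count));
    }
}
```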
Interface for storing the aggregated values of fixed-size time windows.
Iterator interface of KeyValue with key typed Long, used for WindowStore.fetch(Object, long, long) and WindowStore.fetch(Object, Instant, Instant). Users must call its close method explicitly upon completion to release resources, or use a try-with-resources statement for this Closeable class.