All Classes
Class | Description |
---|---|
AbortTransactionOptions | |
AbortTransactionResult | |
AbortTransactionSpec | |
AbstractConfig |
A convenient base class for configurations to extend.
|
AbstractOptions<T extends AbstractOptions> | |
AbstractProcessor<K,V> | Deprecated.
Since 3.0.
|
AbstractState |
Provides the current status along with the identifier for the Connect worker and tasks.
|
AccessControlEntry |
Represents an access control entry.
|
AccessControlEntryFilter |
Represents a filter which matches access control entries.
|
AclBinding |
Represents a binding between a resource pattern and an access control entry.
|
AclBindingFilter |
A filter which can match AclBinding objects.
|
AclCreateResult | |
AclDeleteResult | |
AclDeleteResult.AclBindingDeleteResult |
Delete result for each ACL binding that matched a delete filter.
|
AclOperation |
Represents an operation which an ACL grants or denies permission to perform.
|
AclPermissionType |
Represents whether an ACL grants or denies permissions.
|
Action | |
Admin |
The administrative client for Kafka, which supports managing and inspecting topics, brokers, configurations and ACLs.
|
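For orientation, a minimal sketch (not taken from the Javadoc itself) of creating a topic with the Admin client; the broker address and topic settings below are illustrative assumptions.

```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class AdminExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed broker address for illustration only.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Create a topic with 3 partitions and replication factor 1, then wait for the result.
            NewTopic topic = new NewTopic("example-topic", 3, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```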
AdminClient |
The base class for in-built admin clients.
|
AdminClientConfig |
The AdminClient configuration class, which also contains constants for configuration entry names.
|
Aggregator<K,V,VA> |
The
Aggregator interface for aggregating values of the given key. |
AlreadyExistsException |
Indicates the operation tried to create an entity that already exists.
|
AlterClientQuotasOptions | |
AlterClientQuotasResult |
The result of the
Admin.alterClientQuotas(Collection, AlterClientQuotasOptions) call. |
AlterConfigOp |
A class representing an alter configuration entry containing a name, value, and operation type.
|
AlterConfigOp.OpType | |
AlterConfigPolicy |
An interface for enforcing a policy on alter configs requests.
|
AlterConfigPolicy.RequestMetadata |
Class containing the create request parameters.
|
AlterConfigsOptions |
Options for
Admin.incrementalAlterConfigs(Map) and Admin.alterConfigs(Map) . |
AlterConfigsResult |
The result of the
Admin.alterConfigs(Map) call. |
AlterConsumerGroupOffsetsOptions |
Options for the
Admin.alterConsumerGroupOffsets(String, Map, AlterConsumerGroupOffsetsOptions) call. |
AlterConsumerGroupOffsetsResult |
The result of the
Admin.alterConsumerGroupOffsets(String, Map) call. |
AlterPartitionReassignmentsOptions |
Options for
Admin.alterPartitionReassignments(Map, AlterPartitionReassignmentsOptions)
The API of this class is evolving. |
AlterPartitionReassignmentsResult | |
AlterReplicaLogDirsOptions | |
AlterReplicaLogDirsResult |
The result of
Admin.alterReplicaLogDirs(Map, AlterReplicaLogDirsOptions) . |
AlterUserScramCredentialsOptions |
Options for
Admin.alterUserScramCredentials(List, AlterUserScramCredentialsOptions)
The API of this class is evolving. |
AlterUserScramCredentialsResult |
The result of the
Admin.alterUserScramCredentials(List) call. |
ApiException |
Any API exception that is part of the public protocol should be a subclass of this class and be part of this package.
|
AuthenticateCallbackHandler | |
AuthenticationContext |
An object representing contextual information from the authentication session.
|
AuthenticationException |
This exception indicates that SASL authentication has failed.
|
AuthorizableRequestContext |
Request context interface that provides data from request header as well as connection
and authentication information to plugins.
|
AuthorizationException | |
AuthorizationResult | |
Authorizer |
Pluggable authorizer interface for Kafka brokers.
|
AuthorizerServerInfo |
Runtime broker configuration metadata provided to authorizers during start up.
|
Avg |
A
SampledStat that maintains a simple average over its samples. |
BatchingStateRestoreCallback |
Interface for batching restoration of a
StateStore
It is expected that implementations of this class will not call the StateRestoreCallback.restore(byte[],
byte[]) method. |
Branched<K,V> |
The
Branched class is used to define the optional parameters when building branches with
BranchedKStream . |
BranchedKStream<K,V> |
Branches the records in the original stream based on the predicates supplied for the branch definitions.
|
BrokerIdNotRegisteredException | |
BrokerNotAvailableException | |
BrokerNotFoundException |
Indicates that none of the specified
brokers
could be found. |
BufferExhaustedException |
This exception is thrown if the producer cannot allocate memory for a record within max.block.ms due to the buffer
being too full.
|
ByteArrayDeserializer | |
ByteArraySerializer | |
ByteBufferDeserializer | |
ByteBufferSerializer | |
BytesDeserializer | |
BytesSerializer | |
Callback |
A callback interface that the user can implement to allow code to execute when the request is complete.
|
Cancellable |
Cancellable interface returned in
ProcessorContext.schedule(Duration, PunctuationType, Punctuator) . |
Checkpoint |
Checkpoint records emitted from MirrorCheckpointConnector.
|
ClientQuotaAlteration |
Describes a configuration alteration to be made to a client quota entity.
|
ClientQuotaAlteration.Op | |
ClientQuotaCallback |
Quota callback interface for brokers that enables customization of client quota computation.
|
ClientQuotaEntity |
Describes a client quota entity, which is a mapping of entity types to their names.
|
ClientQuotaEntity |
The metadata for an entity for which quota is configured.
|
ClientQuotaEntity.ConfigEntity |
Interface representing a quota configuration entity.
|
ClientQuotaEntity.ConfigEntityType |
Entity type of a
ClientQuotaEntity.ConfigEntity |
ClientQuotaFilter |
Describes a client quota entity filter.
|
ClientQuotaFilterComponent |
Describes a component for applying a client quota filter.
|
ClientQuotaType |
Types of quotas that may be configured on brokers for client requests.
|
Cluster |
An immutable representation of a subset of the nodes, topics, and partitions in the Kafka cluster.
|
ClusterAuthorizationException | |
ClusterResource |
The
ClusterResource class encapsulates metadata for a Kafka cluster. |
ClusterResourceListener |
A callback interface that users can implement when they wish to get notified about changes in the Cluster metadata.
|
CogroupedKStream<K,VOut> |
CogroupedKStream is an abstraction of multiple grouped record streams of KeyValue pairs. |
CommitFailedException |
This exception is raised when an offset commit with
KafkaConsumer.commitSync() fails
with an unrecoverable error. |
CompoundStat |
A compound stat is a stat where a single measurement and associated data structure feeds many metrics.
|
CompoundStat.NamedMeasurable | |
ConcurrentTransactionsException | |
Config |
A configuration object containing the configuration entries for a resource.
|
Config | |
ConfigChangeCallback |
A callback passed to
ConfigProvider for subscribing to changes. |
ConfigData |
Configuration data from a
ConfigProvider . |
ConfigDef |
This class is used for specifying the set of expected configurations.
|
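As a sketch of how ConfigDef and AbstractConfig are typically combined, here is a hypothetical plugin configuration class; the key name, default, and documentation string are assumptions for illustration.

```java
import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;

import java.util.Map;

// Hypothetical plugin configuration; not a class from the Kafka API.
public class MyPluginConfig extends AbstractConfig {
    public static final String BATCH_SIZE_CONFIG = "my.plugin.batch.size";

    private static final ConfigDef CONFIG_DEF = new ConfigDef()
            .define(BATCH_SIZE_CONFIG,
                    ConfigDef.Type.INT,
                    100,                          // default value
                    ConfigDef.Range.atLeast(1),   // validator: must be >= 1
                    ConfigDef.Importance.MEDIUM,
                    "Maximum number of records per batch.");

    public MyPluginConfig(Map<?, ?> originals) {
        super(CONFIG_DEF, originals);
    }

    public int batchSize() {
        return getInt(BATCH_SIZE_CONFIG);
    }
}
```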
ConfigDef.CaseInsensitiveValidString | |
ConfigDef.CompositeValidator | |
ConfigDef.ConfigKey | |
ConfigDef.Importance |
The importance level for a configuration
|
ConfigDef.LambdaValidator | |
ConfigDef.NonEmptyString | |
ConfigDef.NonEmptyStringWithoutControlChars | |
ConfigDef.NonNullValidator | |
ConfigDef.Range |
Validation logic for numeric ranges
|
ConfigDef.Recommender |
This is used by the
ConfigDef.validate(Map) to get valid values for a configuration given the current
configuration values in order to perform full configuration validation and visibility modification. |
ConfigDef.Type |
The config types
|
ConfigDef.Validator |
Validation logic the user may provide to perform single configuration validation.
|
ConfigDef.ValidList | |
ConfigDef.ValidString | |
ConfigDef.Width |
The width of a configuration value
|
ConfigEntry |
A class representing a configuration entry containing name, value and additional metadata.
|
ConfigEntry.ConfigSource |
Source of configuration entries.
|
ConfigEntry.ConfigSynonym |
Class representing a configuration synonym of a
ConfigEntry . |
ConfigEntry.ConfigType |
Data type of configuration entry.
|
ConfigException |
Thrown if the user supplies an invalid configuration
|
ConfigProvider |
A provider of configuration data, which may optionally support subscriptions to configuration changes.
|
ConfigResource |
A class representing resources that have configs.
|
ConfigResource.Type |
Type of resource.
|
ConfigTransformer |
This class wraps a set of
ConfigProvider instances and uses them to perform
transformations. |
ConfigTransformerResult |
The result of a transformation from
ConfigTransformer . |
Configurable |
A Mix-in style interface for classes that are instantiated by reflection and need to take configuration parameters
|
ConfigValue | |
ConnectClusterDetails |
Provides immutable Connect cluster information, such as the ID of the backing Kafka cluster.
|
ConnectClusterState |
Provides the ability to lookup connector metadata, including status and configurations, as well
as immutable cluster information such as Kafka cluster ID.
|
ConnectedStoreProvider |
Provides a set of
StoreBuilder s that will be automatically added to the topology and connected to the
associated processor. |
ConnectException |
ConnectException is the top-level exception type generated by Kafka Connect and connector implementations.
|
ConnectHeaders |
A basic
Headers implementation. |
Connector |
Connectors manage integration of Kafka Connect with another system, either as an input that ingests
data into Kafka or an output that passes data to an external system.
|
ConnectorClientConfigOverridePolicy |
An interface for enforcing a policy on overriding of client configs via the connector configs.
|
ConnectorClientConfigRequest | |
ConnectorClientConfigRequest.ClientType | |
ConnectorContext |
ConnectorContext allows Connectors to proactively interact with the Kafka Connect runtime.
|
ConnectorHealth |
Provides basic health information about the connector and its tasks.
|
ConnectorState |
Describes the status, worker ID, and any errors associated with a connector.
|
ConnectorType |
Enum definition that identifies the type of the connector.
|
ConnectorUtils |
Utilities that connector implementations might find useful.
|
ConnectRecord<R extends ConnectRecord<R>> |
Base class for records containing data to be copied to/from Kafka.
|
ConnectRestExtension |
A plugin interface to allow registration of new JAX-RS resources like Filters, REST endpoints, providers, etc.
|
ConnectRestExtensionContext |
The interface provides the ability for
ConnectRestExtension implementations to access the JAX-RS
Configurable and cluster state ConnectClusterState . |
ConnectSchema | |
Consumed<K,V> |
The
Consumed class is used to define the optional parameters when using StreamsBuilder to
build instances of KStream , KTable , and GlobalKTable . |
Consumer<K,V> | |
ConsumerConfig |
The consumer configuration keys
|
ConsumerGroupDescription |
A detailed description of a single consumer group in the cluster.
|
ConsumerGroupListing |
A listing of a consumer group in the cluster.
|
ConsumerGroupMetadata |
A metadata struct containing the consumer group information.
|
ConsumerGroupState |
The consumer group state.
|
ConsumerInterceptor<K,V> |
A plugin interface that allows you to intercept (and possibly mutate) records received by the consumer.
|
ConsumerPartitionAssignor |
This interface is used to define custom partition assignment for use in
KafkaConsumer . |
ConsumerPartitionAssignor.Assignment | |
ConsumerPartitionAssignor.GroupAssignment | |
ConsumerPartitionAssignor.GroupSubscription | |
ConsumerPartitionAssignor.RebalanceProtocol |
The rebalance protocol defines partition assignment and revocation semantics.
|
ConsumerPartitionAssignor.Subscription | |
ConsumerRebalanceListener |
A callback interface that the user can implement to trigger custom actions when the set of partitions assigned to the
consumer changes.
|
ConsumerRecord<K,V> |
A key/value pair to be received from Kafka.
|
ConsumerRecords<K,V> |
A container that holds a list of
ConsumerRecord per partition for a
particular topic. |
ContextualProcessor<KIn,VIn,KOut,VOut> |
An abstract implementation of
Processor that manages the ProcessorContext instance and provides default no-op
implementation of Processor.close() . |
ControllerMovedException | |
Converter |
The Converter interface provides support for translating between Kafka Connect's runtime data format
and byte[].
|
ConverterConfig |
Abstract class that defines the configuration options for
Converter and HeaderConverter instances. |
ConverterType |
The type of
Converter and HeaderConverter . |
CooperativeStickyAssignor |
A cooperative version of the
AbstractStickyAssignor . |
CoordinatorLoadInProgressException |
In the context of the group coordinator, the broker returns this error code for any coordinator request if
it is still loading the group metadata (e.g.
|
CoordinatorNotAvailableException |
In the context of the group coordinator, the broker returns this error code for metadata or offset commit
requests if the group metadata topic has not been created yet.
|
CorruptRecordException |
This exception indicates a record has failed its internal CRC check, this generally indicates network or disk
corruption.
|
CreateAclsOptions |
Options for
Admin.createAcls(Collection) . |
CreateAclsResult |
The result of the
Admin.createAcls(Collection) call. |
CreateDelegationTokenOptions | |
CreateDelegationTokenResult |
The result of the
KafkaAdminClient.createDelegationToken(CreateDelegationTokenOptions) call. |
CreatePartitionsOptions |
Options for
Admin.createPartitions(Map) . |
CreatePartitionsResult |
The result of the
Admin.createPartitions(Map) call. |
CreateTopicPolicy |
An interface for enforcing a policy on create topics requests.
|
CreateTopicPolicy.RequestMetadata |
Class containing the create request parameters.
|
CreateTopicsOptions |
Options for
Admin.createTopics(Collection) . |
CreateTopicsResult |
The result of
Admin.createTopics(Collection) . |
CreateTopicsResult.TopicMetadataAndConfig | |
CumulativeCount |
A non-sampled version of
WindowedCount maintained over all time. |
CumulativeSum |
A non-sampled cumulative total maintained over all time.
|
DataException |
Base class for all Kafka Connect data API exceptions.
|
Date |
A date representing a calendar day with no time of day or timezone.
|
Decimal |
An arbitrary-precision signed decimal number.
|
DefaultProductionExceptionHandler |
ProductionExceptionHandler that always instructs streams to fail when an exception
happens while attempting to produce result records. |
DefaultReplicationPolicy |
Defines remote topics like "us-west.topic1".
|
DelegationToken |
A class representing a delegation token.
|
DelegationTokenAuthorizationException | |
DelegationTokenDisabledException | |
DelegationTokenExpiredException | |
DelegationTokenNotFoundException | |
DelegationTokenOwnerMismatchException | |
DeleteAclsOptions |
Options for the
Admin.deleteAcls(Collection) call. |
DeleteAclsResult |
The result of the
Admin.deleteAcls(Collection) call. |
DeleteAclsResult.FilterResult |
A class containing either the deleted ACL binding or an exception if the delete failed.
|
DeleteAclsResult.FilterResults |
A class containing the results of the delete ACLs operation.
|
DeleteConsumerGroupOffsetsOptions |
Options for the
Admin.deleteConsumerGroupOffsets(String, Set) call. |
DeleteConsumerGroupOffsetsResult |
The result of the
Admin.deleteConsumerGroupOffsets(String, Set) call. |
DeleteConsumerGroupsOptions |
Options for the
Admin.deleteConsumerGroups(Collection) call. |
DeleteConsumerGroupsResult |
The result of the
Admin.deleteConsumerGroups(Collection) call. |
DeletedRecords |
Represents information about deleted records.
The API for this class is still evolving and we may break compatibility in minor releases, if necessary.
|
DeleteRecordsOptions |
Options for
Admin.deleteRecords(Map, DeleteRecordsOptions) . |
DeleteRecordsResult |
The result of the
Admin.deleteRecords(Map) call. |
DeleteTopicsOptions |
Options for
Admin.deleteTopics(Collection) . |
DeleteTopicsResult |
The result of the
Admin.deleteTopics(Collection) call. |
DescribeAclsOptions |
Options for
Admin.describeAcls(AclBindingFilter) . |
DescribeAclsResult |
The result of the
Admin.describeAcls(AclBindingFilter) call. |
DescribeClientQuotasOptions | |
DescribeClientQuotasResult |
The result of the
Admin.describeClientQuotas(ClientQuotaFilter, DescribeClientQuotasOptions) call. |
DescribeClusterOptions |
Options for
Admin.describeCluster() . |
DescribeClusterResult |
The result of the
Admin.describeCluster() call. |
DescribeConfigsOptions |
Options for
Admin.describeConfigs(Collection) . |
DescribeConfigsResult |
The result of the
Admin.describeConfigs(Collection) call. |
DescribeConsumerGroupsOptions | |
DescribeConsumerGroupsResult |
The result of the
KafkaAdminClient.describeConsumerGroups(Collection, DescribeConsumerGroupsOptions) } call. |
DescribeDelegationTokenOptions | |
DescribeDelegationTokenResult |
The result of the
KafkaAdminClient.describeDelegationToken(DescribeDelegationTokenOptions) call. |
DescribeFeaturesOptions |
Options for
Admin.describeFeatures(DescribeFeaturesOptions) . |
DescribeFeaturesResult |
The result of the
Admin.describeFeatures(DescribeFeaturesOptions) call. |
DescribeLogDirsOptions |
Options for
Admin.describeLogDirs(Collection)
The API of this class is evolving, see Admin for details. |
DescribeLogDirsResult |
The result of the
Admin.describeLogDirs(Collection) call. |
DescribeProducersOptions |
Options for
Admin.describeProducers(Collection) . |
DescribeProducersResult | |
DescribeProducersResult.PartitionProducerState | |
DescribeReplicaLogDirsOptions |
Options for
Admin.describeReplicaLogDirs(Collection) . |
DescribeReplicaLogDirsResult |
The result of
Admin.describeReplicaLogDirs(Collection) . |
DescribeReplicaLogDirsResult.ReplicaLogDirInfo | |
DescribeTopicsOptions |
Options for
Admin.describeTopics(Collection) . |
DescribeTopicsResult |
The result of the
Admin.describeTopics(Collection) call. |
DescribeTransactionsOptions |
Options for
Admin.describeTransactions(Collection) . |
DescribeTransactionsResult | |
DescribeUserScramCredentialsOptions |
Options for
Admin.describeUserScramCredentials(List, DescribeUserScramCredentialsOptions)
The API of this class is evolving. |
DescribeUserScramCredentialsResult |
The result of the
Admin.describeUserScramCredentials() call. |
DeserializationExceptionHandler |
Interface that specifies how an exception from source node deserialization
(e.g., reading from Kafka) should be handled.
|
DeserializationExceptionHandler.DeserializationHandlerResponse |
Enumeration that describes the response from the exception handler.
|
Deserializer<T> |
An interface for converting bytes to objects.
|
DirectoryConfigProvider |
An implementation of
ConfigProvider based on a directory of files. |
DisconnectException |
Server disconnected before a request could be completed.
|
DoubleDeserializer | |
DoubleSerializer | |
DuplicateBrokerRegistrationException | |
DuplicateResourceException |
Exception thrown due to a request that illegally refers to the same resource twice
(for example, trying to both create and delete the same SCRAM credential for a particular user in a single request).
|
DuplicateSequenceException | |
ElectionNotNeededException | |
ElectionType | |
ElectLeadersOptions | |
ElectLeadersResult |
The result of
Admin.electLeaders(ElectionType, Set, ElectLeadersOptions)
The API of this class is evolving, see Admin for details. |
EligibleLeadersNotAvailableException | |
Endpoint |
Represents a broker endpoint.
|
ErrantRecordReporter |
Component that the sink task can use while processing records in SinkTask.put(java.util.Collection). |
ExpireDelegationTokenOptions | |
ExpireDelegationTokenResult |
The result of the
KafkaAdminClient.expireDelegationToken(byte[], ExpireDelegationTokenOptions) call. |
FailOnInvalidTimestamp |
Retrieves embedded metadata timestamps from Kafka messages.
|
FeatureMetadata |
Encapsulates details about finalized as well as supported features.
|
FeatureUpdate |
Encapsulates details about an update to a finalized feature.
|
FeatureUpdateFailedException | |
FencedInstanceIdException | |
FencedLeaderEpochException |
The request contained a leader epoch which is smaller than that on the broker that received the
request.
|
FetchSessionIdNotFoundException | |
Field | |
FileConfigProvider |
An implementation of
ConfigProvider that represents a Properties file. |
FinalizedVersionRange |
Represents a range of version levels supported by every broker in a cluster for some feature.
|
FloatDeserializer | |
FloatSerializer | |
ForeachAction<K,V> |
The
ForeachAction interface for performing an action on a key-value
pair . |
ForeachProcessor<K,V> | |
Frequencies |
A
CompoundStat that represents a normalized distribution with a Frequency metric for each
bucketed value. |
Frequency |
Definition of a frequency metric used in a
Frequencies compound statistic. |
Gauge<T> |
A gauge metric is an instantaneous reading of a particular value.
|
GlobalKTable<K,V> |
GlobalKTable is an abstraction of a changelog stream from a primary-keyed table. |
GroupAuthorizationException | |
Grouped<K,V> |
The class that is used to capture the key and value
Serde s and set the part of name used for
repartition topics when performing KStream.groupBy(KeyValueMapper, Grouped) , KStream.groupByKey(Grouped) , or KTable.groupBy(KeyValueMapper, Grouped) operations. |
GroupIdNotFoundException | |
GroupMaxSizeReachedException |
Indicates that a consumer group is already at its configured maximum capacity and cannot accommodate more members
|
GroupNotEmptyException | |
GroupSubscribedToTopicException | |
Header | |
Header |
A
Header is a key-value pair, and multiple headers can be included with the key, value, and timestamp in each Kafka message. |
HeaderConverter | |
Headers | |
Headers |
A mutable ordered collection of
Header objects. |
Headers.HeaderTransform |
A function to transform the supplied
Header . |
Heartbeat |
Heartbeat message sent from MirrorHeartbeatTask to target cluster.
|
Histogram | |
Histogram.BinScheme |
An algorithm for determining the bin in which a value is to be placed as well as calculating the upper end
of each bin.
|
Histogram.ConstantBinScheme |
A scheme for calculating the bins where the width of each bin is a constant determined by the range of values
and the number of bins.
|
Histogram.LinearBinScheme |
A scheme for calculating the bins where the width of each bin is one more than the previous bin, and therefore
the bin widths are increasing at a linear rate.
|
HostInfo |
Represents a user defined endpoint in a
KafkaStreams application. |
IdentityReplicationPolicy |
IdentityReplicationPolicy does not rename remote topics.
|
IllegalGenerationException | |
IllegalSaslStateException |
This exception indicates unexpected requests prior to SASL authentication.
|
IllegalWorkerStateException |
Indicates that a method has been invoked illegally or at an invalid time by a connector or task.
|
InconsistentClusterIdException | |
InconsistentGroupProtocolException | |
InconsistentTopicIdException | |
InconsistentVoterSetException | |
Initializer<VA> |
The
Initializer interface for creating an initial value in aggregations. |
IntegerDeserializer | |
IntegerSerializer | |
InterfaceStability |
Annotation to inform users of how much to rely on a particular package, class or method not changing over time.
|
InterfaceStability.Evolving |
Compatibility may be broken at minor release (i.e.
|
InterfaceStability.Stable |
Compatibility is maintained in major, minor and patch releases with one exception: compatibility may be broken
in a major release (i.e.
|
InterfaceStability.Unstable |
No guarantee is provided as to reliability or stability across any level of release granularity.
|
InterruptException |
An unchecked wrapper for InterruptedException
|
InvalidCommitOffsetSizeException | |
InvalidConfigurationException | |
InvalidFetchSessionEpochException | |
InvalidFetchSizeException | |
InvalidGroupIdException | |
InvalidMetadataException |
An exception that may indicate the client's metadata is out of date
|
InvalidOffsetException |
Thrown when the offset for a set of partitions is invalid (either undefined or out of range),
and no reset policy has been configured.
|
InvalidOffsetException |
Thrown when the offset for a set of partitions is invalid (either undefined or out of range),
and no reset policy has been configured.
|
InvalidPartitionsException | |
InvalidPidMappingException | |
InvalidPrincipalTypeException | |
InvalidProducerEpochException |
This exception indicates that the produce request sent to the partition leader
contains a non-matching producer epoch.
|
InvalidRecordException | |
InvalidReplicaAssignmentException | |
InvalidReplicationFactorException | |
InvalidRequestException |
Thrown when a request breaks basic wire protocol rules.
|
InvalidRequiredAcksException | |
InvalidSessionTimeoutException | |
InvalidStateStoreException |
Indicates that there was a problem when trying to access a
StateStore . |
InvalidStateStorePartitionException |
Indicates that the specific state store being queried via
StoreQueryParameters used a partitioning that is not assigned to this instance. |
InvalidTimestampException |
Indicate the timestamp of a record is invalid.
|
InvalidTopicException |
The client has attempted to perform an operation on an invalid topic.
|
InvalidTxnStateException | |
InvalidTxnTimeoutException |
The transaction coordinator returns this error code if the timeout received via the InitProducerIdRequest is larger than
the `transaction.max.timeout.ms` config value.
|
InvalidUpdateVersionException | |
IsolationLevel | |
JmxReporter |
Register metrics in JMX as dynamic mbeans based on the metric names
|
Joined<K,V,VO> |
The
Joined class represents optional params that can be passed to
KStream#join(KTable,...) and
KStream#leftJoin(KTable,...) operations. |
JoinWindows |
The window specifications used for joins.
|
KafkaAdminClient |
The default implementation of
Admin . |
KafkaClientSupplier |
KafkaClientSupplier can be used to provide custom Kafka clients to a KafkaStreams instance. |
KafkaConsumer<K,V> |
A client that consumes records from a Kafka cluster.
|
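A minimal consumption sketch, assuming a topic named "example-topic", a group id, and a local broker; none of these values come from the index above.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");           // assumed group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("example-topic"));
            // Poll once for brevity; real applications poll in a loop.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d key=%s value=%s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}
```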
KafkaException |
The base class of all other Kafka exceptions
|
KafkaFuture<T> |
A flexible future which supports call chaining and other asynchronous programming patterns.
|
KafkaFuture.BaseFunction<A,B> |
A function which takes objects of type A and returns objects of type B.
|
KafkaFuture.BiConsumer<A,B> |
A consumer of two different types of object.
|
KafkaFuture.Function<A,B> | Deprecated.
Since Kafka 3.0.
|
KafkaMetric | |
KafkaMetricsContext |
An implementation of MetricsContext that encapsulates the required metrics context properties for Kafka services and clients.
|
KafkaPrincipal |
Principals in Kafka are defined by a type and a name.
|
KafkaPrincipalBuilder |
Pluggable principal builder interface which supports both SSL authentication through
SslAuthenticationContext and SASL through SaslAuthenticationContext . |
KafkaPrincipalSerde |
Serializer/Deserializer interface for
KafkaPrincipal for the purpose of inter-broker forwarding. |
KafkaProducer<K,V> |
A Kafka client that publishes records to the Kafka cluster.
|
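A minimal publishing sketch under the same assumptions (local broker, string key and value, a topic named "example-topic").

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ProducerExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous; get() blocks for the broker acknowledgment here for simplicity.
            RecordMetadata metadata =
                    producer.send(new ProducerRecord<>("example-topic", "key", "value")).get();
            System.out.printf("wrote to %s-%d at offset %d%n",
                    metadata.topic(), metadata.partition(), metadata.offset());
        }
    }
}
```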
KafkaStorageException |
Miscellaneous disk-related IOException occurred when handling a request.
|
KafkaStreams |
A Kafka client that allows for performing continuous computation on input coming from one or more input topics and
sends output to zero, one, or more output topics.
|
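A minimal topology sketch built with StreamsBuilder and run by KafkaStreams; the application id, broker address, and topic names are assumptions for illustration.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class StreamsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-app");       // assumed app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Read from an input topic, upper-case the values, and write to an output topic.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(value -> value.toUpperCase()).to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```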
KafkaStreams.State |
Kafka Streams states are the possible state that a Kafka Streams instance can be in.
|
KafkaStreams.StateListener |
Listen to
KafkaStreams.State change events. |
KeyQueryMetadata |
Represents all the metadata related to a key, where a particular key resides in a
KafkaStreams application. |
KeyValue<K,V> |
A key-value pair defined for a single Kafka Streams record.
|
KeyValueBytesStoreSupplier |
A store supplier that can be used to create one or more
KeyValueStore<Bytes, byte[]> instances of type <Bytes, byte[]>. |
KeyValueIterator<K,V> |
Iterator interface of
KeyValue . |
KeyValueMapper<K,V,VR> |
The
KeyValueMapper interface for mapping a key-value pair to a new value of arbitrary type. |
KeyValueStore<K,V> |
A key-value store that supports put/get/delete and range queries.
|
KGroupedStream<K,V> |
KGroupedStream is an abstraction of a grouped record stream of KeyValue pairs. |
KGroupedTable<K,V> |
KGroupedTable is an abstraction of a re-grouped changelog stream from a primary-keyed table,
usually on a different grouping key than the original primary key. |
KStream<K,V> |
KStream is an abstraction of a record stream of KeyValue pairs, i.e., each record is an
independent entity/event in the real world. |
KTable<K,V> |
KTable is an abstraction of a changelog stream from a primary-keyed table. |
LagInfo |
Encapsulates information about lag, at a store partition replica (active or standby).
|
LeaderNotAvailableException |
There is no currently available leader for the given partition (either because a leadership election is in progress
or because all replicas are down).
|
ListConsumerGroupOffsetsOptions |
Options for
Admin.listConsumerGroupOffsets(String) . |
ListConsumerGroupOffsetsResult |
The result of the
Admin.listConsumerGroupOffsets(String) call. |
ListConsumerGroupsOptions |
Options for
Admin.listConsumerGroups() . |
ListConsumerGroupsResult |
The result of the
Admin.listConsumerGroups() call. |
ListDeserializer<Inner> | |
ListenerNotFoundException |
The leader does not have an endpoint corresponding to the listener on which metadata was requested.
|
ListOffsetsOptions |
Options for
Admin.listOffsets(Map) . |
ListOffsetsResult |
The result of the
Admin.listOffsets(Map) call. |
ListOffsetsResult.ListOffsetsResultInfo | |
ListPartitionReassignmentsOptions |
Options for
Admin.listPartitionReassignments(ListPartitionReassignmentsOptions)
The API of this class is evolving. |
ListPartitionReassignmentsResult | |
ListSerializer<Inner> | |
ListTopicsOptions |
Options for
Admin.listTopics() . |
ListTopicsResult |
The result of the
Admin.listTopics() call. |
ListTransactionsOptions |
Options for
Admin.listTransactions() . |
ListTransactionsResult |
The result of the
Admin.listTransactions() call. |
LockException |
Indicates that the state store directory lock could not be acquired because another thread holds the lock.
|
LogAndContinueExceptionHandler |
Deserialization handler that logs a deserialization exception and then
signals the processing pipeline to continue processing more records.
|
LogAndFailExceptionHandler |
Deserialization handler that logs a deserialization exception and then
signals the processing pipeline to stop processing more records and fail.
|
LogAndSkipOnInvalidTimestamp |
Retrieves embedded metadata timestamps from Kafka messages.
|
LogDirDescription |
A description of a log directory on a particular broker.
|
LogDirNotFoundException |
Thrown when a request is made for a log directory that is not present on the broker
|
Login |
Login interface for authentication.
|
LogLevelConfig |
This class holds definitions for log level configurations related to Kafka's application logging.
|
LogSegmentData |
This represents all the required data and indexes for a specific log segment that needs to be stored in the remote
storage.
|
LogTruncationException |
In the event of an unclean leader election, the log will be truncated,
previously committed data will be lost, and new data will be written
over these offsets.
|
LongDeserializer | |
LongSerializer | |
Materialized<K,V,S extends StateStore> |
Used to describe how a
StateStore should be materialized. |
Max |
A
SampledStat that gives the max over its samples. |
Measurable |
A measurable quantity that can be registered as a metric
|
MeasurableStat |
A MeasurableStat is a
Stat that is also Measurable (i.e. |
MemberAssignment |
A description of the assignments of a specific group member.
|
MemberDescription |
A detailed description of a single group instance in the cluster.
|
MemberIdRequiredException | |
MemberToRemove |
A struct containing information about the member to be removed.
|
Merger<K,V> |
The interface for merging aggregate values for
SessionWindows with the given key. |
MessageFormatter |
This interface allows users to define Formatters that can be used to parse and format records read by a
Consumer instance for display.
|
Meter |
A compound stat that includes a rate metric and a cumulative total metric.
|
Metric |
A metric tracked for monitoring purposes.
|
MetricConfig |
Configuration values for metrics
|
MetricName |
The
MetricName class encapsulates a metric's name, logical group and its related attributes. |
MetricNameTemplate |
A template for a MetricName.
|
Metrics |
A registry of sensors and metrics.
|
MetricsContext |
MetricsContext encapsulates additional contextLabels about metrics exposed via a MetricsReporter. The contextLabels map provides the following information: a _namespace field indicating the component exposing metrics, e.g. |
MetricsReporter |
A plugin interface to allow things to listen as new metrics are created so they can be reported.
|
MetricValueProvider<T> |
Super-interface for
Measurable or Gauge that provides
metric values. |
Min |
A
SampledStat that gives the min over its samples. |
MirrorClient |
Interprets MM2's internal topics (checkpoints, heartbeats) on a given cluster.
|
MirrorClientConfig |
Configuration required for MirrorClient to talk to a given target cluster.
|
MissingSourceTopicException | |
MockConsumer<K,V> |
A mock of the
Consumer interface you can use for testing code that uses Kafka. |
MockProcessorContext<KForward,VForward> |
MockProcessorContext is a mock of ProcessorContext for users to test their Processor ,
Transformer , and ValueTransformer implementations. |
MockProcessorContext |
MockProcessorContext is a mock of ProcessorContext for users to test their Processor ,
Transformer , and ValueTransformer implementations. |
MockProcessorContext.CapturedForward<K,V> | |
MockProcessorContext.CapturedForward | |
MockProcessorContext.CapturedPunctuator |
MockProcessorContext.CapturedPunctuator holds captured punctuators, along with their scheduling information. |
MockProcessorContext.CapturedPunctuator |
MockProcessorContext.CapturedPunctuator holds captured punctuators, along with their scheduling information. |
MockProducer<K,V> |
A mock of the producer interface you can use for testing code that uses Kafka.
|
Named | |
NetworkException |
A misc. network-related IOException occurred when making a request.
|
NewPartitionReassignment |
A new partition reassignment, which can be applied via
Admin.alterPartitionReassignments(Map, AlterPartitionReassignmentsOptions) . |
NewPartitions |
Describes new partitions for a particular topic in a call to
Admin.createPartitions(Map) . |
NewTopic |
A new topic to be created via
Admin.createTopics(Collection) . |
Node |
Information about a Kafka node
|
NoOffsetForPartitionException |
Indicates that there is no stored offset for a partition and no defined offset
reset policy.
|
NoReassignmentInProgressException |
Thrown if a reassignment cannot be cancelled because none is in progress.
|
NotControllerException | |
NotCoordinatorException |
In the context of the group coordinator, the broker returns this error code if it receives an offset fetch
or commit request for a group it's not the coordinator of.
|
NotEnoughReplicasAfterAppendException |
Number of insync replicas for the partition is lower than min.insync.replicas. This exception is raised when the low
ISR size is discovered *after* the message was already appended to the log.
|
NotEnoughReplicasException |
Number of insync replicas for the partition is lower than min.insync.replicas
|
NotFoundException |
Indicates that an operation attempted to modify or delete a connector or task that is not present on the worker.
|
NotLeaderForPartitionException | Deprecated.
since 2.6.
|
NotLeaderOrFollowerException |
Broker returns this error if a request could not be processed because the broker is not the leader
or follower for a topic partition.
|
OAuthBearerExtensionsValidatorCallback |
A
Callback for use by the SaslServer implementation when it
needs to validate the SASL extensions for the OAUTHBEARER mechanism
Callback handlers should use the OAuthBearerExtensionsValidatorCallback.valid(String)
method to communicate valid extensions back to the SASL server. |
OAuthBearerLoginModule |
The
LoginModule for the SASL/OAUTHBEARER mechanism. |
OAuthBearerToken |
The
b64token value as defined in
RFC 6750 Section
2.1 along with the token's specific scope and lifetime and principal
name. |
OAuthBearerTokenCallback |
A
Callback for use by the SaslClient and Login
implementations when they require an OAuth 2 bearer token. |
OAuthBearerValidatorCallback |
A
Callback for use by the SaslServer implementation when it
needs to provide an OAuth 2 bearer token compact serialization for
validation. |
OffsetAndMetadata |
The Kafka offset commit API allows users to provide additional metadata (in the form of a string)
when an offset is committed.
|
OffsetAndTimestamp |
A container class for offset and timestamp.
|
OffsetCommitCallback |
A callback interface that the user can implement to trigger custom actions when a commit request completes.
|
OffsetMetadataTooLarge |
The client has tried to save its offset with associated metadata larger than the maximum size allowed by the server.
|
OffsetNotAvailableException |
Indicates that the leader is not able to guarantee monotonically increasing offsets
due to the high watermark lagging behind the epoch start offset after a recent leader election
|
OffsetOutOfRangeException |
No reset policy has been defined, and the offsets for these partitions are either larger or smaller
than the range of offsets the server has for the given partition.
|
OffsetOutOfRangeException |
No reset policy has been defined, and the offsets for these partitions are either larger or smaller
than the range of offsets the server has for the given partition.
|
OffsetResetStrategy | |
OffsetSpec |
This class allows users to specify the desired offsets when using
KafkaAdminClient.listOffsets(Map, ListOffsetsOptions) |
OffsetSpec.EarliestSpec | |
OffsetSpec.LatestSpec | |
OffsetSpec.MaxTimestampSpec | |
OffsetSpec.TimestampSpec | |
OffsetStorageReader |
OffsetStorageReader provides access to the offset storage used by sources.
|
OperationNotAttemptedException |
Indicates that the broker did not attempt to execute this operation.
|
OutOfOrderSequenceException |
This exception indicates that the broker received an unexpected sequence number from the producer,
which means that data may have been lost.
|
Partitioner |
Partitioner Interface
|
PartitionInfo |
This is used to describe per-partition state in the MetadataResponse.
|
PartitionReassignment |
A partition reassignment, which has been listed via
Admin.listPartitionReassignments() . |
PatternType |
Resource pattern type.
|
Percentile | |
Percentiles |
A compound stat that reports one or more percentiles
|
Percentiles.BucketSizing | |
PlainAuthenticateCallback | |
PlainLoginModule | |
PlaintextAuthenticationContext | |
PolicyViolationException |
Exception thrown if a create topics request does not satisfy the configured policy for a topic.
|
PositionOutOfRangeException | |
Predicate<R extends ConnectRecord<R>> |
A predicate on records.
|
Predicate<K,V> |
The
Predicate interface represents a predicate (boolean-valued function) of a KeyValue pair. |
PreferredLeaderNotAvailableException | |
PrincipalDeserializationException |
Exception used to indicate a kafka principal deserialization failure during request forwarding.
|
Printed<K,V> |
An object to define the options used when printing a
KStream . |
Processor<KIn,VIn,KOut,VOut> |
A processor of key-value pair records.
|
Processor<K,V> | Deprecated.
Since 3.0.
|
ProcessorContext<KForward,VForward> |
Processor context interface.
|
ProcessorContext |
Processor context interface.
|
ProcessorStateException |
Indicates a processor state operation (e.g.
|
ProcessorSupplier<KIn,VIn,KOut,VOut> |
A processor supplier that can create one or more
Processor instances. |
ProcessorSupplier<K,V> | Deprecated.
Since 3.0.
|
Produced<K,V> |
This class is used to provide the optional parameters when producing to new topics
using
KStream.to(String, Produced) . |
Producer<K,V> |
The interface for the
KafkaProducer |
ProducerConfig |
Configuration for the Kafka Producer.
|
ProducerFencedException |
This fatal exception indicates that another producer with the same
transactional.id has been
started. |
ProducerInterceptor<K,V> |
A plugin interface that allows you to intercept (and possibly mutate) the records received by the producer before
they are published to the Kafka cluster.
|
ProducerRecord<K,V> |
A key/value pair to be sent to Kafka.
|
ProducerState | |
ProductionExceptionHandler |
Interface that specifies how an exception when attempting to produce a result to
Kafka should be handled.
|
ProductionExceptionHandler.ProductionExceptionHandlerResponse | |
PunctuationType |
Controls what notion of time is used for punctuation scheduled via
ProcessorContext.schedule(Duration, PunctuationType, Punctuator) :
STREAM_TIME - uses "stream time", which is advanced by the processing of messages
in accordance with the timestamp as extracted by the TimestampExtractor in use. |
Punctuator |
A functional interface used as an argument to
ProcessorContext.schedule(Duration, PunctuationType, Punctuator) . |
QueryableStoreType<T> |
Used to enable querying of custom
StateStore types via the KafkaStreams API. |
QueryableStoreTypes |
Provides access to the
QueryableStoreType s provided with KafkaStreams . |
QueryableStoreTypes.KeyValueStoreType<K,V> | |
QueryableStoreTypes.SessionStoreType<K,V> | |
QueryableStoreTypes.WindowStoreType<K,V> | |
Quota |
An upper or lower bound for metrics
|
QuotaViolationException |
Thrown when a sensor records a value that causes a metric to go outside the bounds configured as its quota
|
RangeAssignor |
The range assignor works on a per-topic basis.
|
Rate |
The rate of the given quantity.
|
ReadOnlyKeyValueStore<K,V> |
A key-value store that only supports read operations.
|
ReadOnlySessionStore<K,AGG> |
A session store that only supports read operations.
|
ReadOnlyWindowStore<K,V> |
A window store that only supports read operations.
|
ReassignmentInProgressException |
Thrown if a request cannot be completed because a partition reassignment is in progress.
|
RebalanceInProgressException | |
Reconfigurable |
Interface for reconfigurable classes that support dynamic configuration.
|
Record<K,V> |
A data class representing an incoming record for processing in a
Processor
or a record to forward to downstream processors via ProcessorContext . |
RecordBatchTooLargeException |
This record batch is larger than the maximum allowable size
|
RecordContext |
The context associated with the current record being processed by
a
Processor |
RecordDeserializationException |
This exception is raised for any error that occurs while deserializing records received by the consumer using
the configured
Deserializer . |
RecordMetadata |
The metadata for a record that has been acknowledged by the server
|
RecordMetadata | |
RecordsToDelete |
Describe records to delete in a call to
Admin.deleteRecords(Map)
The API of this class is evolving, see Admin for details. |
RecordTooLargeException |
This record is larger than the maximum allowable size
|
Reducer<V> |
The
Reducer interface for combining two values of the same type into a new value. |
RemoteClusterUtils |
Convenience methods for multi-cluster environments.
|
RemoteLogMetadata |
Base class for remote log metadata objects like
RemoteLogSegmentMetadata , RemoteLogSegmentMetadataUpdate ,
and RemotePartitionDeleteMetadata . |
RemoteLogMetadataManager |
This interface provides storing and fetching remote log segment metadata with strongly consistent semantics.
|
RemoteLogSegmentId |
This class represents a universally unique identifier associated with a topic partition's log segment.
|
RemoteLogSegmentMetadata |
It describes the metadata about a topic partition's remote log segment in the remote storage.
|
RemoteLogSegmentMetadataUpdate |
It describes the metadata update about the log segment in the remote storage.
|
RemoteLogSegmentState |
This enum indicates the state of the remote log segment.
|
RemotePartitionDeleteMetadata |
This class represents the metadata about the remote partition.
|
RemotePartitionDeleteState |
This enum indicates the deletion state of the remote topic partition.
|
RemoteResourceNotFoundException |
Exception thrown when a resource is not found on the remote storage.
|
RemoteStorageException |
Exception thrown when there is a remote storage error.
|
RemoteStorageManager |
This interface provides the lifecycle of remote log segments that includes copy, fetch, and delete from remote
storage.
|
RemoteStorageManager.IndexType |
Type of the index file.
|
RemoveMembersFromConsumerGroupOptions | |
RemoveMembersFromConsumerGroupResult |
The result of the
Admin.removeMembersFromConsumerGroup(String, RemoveMembersFromConsumerGroupOptions) call. |
RenewDelegationTokenOptions | |
RenewDelegationTokenResult |
The result of the
KafkaAdminClient.renewDelegationToken(byte[], RenewDelegationTokenOptions) call. |
Repartitioned<K,V> |
This class is used to provide the optional parameters for internal repartition topics.
|
ReplicaInfo |
A description of a replica on a particular broker.
|
ReplicaNotAvailableException |
The replica is not available for the requested topic partition.
|
ReplicationPolicy |
Defines which topics are "remote topics".
|
Resource |
Represents a cluster resource with a tuple of (type, name).
|
ResourceNotFoundException |
Exception thrown due to a request for a resource that does not exist.
|
ResourcePattern |
Represents a pattern that is used by ACLs to match zero or more
Resources . |
ResourcePatternFilter |
Represents a filter that can match
ResourcePattern . |
ResourceType |
Represents a type of resource which an ACL can be applied to.
|
RetriableCommitFailedException | |
RetriableException |
A retriable exception is a transient exception that if retried may succeed.
|
RetriableException |
An exception that indicates the operation can be reattempted.
|
RocksDBConfigSetter |
An interface that allows developers to customize the RocksDB settings for a given Store.
|
RoundRobinAssignor |
The round robin assignor lays out all the available partitions and all the available consumers.
|
RoundRobinPartitioner |
The "Round-Robin" partitioner
This partitioning strategy can be used when the user wants
to distribute writes to all partitions equally.
|
SampledStat |
A SampledStat records a single scalar value measured over one or more samples.
|
SampledStat.Sample | |
SaslAuthenticationContext | |
SaslAuthenticationException |
This exception indicates that SASL authentication has failed.
|
SaslConfigs | |
SaslExtensions |
A simple immutable value object class holding customizable SASL extensions
|
SaslExtensionsCallback |
Optional callback used for SASL mechanisms if any extensions need to be set
in the SASL exchange.
|
Schema |
Definition of an abstract data type.
|
Schema.Type |
The type of a schema.
|
SchemaAndValue | |
SchemaBuilder |
SchemaBuilder provides a fluent API for constructing
Schema objects. |
SchemaBuilderException | |
SchemaProjector |
SchemaProjector is a utility to project a value between compatible schemas and throws exceptions
when incompatible schemas are provided.
|
SchemaProjectorException | |
ScramCredential |
SCRAM credential class that encapsulates the credential data persisted for each user that is
accessible to the server.
|
ScramCredentialCallback |
Callback used for SCRAM mechanisms.
|
ScramCredentialInfo |
Mechanism and iterations for a SASL/SCRAM credential associated with a user.
|
ScramExtensionsCallback |
Optional callback used for SCRAM mechanisms if any extensions need to be set
in the SASL/SCRAM exchange.
|
ScramLoginModule | |
ScramMechanism |
Representation of a SASL/SCRAM Mechanism.
|
SecurityConfig |
Contains the common security config for SSL and SASL
|
SecurityDisabledException |
An error indicating that security is disabled on the broker.
|
SecurityProtocol | |
SecurityProviderCreator |
An interface for generating security providers.
|
Sensor |
A sensor applies a continuous sequence of numerical values to a set of associated metrics.
|
Sensor.RecordingLevel | |
Serde<T> |
The interface for wrapping a serializer and deserializer for the given data type.
|
Serdes |
Factory for creating serializers / deserializers.
|
Serdes.ByteArraySerde | |
Serdes.ByteBufferSerde | |
Serdes.BytesSerde | |
Serdes.DoubleSerde | |
Serdes.FloatSerde | |
Serdes.IntegerSerde | |
Serdes.ListSerde<Inner> | |
Serdes.LongSerde | |
Serdes.ShortSerde | |
Serdes.StringSerde | |
Serdes.UUIDSerde | |
Serdes.VoidSerde | |
Serdes.WrapperSerde<T> | |
SerializationException |
Any exception during serialization in the producer
|
Serializer<T> |
An interface for converting objects to bytes.
|
SessionBytesStoreSupplier |
A store supplier that can be used to create one or more
SessionStore<Byte, byte[]> instances. |
SessionStore<K,AGG> |
Interface for storing the aggregated values of sessions.
|
SessionWindowedCogroupedKStream<K,V> |
SessionWindowedCogroupedKStream is an abstraction of a windowed record stream of KeyValue pairs. |
SessionWindowedDeserializer<T> | |
SessionWindowedKStream<K,V> |
SessionWindowedKStream is an abstraction of a windowed record stream of KeyValue pairs. |
SessionWindowedSerializer<T> | |
SessionWindows |
A session based window specification used for aggregating events into sessions.
|
ShortDeserializer | |
ShortSerializer | |
SimpleHeaderConverter |
A
HeaderConverter that serializes header values as strings and that deserializes header values to the most appropriate
numeric, boolean, array, or map representation. |
SimpleRate |
A simple rate; the rate is incrementally calculated
based on the elapsed time between the earliest reading
and now.
|
SinkConnector |
SinkConnectors implement the Connector interface to send Kafka data to another system.
|
SinkConnectorContext |
A context to allow a
SinkConnector to interact with the Kafka Connect runtime. |
SinkRecord |
SinkRecord is a
ConnectRecord that has been read from Kafka and includes the kafkaOffset of
the record in the Kafka topic-partition in addition to the standard fields. |
SinkTask |
SinkTask is a Task that takes records loaded from Kafka and sends them to another system.
|
SinkTaskContext |
Context passed to SinkTasks, allowing them to access utilities in the Kafka Connect runtime.
|
SlidingWindows |
A sliding window used for aggregating events.
|
SnapshotNotFoundException | |
SourceAndTarget |
Directional pair of clusters, where source is replicated to target.
|
SourceConnector |
SourceConnectors implement the connector interface to pull data from another system and send
it to Kafka.
|
SourceConnectorContext |
A context to allow a
SourceConnector to interact with the Kafka Connect runtime. |
SourceRecord |
SourceRecords are generated by SourceTasks and passed to Kafka Connect for storage in
Kafka.
|
SourceTask |
SourceTask is a Task that pulls records from another system for storage in Kafka.
|
SourceTaskContext |
SourceTaskContext is provided to SourceTasks to allow them to interact with the underlying
runtime.
|
SslAuthenticationContext | |
SslAuthenticationException |
This exception indicates that SSL handshake has failed.
|
SslClientAuth |
Describes whether the server should require or request client authentication.
|
SslConfigs | |
SslEngineFactory |
Plugin interface for allowing creation of
SSLEngine object in a custom way. |
StaleBrokerEpochException | |
Stat |
A Stat is a quantity such as average, max, etc that is computed off the stream of updates to a sensor
|
StateRestoreCallback |
Restoration logic for log-backed state stores upon restart;
it takes one record at a time from the logs and applies it to the restoring state.
|
StateRestoreListener |
Class for listening to various states of the restoration process of a StateStore.
|
StateSerdes<K,V> |
Factory for creating serializers / deserializers for state stores in Kafka Streams.
|
StateStore |
A storage engine for managing state maintained by a stream processor.
|
StateStoreContext |
State store context interface.
|
StateStoreMigratedException |
Indicates that the state store being queried is closed although the Kafka Streams state is
RUNNING or
REBALANCING . |
StateStoreNotAvailableException |
Indicates that the state store being queried is already closed.
|
StickyAssignor |
The sticky assignor serves two purposes.
|
StoreBuilder<T extends StateStore> |
Build a
StateStore wrapped with optional caching and logging. |
StoreQueryParameters<T> |
StoreQueryParameters allows you to pass a variety of parameters when fetching a store for interactive query. |
Stores |
Factory for creating state stores in Kafka Streams.
|
StoreSupplier<T extends StateStore> |
A state store supplier which can create one or more
StateStore instances. |
StreamJoined<K,V1,V2> |
Class used to configure the name of the join processor, the repartition topic name,
state stores or state store names in Stream-Stream join.
|
StreamPartitioner<K,V> |
Determine how records are distributed among the partitions in a Kafka topic.
|
StreamsBuilder |
StreamsBuilder provides the high-level Kafka Streams DSL to specify a Kafka Streams topology. |
StreamsConfig |
Configuration for a
KafkaStreams instance. |
StreamsConfig.InternalConfig | |
StreamsException |
StreamsException is the top-level exception type generated by Kafka Streams. |
StreamsMetadata | Deprecated.
since 3.0.0 use
StreamsMetadata |
StreamsMetadata |
Metadata of a Kafka Streams client.
|
StreamsMetrics |
The Kafka Streams metrics interface for adding metric sensors and collecting metric values.
|
StreamsNotStartedException |
Indicates that Kafka Streams is in state
CREATED and thus state stores cannot be queried yet. |
StreamsRebalancingException |
Indicates that Kafka Streams is in state
REBALANCING and thus
cannot be queried by default. |
StreamsUncaughtExceptionHandler | |
StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse |
Enumeration that describes the response from the exception handler.
|
StringConverter |
Converter and HeaderConverter implementation that only supports serializing to strings. |
StringConverterConfig |
Configuration options for
StringConverter instances. |
StringDeserializer |
String encoding defaults to UTF8 and can be customized by setting the property key.deserializer.encoding,
value.deserializer.encoding or deserializer.encoding.
|
StringSerializer |
String encoding defaults to UTF8 and can be customized by setting the property key.serializer.encoding,
value.serializer.encoding or serializer.encoding.
|
Struct |
A structured record containing a set of named fields with values, each field using an independent
Schema . |
SupportedVersionRange |
Represents a range of versions that a particular broker supports for some feature.
|
Suppressed<K> | |
Suppressed.BufferConfig<BC extends Suppressed.BufferConfig<BC>> | |
Suppressed.EagerBufferConfig |
Marker interface for a buffer configuration that will strictly enforce size constraints
(bytes and/or number of records) on the buffer, so it is suitable for reducing duplicate
results downstream, but does not promise to eliminate them entirely.
|
Suppressed.StrictBufferConfig |
Marker interface for a buffer configuration that is "strict" in the sense that it will strictly
enforce the time bound and never emit early.
|
Task |
Tasks contain the code that actually copies data to/from another system.
|
TaskAssignmentException |
Indicates a run time error incurred while trying to assign
stream tasks to
threads . |
TaskCorruptedException |
Indicates that a specific task is corrupted and needs to be re-initialized.
|
TaskId |
The task ID representation composed as subtopology (aka topicGroupId) plus the assigned partition ID.
|
TaskIdFormatException |
Indicates a run time error incurred while trying to parse the
task id
from the read string. |
TaskMetadata | Deprecated.
since 3.0, use
TaskMetadata instead. |
TaskMetadata |
Metadata of a task.
|
TaskMigratedException |
Indicates that all tasks belonging to the thread have migrated to another thread.
|
TaskState |
Describes the state, IDs, and any errors of a connector task.
|
TestInputTopic<K,V> |
TestInputTopic is used to pipe records to topic in TopologyTestDriver . |
TestOutputTopic<K,V> |
TestOutputTopic is used to read records from a topic in TopologyTestDriver . |
TestRecord<K,V> |
A key/value pair, including timestamp and record headers, to be sent to or received from
TopologyTestDriver . |
ThreadMetadata | Deprecated.
since 3.0 use
ThreadMetadata instead |
ThreadMetadata |
Metadata of a stream thread.
|
ThrottlingQuotaExceededException |
Exception thrown if an operation on a resource exceeds the throttling quota.
|
Time |
A time representing a specific point in a day, not tied to any specific date.
|
TimeoutException |
Indicates that a request timed out.
|
Timestamp |
A timestamp representing an absolute time, without timezone information.
|
TimestampedBytesStore | |
TimestampedKeyValueStore<K,V> |
A key-(value/timestamp) store that supports put/get/delete and range queries.
|
TimestampedWindowStore<K,V> |
Interface for storing the aggregated values of fixed-size time windows.
|
TimestampExtractor |
An interface that allows the Kafka Streams framework to extract a timestamp from an instance of
ConsumerRecord . |
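A minimal sketch of a custom TimestampExtractor that falls back to the partition time when a record carries no valid timestamp:

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.streams.processor.TimestampExtractor;

    public class FallbackTimestampExtractor implements TimestampExtractor {
        @Override
        public long extract(ConsumerRecord<Object, Object> record, long partitionTime) {
            long timestamp = record.timestamp();
            // Negative timestamps are invalid; fall back to the highest timestamp observed so far.
            return timestamp >= 0 ? timestamp : partitionTime;
        }
    }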
TimeWindowedCogroupedKStream<K,V> |
TimeWindowedCogroupedKStream is an abstraction of a windowed record stream of KeyValue pairs. |
TimeWindowedDeserializer<T> | |
TimeWindowedKStream<K,V> |
TimeWindowedKStream is an abstraction of a windowed record stream of KeyValue pairs. |
TimeWindowedSerializer<T> | |
TimeWindows |
The fixed-size time-based window specifications used for aggregations.
|
To |
This class is used to provide the optional parameters when sending output records to downstream processor
using
ProcessorContext.forward(Object, Object, To) . |
TokenBucket |
The
TokenBucket is a MeasurableStat implementing a token bucket algorithm
that is usable within a Sensor . |
TokenInformation |
A class representing delegation token details.
|
TopicAuthorizationException | |
TopicCollection |
A class used to represent a collection of topics.
|
TopicCollection.TopicIdCollection |
A class used to represent a collection of topics defined by their topic ID.
|
TopicCollection.TopicNameCollection |
A class used to represent a collection of topics defined by their topic name.
|
TopicConfig |
Keys that can be used to configure a topic.
|
TopicDeletionDisabledException | |
TopicDescription |
A detailed description of a single topic in the cluster.
|
TopicExistsException | |
TopicIdPartition |
Represents a topic partition together with its universally unique topic ID.
|
TopicListing |
A listing of a topic in the cluster.
|
TopicNameExtractor<K,V> |
An interface that allows the Kafka topic that a record is sent to at a sink node of the topology to be determined dynamically.
|
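A minimal sketch of a TopicNameExtractor that routes records by a hypothetical key prefix; the prefix and topic names are illustrative assumptions. Such an extractor would typically be passed to KStream#to(TopicNameExtractor):

    import org.apache.kafka.streams.processor.RecordContext;
    import org.apache.kafka.streams.processor.TopicNameExtractor;

    public class KeyPrefixRouter implements TopicNameExtractor<String, String> {
        @Override
        public String extract(String key, String value, RecordContext recordContext) {
            // Route "vip-" keyed records to a dedicated topic; everything else to the default.
            return key != null && key.startsWith("vip-") ? "vip-events" : "regular-events";
        }
    }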
TopicPartition |
A topic name and partition number.
|
TopicPartitionInfo |
A class containing leadership, replicas and ISR information for a topic partition.
|
TopicPartitionReplica |
The topic name, partition number, and broker ID of the replica.
|
Topology |
A logical representation of a
ProcessorTopology . |
Topology.AutoOffsetReset |
Sets the
auto.offset.reset configuration when
adding a source processor or when creating KStream
or KTable via StreamsBuilder . |
TopologyDescription |
A meta representation of a
topology . |
TopologyDescription.GlobalStore |
Represents a
global store . |
TopologyDescription.Node |
A node of a topology.
|
TopologyDescription.Processor |
A processor node of a topology.
|
TopologyDescription.Sink |
A sink node of a topology.
|
TopologyDescription.Source |
A source node of a topology.
|
TopologyDescription.Subtopology |
A connected sub-graph of a
Topology . |
TopologyException |
Indicates a pre-runtime error that occurred while parsing the
logical topology
to construct the physical processor topology . |
TopologyTestDriver |
This class makes it easier to write tests to verify the behavior of topologies created with
Topology or
StreamsBuilder . |
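A minimal sketch, with a trivial pass-through topology and placeholder configuration values, of driving a topology under test:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.TestInputTopic;
    import org.apache.kafka.streams.TestOutputTopic;
    import org.apache.kafka.streams.TopologyTestDriver;

    public class TopologyTestSketch {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("in").to("out");                                    // assumed topic names

            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test-app");        // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");   // never contacted
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
                TestInputTopic<String, String> in =
                        driver.createInputTopic("in", new StringSerializer(), new StringSerializer());
                TestOutputTopic<String, String> out =
                        driver.createOutputTopic("out", new StringDeserializer(), new StringDeserializer());
                in.pipeInput("key", "value");
                System.out.println(out.readKeyValue());   // prints KeyValue(key, value)
            }
        }
    }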
TransactionAbortedException |
The exception thrown when any undrained batches are aborted during a transaction
that is aborted without an underlying cause, which likely means the user chose to abort.
|
TransactionalIdAuthorizationException | |
TransactionalIdNotFoundException | |
TransactionCoordinatorFencedException | |
TransactionDescription | |
TransactionListing | |
TransactionState | |
Transformation<R extends ConnectRecord<R>> |
Single message transformation for Kafka Connect record types.
|
Transformer<K,V,R> |
The
Transformer interface is for stateful mapping of an input record to zero, one, or multiple new output
records (both key and value type can be altered arbitrarily). |
TransformerSupplier<K,V,R> |
A
TransformerSupplier interface which can create one or more Transformer instances. |
UnacceptableCredentialException |
Exception thrown when attempting to define a credential that does not meet the criteria for acceptability
(for example, attempting to create a SCRAM credential with an empty username or password or too few/many iterations).
|
UniformStickyPartitioner |
The partitioning strategy: if a partition is specified in the record, use it;
otherwise, choose the sticky partition, which changes when the batch is full.
|
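A minimal sketch, with a placeholder broker address, of selecting this partitioner through the producer configuration:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.UniformStickyPartitioner;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class StickyPartitionerSketch {
        public static Properties producerConfig() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, UniformStickyPartitioner.class);
            return props;
        }
    }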
UnknownLeaderEpochException |
The request contained a leader epoch which is larger than that on the broker that received the
request.
|
UnknownMemberIdException | |
UnknownProducerIdException |
This exception is raised by the broker if it could not locate the producer metadata associated with the producerId
in question.
|
UnknownServerException |
An error occurred on the server for which the client doesn't have a corresponding error code.
|
UnknownStateStoreException |
Indicates that the state store being queried is unknown, i.e., the state store either does not exist in your topology
or is not queryable.
|
UnknownTopicIdException | |
UnknownTopicOrPartitionException |
This topic/partition doesn't exist.
|
UnlimitedWindows |
The unlimited window specifications used for aggregations.
|
UnregisterBrokerOptions |
Options for
Admin.unregisterBroker(int, UnregisterBrokerOptions) . |
UnregisterBrokerResult |
The result of the
Admin.unregisterBroker(int, UnregisterBrokerOptions) call. |
UnstableOffsetCommitException |
Exception thrown when there are unstable offsets for the requested topic partitions.
|
UnsupportedByAuthenticationException |
Authentication mechanism does not support the requested function.
|
UnsupportedCompressionTypeException |
The requesting client does not support the compression type of a given partition.
|
UnsupportedForMessageFormatException |
The message format version does not support the requested function.
|
UnsupportedSaslMechanismException |
This exception indicates that the SASL mechanism requested by the client
is not enabled on the broker.
|
UnsupportedVersionException |
Indicates that a request API or version needed by the client is not supported by the broker.
|
UpdateFeaturesOptions |
Options for
Admin.updateFeatures(Map, UpdateFeaturesOptions) . |
UpdateFeaturesResult |
The result of the
Admin.updateFeatures(Map, UpdateFeaturesOptions) call. |
UsePartitionTimeOnInvalidTimestamp |
Retrieves embedded metadata timestamps from Kafka messages.
|
UserScramCredentialAlteration |
A request to alter a user's SASL/SCRAM credentials.
|
UserScramCredentialDeletion |
A request to delete a SASL/SCRAM credential for a user.
|
UserScramCredentialsDescription |
Representation of all SASL/SCRAM credentials associated with a user that can be retrieved, or an exception indicating
why credentials could not be retrieved.
|
UserScramCredentialUpsertion |
A request to update/insert a SASL/SCRAM credential for a user.
|
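A minimal sketch, with placeholder broker address, user, and password, of upserting a SCRAM credential through the Admin client:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ScramCredentialInfo;
    import org.apache.kafka.clients.admin.ScramMechanism;
    import org.apache.kafka.clients.admin.UserScramCredentialUpsertion;

    public class ScramUpsertSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
            try (Admin admin = Admin.create(props)) {
                ScramCredentialInfo info =
                        new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_256, 8192);   // iteration count
                admin.alterUserScramCredentials(Collections.singletonList(
                                new UserScramCredentialUpsertion("alice", info, "secret-password")))
                     .all().get();   // block until the broker applies the change
            }
        }
    }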
Uuid |
This class defines an immutable universally unique identifier (UUID).
|
UUIDDeserializer |
Converts the byte array to a String before deserializing it to a UUID.
|
UUIDSerializer |
Converts the UUID to a String before serializing it.
|
Value |
An instantaneous value.
|
ValueAndTimestamp<V> |
Combines a value from a
KeyValue with a timestamp. |
ValueJoiner<V1,V2,VR> |
The
ValueJoiner interface for joining two values into a new value of arbitrary type. |
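A minimal sketch of a ValueJoiner used in a stream-table join; the topic names and String value types are illustrative assumptions:

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.ValueJoiner;

    public class JoinSketch {
        public static void wire(StreamsBuilder builder) {
            KStream<String, String> orders = builder.stream("orders");        // assumed topic
            KTable<String, String> customers = builder.table("customers");    // assumed topic
            ValueJoiner<String, String, String> joiner =
                    (order, customer) -> order + " placed by " + customer;
            orders.join(customers, joiner)
                  .to("enriched-orders");                                     // assumed topic
        }
    }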
ValueJoinerWithKey<K1,V1,V2,VR> |
The
ValueJoinerWithKey interface for joining two values into a new value of arbitrary type. |
ValueMapper<V,VR> |
The
ValueMapper interface for mapping a value to a new value of arbitrary type. |
ValueMapperWithKey<K,V,VR> |
The
ValueMapperWithKey interface for mapping a value to a new value of arbitrary type. |
Values |
Utility for converting from one Connect value to a different form.
|
Values.Parser | |
Values.SchemaDetector | |
ValueTransformer<V,VR> |
The
ValueTransformer interface for stateful mapping of a value to a new value (with possible new type). |
ValueTransformerSupplier<V,VR> |
A
ValueTransformerSupplier interface which can create one or more ValueTransformer instances. |
ValueTransformerWithKey<K,V,VR> |
The
ValueTransformerWithKey interface for stateful mapping of a value to a new value (with possible new type). |
ValueTransformerWithKeySupplier<K,V,VR> |
A
ValueTransformerWithKeySupplier interface which can create one or more ValueTransformerWithKey instances. |
Versioned |
Connect requires some components to implement this interface to define a version string.
|
VoidDeserializer | |
VoidSerializer | |
WakeupException |
Exception used to indicate preemption of a blocking operation by an external thread.
|
WallclockTimestampExtractor |
Retrieves current wall clock timestamps as
System.currentTimeMillis() . |
Window |
A single window instance, defined by its start and end timestamp.
|
WindowBytesStoreSupplier |
A store supplier that can be used to create one or more
WindowStore<Byte, byte[]> instances. |
Windowed<K> |
The result key type of a windowed stream aggregation.
|
WindowedCount |
A
SampledStat that maintains a simple count of what it has seen. |
WindowedSerdes | |
WindowedSerdes.SessionWindowedSerde<T> | |
WindowedSerdes.TimeWindowedSerde<T> | |
WindowedSum |
A
SampledStat that maintains the sum of what it has seen. |
Windows<W extends Window> |
The window specification for fixed size windows that is used to define window boundaries and grace period.
|
WindowStore<K,V> |
Interface for storing the aggregated values of fixed-size time windows.
|
WindowStoreIterator<V> |
Iterator interface of
KeyValue with key typed Long used for WindowStore.fetch(Object, long, long)
and WindowStore.fetch(Object, Instant, Instant)
Users must call its close method explicitly when done to release resources,
or use a try-with-resources statement (available since JDK 7) for this Closeable class. |
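A minimal sketch, assuming a window store named "counts-store" with String keys and Long counts, of fetching from a queryable WindowStore and closing the returned WindowStoreIterator with try-with-resources, as the description advises:

    import java.time.Duration;
    import java.time.Instant;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StoreQueryParameters;
    import org.apache.kafka.streams.state.QueryableStoreTypes;
    import org.apache.kafka.streams.state.ReadOnlyWindowStore;
    import org.apache.kafka.streams.state.WindowStoreIterator;

    public class WindowFetchSketch {
        public static void printLastHour(KafkaStreams streams, String key) {
            ReadOnlyWindowStore<String, Long> store = streams.store(
                    StoreQueryParameters.fromNameAndType("counts-store",        // assumed store name
                            QueryableStoreTypes.windowStore()));
            Instant now = Instant.now();
            // try-with-resources closes the iterator and releases its underlying resources.
            try (WindowStoreIterator<Long> iter = store.fetch(key, now.minus(Duration.ofHours(1)), now)) {
                iter.forEachRemaining(kv -> System.out.println(kv.key + " -> " + kv.value));
            }
        }
    }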