Broker Configs

The essential configurations are the following:

  • broker.id
  • log.dirs
  • zookeeper.connect

Topic-level configurations and defaults are discussed in more detail below.
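
For example, a minimal server.properties covering just these essentials might look like the following sketch (all values are illustrative):

        # Unique, stable id for this broker; -1 lets the broker generate one
        # (see broker.id.generation.enable and reserved.broker.max.id).
        broker.id=0
        # Comma-separated list of directories where partition data is stored.
        log.dirs=/var/lib/kafka/data
        # ZooKeeper connection string: host:port pairs, optionally with a chroot path.
        zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
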
Each entry below gives the property name and its description, followed by the type, default, valid values (where constrained), and importance.

    zookeeper.connect
        Zookeeper host string.
        Type: string | Default: (none) | Importance: high
    advertised.host.name
        DEPRECATED: only used when `advertised.listeners` or `listeners` are not set. Use `advertised.listeners` instead. Hostname to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, it will use the value for `host.name` if configured. Otherwise it will use the value returned from java.net.InetAddress.getCanonicalHostName().
        Type: string | Default: null | Importance: high
    advertised.listeners
        Listeners to publish to ZooKeeper for clients to use, if different than the `listeners` config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for `listeners` will be used. Unlike `listeners`, it is not valid to advertise the 0.0.0.0 meta-address.
        Type: string | Default: null | Importance: high
    advertised.port
        DEPRECATED: only used when `advertised.listeners` or `listeners` are not set. Use `advertised.listeners` instead. The port to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the port to which the broker binds. If this is not set, it will publish the same port that the broker binds to.
        Type: int | Default: null | Importance: high
    auto.create.topics.enable
        Enable auto creation of topics on the server.
        Type: boolean | Default: true | Importance: high
    auto.leader.rebalance.enable
        Enables auto leader balancing. A background thread checks and triggers leader balance if required at regular intervals.
        Type: boolean | Default: true | Importance: high
    background.threads
        The number of threads to use for various background processing tasks.
        Type: int | Default: 10 | Valid Values: [1,...] | Importance: high
    broker.id
        The broker id for this server. If unset, a unique broker id will be generated. To avoid conflicts between ZooKeeper-generated broker ids and user-configured broker ids, generated broker ids start from reserved.broker.max.id + 1.
        Type: int | Default: -1 | Importance: high
    compression.type
        Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4'). It additionally accepts 'uncompressed', which is equivalent to no compression, and 'producer', which means retain the original compression codec set by the producer.
        Type: string | Default: producer | Importance: high
    delete.topic.enable
        Enables topic deletion. Deleting a topic through the admin tool will have no effect if this config is turned off.
        Type: boolean | Default: true | Importance: high
    host.name
        DEPRECATED: only used when `listeners` is not set. Use `listeners` instead. Hostname of the broker. If this is set, it will only bind to this address. If this is not set, it will bind to all interfaces.
        Type: string | Default: "" | Importance: high
    leader.imbalance.check.interval.seconds
        The frequency with which the partition rebalance check is triggered by the controller.
        Type: long | Default: 300 | Importance: high
    leader.imbalance.per.broker.percentage
        The ratio of leader imbalance allowed per broker. The controller triggers a leader rebalance if the imbalance goes above this value for a broker. The value is specified as a percentage.
        Type: int | Default: 10 | Importance: high
    listeners
        Listener list: a comma-separated list of URIs we will listen on, with their listener names. If the listener name is not a security protocol, listener.security.protocol.map must also be set. Specify the hostname as 0.0.0.0 to bind to all interfaces, or leave it empty to bind to the default interface. Examples of legal listener lists: PLAINTEXT://myhost:9092,SSL://:9091 and CLIENT://0.0.0.0:9092,REPLICATION://localhost:9093.
        Type: string | Default: null | Importance: high
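
To illustrate how listeners and advertised.listeners interact, a broker in an IaaS environment might bind to all interfaces but advertise a resolvable hostname to clients (the hostname below is a placeholder):

        # Bind to all interfaces on port 9092.
        listeners=PLAINTEXT://0.0.0.0:9092
        # Publish a name clients can actually resolve; 0.0.0.0 is not valid here.
        advertised.listeners=PLAINTEXT://broker1.example.com:9092
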
    log.dir
        The directory in which the log data is kept (supplemental for the log.dirs property).
        Type: string | Default: /tmp/kafka-logs | Importance: high
    log.dirs
        The directories in which the log data is kept. If not set, the value in log.dir is used.
        Type: string | Default: null | Importance: high
    log.flush.interval.messages
        The number of messages accumulated on a log partition before messages are flushed to disk.
        Type: long | Default: 9223372036854775807 | Valid Values: [1,...] | Importance: high
    log.flush.interval.ms
        The maximum time in ms that a message in any topic is kept in memory before being flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used.
        Type: long | Default: null | Importance: high
    log.flush.offset.checkpoint.interval.ms
        The frequency with which we update the persistent record of the last flush, which acts as the log recovery point.
        Type: int | Default: 60000 | Valid Values: [0,...] | Importance: high
    log.flush.scheduler.interval.ms
        The frequency in ms that the log flusher checks whether any log needs to be flushed to disk.
        Type: long | Default: 9223372036854775807 | Importance: high
    log.flush.start.offset.checkpoint.interval.ms
        The frequency with which we update the persistent record of the log start offset.
        Type: int | Default: 60000 | Valid Values: [0,...] | Importance: high
    log.retention.bytes
        The maximum size of the log before deleting it.
        Type: long | Default: -1 | Importance: high
    log.retention.hours
        The number of hours to keep a log file before deleting it, tertiary to the log.retention.ms property.
        Type: int | Default: 168 | Importance: high
    log.retention.minutes
        The number of minutes to keep a log file before deleting it, secondary to the log.retention.ms property. If not set, the value in log.retention.hours is used.
        Type: int | Default: null | Importance: high
    log.retention.ms
        The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used.
        Type: long | Default: null | Importance: high
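
The three time-based retention settings form a precedence chain (ms over minutes over hours). A sketch of a three-day retention policy with a size cap, using illustrative values:

        # 3 days expressed in ms (3 * 24 * 3600 * 1000); this takes precedence
        # over log.retention.minutes and log.retention.hours.
        log.retention.ms=259200000
        # Also delete old segments once a log exceeds ~10 GiB, whichever
        # limit is reached first.
        log.retention.bytes=10737418240
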
    log.roll.hours
        The maximum time before a new log segment is rolled out (in hours), secondary to the log.roll.ms property.
        Type: int | Default: 168 | Valid Values: [1,...] | Importance: high
    log.roll.jitter.hours
        The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to the log.roll.jitter.ms property.
        Type: int | Default: 0 | Valid Values: [0,...] | Importance: high
    log.roll.jitter.ms
        The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used.
        Type: long | Default: null | Importance: high
    log.roll.ms
        The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used.
        Type: long | Default: null | Importance: high
    log.segment.bytes
        The maximum size of a single log file.
        Type: int | Default: 1073741824 | Valid Values: [14,...] | Importance: high
    log.segment.delete.delay.ms
        The amount of time to wait before deleting a file from the filesystem.
        Type: long | Default: 60000 | Valid Values: [0,...] | Importance: high
    message.max.bytes
        The largest record batch size allowed by Kafka. If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large.
        In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.
        This can be set per topic with the topic-level max.message.bytes config.
        Type: int | Default: 1000012 | Valid Values: [0,...] | Importance: high
    min.insync.replicas
        When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).
        When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This ensures that the producer raises an exception if a majority of replicas do not receive a write.
        Type: int | Default: 1 | Valid Values: [1,...] | Importance: high
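
A sketch of that typical durability setup in configuration form (values are illustrative; note that acks belongs in the producer's configuration, not the broker's):

        # Broker side: defaults applied to automatically created topics.
        default.replication.factor=3
        min.insync.replicas=2

        # Producer side (e.g. producer.properties): wait for acknowledgement
        # from all in-sync replicas before considering a write successful.
        acks=all
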
    num.io.threads
        The number of threads that the server uses for processing requests, which may include disk I/O.
        Type: int | Default: 8 | Valid Values: [1,...] | Importance: high
    num.network.threads
        The number of threads that the server uses for receiving requests from the network and sending responses to the network.
        Type: int | Default: 3 | Valid Values: [1,...] | Importance: high
    num.recovery.threads.per.data.dir
        The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
        Type: int | Default: 1 | Valid Values: [1,...] | Importance: high
    num.replica.fetchers
        Number of fetcher threads used to replicate messages from a source broker. Increasing this value can increase the degree of I/O parallelism in the follower broker.
        Type: int | Default: 1 | Importance: high
    offset.metadata.max.bytes
        The maximum size for a metadata entry associated with an offset commit.
        Type: int | Default: 4096 | Importance: high
    offsets.commit.required.acks
        The required acks before the commit can be accepted. In general, the default (-1) should not be overridden.
        Type: short | Default: -1 | Importance: high
    offsets.commit.timeout.ms
        Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout.
        Type: int | Default: 5000 | Valid Values: [1,...] | Importance: high
    offsets.load.buffer.size
        Batch size for reading from the offsets segments when loading offsets into the cache.
        Type: int | Default: 5242880 | Valid Values: [1,...] | Importance: high
    offsets.retention.check.interval.ms
        Frequency at which to check for stale offsets.
        Type: long | Default: 600000 | Valid Values: [1,...] | Importance: high
    offsets.retention.minutes
        Offsets older than this retention period will be discarded.
        Type: int | Default: 1440 | Valid Values: [1,...] | Importance: high
    offsets.topic.compression.codec
        Compression codec for the offsets topic; compression may be used to achieve "atomic" commits.
        Type: int | Default: 0 | Importance: high
    offsets.topic.num.partitions
        The number of partitions for the offset commit topic (should not change after deployment).
        Type: int | Default: 50 | Valid Values: [1,...] | Importance: high
    offsets.topic.replication.factor
        The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.
        Type: short | Default: 3 | Valid Values: [1,...] | Importance: high
    offsets.topic.segment.bytes
        The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads.
        Type: int | Default: 104857600 | Valid Values: [1,...] | Importance: high
    port
        DEPRECATED: only used when `listeners` is not set. Use `listeners` instead. The port to listen and accept connections on.
        Type: int | Default: 9092 | Importance: high
    queued.max.requests
        The number of queued requests allowed before blocking the network threads.
        Type: int | Default: 500 | Valid Values: [1,...] | Importance: high
    quota.consumer.default
        DEPRECATED: used only when dynamic default quotas are not configured for <user, client-id>, <user> or <client-id> in ZooKeeper. Any consumer distinguished by clientId/consumer group will get throttled if it fetches more bytes than this value per second.
        Type: long | Default: 9223372036854775807 | Valid Values: [1,...] | Importance: high
    quota.producer.default
        DEPRECATED: used only when dynamic default quotas are not configured for <user, client-id>, <user> or <client-id> in ZooKeeper. Any producer distinguished by clientId will get throttled if it produces more bytes than this value per second.
        Type: long | Default: 9223372036854775807 | Valid Values: [1,...] | Importance: high
    replica.fetch.min.bytes
        Minimum bytes expected for each fetch response. If not enough bytes are available, wait up to replicaMaxWaitTimeMs.
        Type: int | Default: 1 | Importance: high
    replica.fetch.wait.max.ms
        The maximum wait time for each fetcher request issued by follower replicas. This value should always be less than replica.lag.time.max.ms to prevent frequent shrinking of the ISR for low-throughput topics.
        Type: int | Default: 500 | Importance: high
    replica.high.watermark.checkpoint.interval.ms
        The frequency with which the high watermark is saved out to disk.
        Type: long | Default: 5000 | Importance: high
    replica.lag.time.max.ms
        If a follower hasn't sent any fetch requests or hasn't consumed up to the leader's log end offset for at least this time, the leader will remove the follower from the ISR.
        Type: long | Default: 10000 | Importance: high
    replica.socket.receive.buffer.bytes
        The socket receive buffer for network requests.
        Type: int | Default: 65536 | Importance: high
    replica.socket.timeout.ms
        The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms.
        Type: int | Default: 30000 | Importance: high
    request.timeout.ms
        This configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary or fail the request if retries are exhausted.
        Type: int | Default: 30000 | Importance: high
    socket.receive.buffer.bytes
        The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
        Type: int | Default: 102400 | Importance: high
    socket.request.max.bytes
        The maximum number of bytes in a socket request.
        Type: int | Default: 104857600 | Valid Values: [1,...] | Importance: high
    socket.send.buffer.bytes
        The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
        Type: int | Default: 102400 | Importance: high
    transaction.max.timeout.ms
        The maximum allowed timeout for transactions. If a client's requested transaction time exceeds this, the broker will return an error in InitProducerIdRequest. This prevents a client from specifying too large a timeout, which can stall consumers reading from topics included in the transaction.
        Type: int | Default: 900000 | Valid Values: [1,...] | Importance: high
    transaction.state.log.load.buffer.size
        Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache.
        Type: int | Default: 5242880 | Valid Values: [1,...] | Importance: high
    transaction.state.log.min.isr
        Overridden min.insync.replicas config for the transaction topic.
        Type: int | Default: 2 | Valid Values: [1,...] | Importance: high
    transaction.state.log.num.partitions
        The number of partitions for the transaction topic (should not change after deployment).
        Type: int | Default: 50 | Valid Values: [1,...] | Importance: high
    transaction.state.log.replication.factor
        The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.
        Type: short | Default: 3 | Valid Values: [1,...] | Importance: high
    transaction.state.log.segment.bytes
        The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads.
        Type: int | Default: 104857600 | Valid Values: [1,...] | Importance: high
    transactional.id.expiration.ms
        The maximum amount of time in ms that the transaction coordinator will wait before proactively expiring a producer's transactional id without receiving any transaction status updates from it.
        Type: int | Default: 604800000 | Valid Values: [1,...] | Importance: high
    unclean.leader.election.enable
        Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss.
        Type: boolean | Default: false | Importance: high
    zookeeper.connection.timeout.ms
        The max time that the client waits to establish a connection to ZooKeeper. If not set, the value in zookeeper.session.timeout.ms is used.
        Type: int | Default: null | Importance: high
    zookeeper.session.timeout.ms
        ZooKeeper session timeout.
        Type: int | Default: 6000 | Importance: high
    zookeeper.set.acl
        Set client to use secure ACLs.
        Type: boolean | Default: false | Importance: high
    broker.id.generation.enable
        Enable automatic broker id generation on the server. When enabled, the value configured for reserved.broker.max.id should be reviewed.
        Type: boolean | Default: true | Importance: medium
    broker.rack
        Rack of the broker. This will be used in rack-aware replication assignment for fault tolerance. Examples: `RACK1`, `us-east-1d`.
        Type: string | Default: null | Importance: medium
    connections.max.idle.ms
        Idle connections timeout: the server socket processor threads close connections that are idle for more than this.
        Type: long | Default: 600000 | Importance: medium
    controlled.shutdown.enable
        Enable controlled shutdown of the server.
        Type: boolean | Default: true | Importance: medium
    controlled.shutdown.max.retries
        Controlled shutdown can fail for multiple reasons. This determines the number of retries when such a failure happens.
        Type: int | Default: 3 | Importance: medium
    controlled.shutdown.retry.backoff.ms
        Before each retry, the system needs time to recover from the state that caused the previous failure (controller failover, replica lag, etc.). This config determines the amount of time to wait before retrying.
        Type: long | Default: 5000 | Importance: medium
    controller.socket.timeout.ms
        The socket timeout for controller-to-broker channels.
        Type: int | Default: 30000 | Importance: medium
    default.replication.factor
        The default replication factor for automatically created topics.
        Type: int | Default: 1 | Importance: medium
    delete.records.purgatory.purge.interval.requests
        The purge interval (in number of requests) of the delete records request purgatory.
        Type: int | Default: 1 | Importance: medium
    fetch.purgatory.purge.interval.requests
        The purge interval (in number of requests) of the fetch request purgatory.
        Type: int | Default: 1000 | Importance: medium
    group.initial.rebalance.delay.ms
        The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins.
        Type: int | Default: 3000 | Importance: medium
    group.max.session.timeout.ms
        The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.
        Type: int | Default: 300000 | Importance: medium
    group.min.session.timeout.ms
        The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources.
        Type: int | Default: 6000 | Importance: medium
    inter.broker.listener.name
        Name of the listener used for communication between brokers. If this is unset, the listener name is defined by security.inter.broker.protocol. It is an error to set this and the security.inter.broker.protocol property at the same time.
        Type: string | Default: null | Importance: medium
    inter.broker.protocol.version
        Specify which version of the inter-broker protocol will be used. This is typically bumped after all brokers have been upgraded to a new version. Examples of valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1. Check ApiVersion for the full list.
        Type: string | Default: 1.0-IV0 | Importance: medium
    log.cleaner.backoff.ms
        The amount of time to sleep when there are no logs to clean.
        Type: long | Default: 15000 | Valid Values: [0,...] | Importance: medium
    log.cleaner.dedupe.buffer.size
        The total memory used for log deduplication across all cleaner threads.
        Type: long | Default: 134217728 | Importance: medium
    log.cleaner.delete.retention.ms
        How long delete records are retained.
        Type: long | Default: 86400000 | Importance: medium
    log.cleaner.enable
        Enable the log cleaner process to run on the server. Should be enabled if using any topics with cleanup.policy=compact, including the internal offsets topic. If disabled, those topics will not be compacted and will continually grow in size.
        Type: boolean | Default: true | Importance: medium
    log.cleaner.io.buffer.load.factor
        Log cleaner dedupe buffer load factor: the percentage full the dedupe buffer can become. A higher value will allow more log to be cleaned at once but will lead to more hash collisions.
        Type: double | Default: 0.9 | Importance: medium
    log.cleaner.io.buffer.size
        The total memory used for log cleaner I/O buffers across all cleaner threads.
        Type: int | Default: 524288 | Valid Values: [0,...] | Importance: medium
    log.cleaner.io.max.bytes.per.second
        The log cleaner will be throttled so that the sum of its read and write I/O will be less than this value on average.
        Type: double | Default: 1.7976931348623157E308 | Importance: medium
    log.cleaner.min.cleanable.ratio
        The minimum ratio of dirty log to total log for a log to be eligible for cleaning.
        Type: double | Default: 0.5 | Importance: medium
    log.cleaner.min.compaction.lag.ms
        The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.
        Type: long | Default: 0 | Importance: medium
    log.cleaner.threads
        The number of background threads to use for log cleaning.
        Type: int | Default: 1 | Valid Values: [0,...] | Importance: medium
    log.cleanup.policy
        The default cleanup policy for segments beyond the retention window. A comma-separated list of valid policies. Valid policies are: "delete" and "compact".
        Type: list | Default: delete | Valid Values: [compact, delete] | Importance: medium
    log.index.interval.bytes
        The interval with which we add an entry to the offset index.
        Type: int | Default: 4096 | Valid Values: [0,...] | Importance: medium
    log.index.size.max.bytes
        The maximum size in bytes of the offset index.
        Type: int | Default: 10485760 | Valid Values: [4,...] | Importance: medium
    log.message.format.version
        Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0; check ApiVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller than or equal to the specified version. Setting this value incorrectly will cause consumers with older versions to break, as they will receive messages with a format that they don't understand.
        Type: string | Default: 1.0-IV0 | Importance: medium
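
inter.broker.protocol.version and log.message.format.version are typically adjusted together during a rolling upgrade. A sketch, assuming an upgrade from 0.11.0 to 1.0 (the version strings and sequencing here are illustrative; consult the upgrade notes for your release):

        # Step 1: run the new broker binaries while still speaking the old
        # protocol and message format.
        inter.broker.protocol.version=0.11.0
        log.message.format.version=0.11.0
        # Step 2: once every broker runs the new binaries, bump the protocol
        # (and later the message format) and restart brokers one at a time:
        # inter.broker.protocol.version=1.0
        # log.message.format.version=1.0
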
    log.message.timestamp.difference.max.ms
        The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If log.message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime. The maximum timestamp difference allowed should be no greater than log.retention.ms to avoid unnecessarily frequent log rolling.
        Type: long | Default: 9223372036854775807 | Importance: medium
    log.message.timestamp.type
        Define whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or `LogAppendTime`.
        Type: string | Default: CreateTime | Valid Values: [CreateTime, LogAppendTime] | Importance: medium
    log.preallocate
        Whether to preallocate the file when creating a new segment. If you are using Kafka on Windows, you probably need to set this to true.
        Type: boolean | Default: false | Importance: medium
    log.retention.check.interval.ms
        The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion.
        Type: long | Default: 300000 | Valid Values: [1,...] | Importance: medium
    max.connections.per.ip
        The maximum number of connections we allow from each IP address.
        Type: int | Default: 2147483647 | Valid Values: [1,...] | Importance: medium
    max.connections.per.ip.overrides
        Per-IP or hostname overrides to the default maximum number of connections.
        Type: string | Default: "" | Importance: medium
    num.partitions
        The default number of log partitions per topic.
        Type: int | Default: 1 | Valid Values: [1,...] | Importance: medium
    principal.builder.class
        The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, which is used to build the KafkaPrincipal object used during authorization. This config also supports the deprecated PrincipalBuilder interface, which was previously used for client authentication over SSL. If no principal builder is defined, the default behavior depends on the security protocol in use. For SSL authentication, the principal name will be the distinguished name from the client certificate if one is provided; otherwise, if client authentication is not required, the principal name will be ANONYMOUS. For SASL authentication, the principal will be derived using the rules defined by sasl.kerberos.principal.to.local.rules if GSSAPI is in use, and the SASL authentication ID for other mechanisms. For PLAINTEXT, the principal will be ANONYMOUS.
        Type: class | Default: null | Importance: medium
    producer.purgatory.purge.interval.requests
        The purge interval (in number of requests) of the producer request purgatory.
        Type: int | Default: 1000 | Importance: medium
    queued.max.request.bytes
        The number of queued bytes allowed before no more requests are read.
        Type: long | Default: -1 | Importance: medium
    replica.fetch.backoff.ms
        The amount of time to sleep when a fetch partition error occurs.
        Type: int | Default: 1000 | Valid Values: [0,...] | Importance: medium
    replica.fetch.max.bytes
        The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum: if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config).
        Type: int | Default: 1048576 | Valid Values: [0,...] | Importance: medium
    replica.fetch.response.max.bytes
        Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config).
        Type: int | Default: 10485760 | Valid Values: [0,...] | Importance: medium
    reserved.broker.max.id
        Max number that can be used for a broker.id.
        Type: int | Default: 1000 | Valid Values: [0,...] | Importance: medium
    sasl.enabled.mechanisms
        The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default.
        Type: list | Default: GSSAPI | Importance: medium
    sasl.kerberos.kinit.cmd
        Kerberos kinit command path.
        Type: string | Default: /usr/bin/kinit | Importance: medium
    sasl.kerberos.min.time.before.relogin
        Login thread sleep time between refresh attempts.
        Type: long | Default: 60000 | Importance: medium
    sasl.kerberos.principal.to.local.rules
        A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order, and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format, please see security authorization and acls. Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration.
        Type: list | Default: DEFAULT | Importance: medium
    sasl.kerberos.service.name
        The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.
        Type: string | Default: null | Importance: medium
    sasl.kerberos.ticket.renew.jitter
        Percentage of random jitter added to the renewal time.
        Type: double | Default: 0.05 | Importance: medium
    sasl.kerberos.ticket.renew.window.factor
        The login thread will sleep until the specified window factor of time from the last refresh to the ticket's expiry has been reached, at which time it will try to renew the ticket.
        Type: double | Default: 0.8 | Importance: medium
    sasl.mechanism.inter.broker.protocol
        SASL mechanism used for inter-broker communication. Default is GSSAPI.
        Type: string | Default: GSSAPI | Importance: medium
    security.inter.broker.protocol
        Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error to set this and the inter.broker.listener.name property at the same time.
        Type: string | Default: PLAINTEXT | Importance: medium
    ssl.cipher.suites
        A list of cipher suites. A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithms used to negotiate the security settings for a network connection using the TLS or SSL network protocol. By default, all the available cipher suites are supported.
        Type: list | Default: null | Importance: medium
    ssl.client.auth
        Configures the Kafka broker to request client authentication. The following settings are common:
        • ssl.client.auth=required Client authentication is required.
        • ssl.client.auth=requested Client authentication is optional. Unlike required, with this option the client can choose not to provide authentication information about itself.
        • ssl.client.auth=none Client authentication is not needed.
        Type: string | Default: none | Valid Values: [required, requested, none] | Importance: medium
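
For context, a sketch of an SSL-only broker that also requires client certificates (all hostnames, paths and passwords below are placeholders):

        listeners=SSL://broker1.example.com:9093
        security.inter.broker.protocol=SSL
        ssl.keystore.location=/etc/kafka/ssl/broker1.keystore.jks
        ssl.keystore.password=changeit
        ssl.key.password=changeit
        ssl.truststore.location=/etc/kafka/ssl/truststore.jks
        ssl.truststore.password=changeit
        # Require two-way (mutual) TLS authentication from clients.
        ssl.client.auth=required
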
    ssl.enabled.protocols
        The list of protocols enabled for SSL connections.
        Type: list | Default: TLSv1.2,TLSv1.1,TLSv1 | Importance: medium
    ssl.key.password
        The password of the private key in the key store file. This is optional for the client.
        Type: password | Default: null | Importance: medium
    ssl.keymanager.algorithm
        The algorithm used by the key manager factory for SSL connections. The default value is the key manager factory algorithm configured for the Java Virtual Machine.
        Type: string | Default: SunX509 | Importance: medium
    ssl.keystore.location
        The location of the key store file. This is optional for the client and can be used for two-way authentication for the client.
        Type: string | Default: null | Importance: medium
    ssl.keystore.password
        The store password for the key store file. This is optional for the client and only needed if ssl.keystore.location is configured.
        Type: password | Default: null | Importance: medium
    ssl.keystore.type
        The file format of the key store file. This is optional for the client.
        Type: string | Default: JKS | Importance: medium
    ssl.protocol
        The SSL protocol used to generate the SSLContext. The default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.
        Type: string | Default: TLS | Importance: medium
    ssl.provider
        The name of the security provider used for SSL connections. The default value is the default security provider of the JVM.
        Type: string | Default: null | Importance: medium
    ssl.trustmanager.algorithm
        The algorithm used by the trust manager factory for SSL connections. The default value is the trust manager factory algorithm configured for the Java Virtual Machine.
        Type: string | Default: PKIX | Importance: medium
    ssl.truststore.location
        The location of the trust store file.
        Type: string | Default: null | Importance: medium
    ssl.truststore.password
        The password for the trust store file. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
        Type: password | Default: null | Importance: medium
    ssl.truststore.type
        The file format of the trust store file.
        Type: string | Default: JKS | Importance: medium
    alter.config.policy.class.name
        The alter configs policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.AlterConfigPolicy interface.
        Type: class | Default: null | Importance: low
    authorizer.class.name
        The authorizer class that should be used for authorization.
        Type: string | Default: "" | Importance: low
    create.topic.policy.class.name
        The create topic policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.CreateTopicPolicy interface.
        Type: class | Default: null | Importance: low
    listener.security.protocol.map
        Map between listener names and security protocols. This must be defined for the same security protocol to be usable in more than one port or IP. For example, internal and external traffic can be separated even if SSL is required for both. Concretely, the user could define listeners with names INTERNAL and EXTERNAL and this property as `INTERNAL:SSL,EXTERNAL:SSL`. As shown, key and value are separated by a colon and map entries are separated by commas. Each listener name should only appear once in the map. Different security (SSL and SASL) settings can be configured for each listener by adding a normalised prefix (the listener name is lowercased) to the config name. For example, to set a different keystore for the INTERNAL listener, a config with the name `listener.name.internal.ssl.keystore.location` would be set. If the config for the listener name is not set, the config will fall back to the generic config (i.e. `ssl.keystore.location`).
        Type: string | Default: PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL | Importance: low
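
Putting the INTERNAL/EXTERNAL example from this entry into a configuration sketch (listener names follow the entry above; hostnames and paths are placeholders):

        listeners=INTERNAL://:9092,EXTERNAL://broker1.example.com:9093
        listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
        inter.broker.listener.name=INTERNAL
        # Per-listener overrides: a separate keystore for each listener; any
        # setting not overridden falls back to the generic ssl.* configs.
        listener.name.internal.ssl.keystore.location=/etc/kafka/ssl/internal.keystore.jks
        listener.name.external.ssl.keystore.location=/etc/kafka/ssl/external.keystore.jks
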
    metric.reporters
        A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.
        Type: list | Default: "" | Importance: low
    metrics.num.samples
        The number of samples maintained to compute metrics.
        Type: int | Default: 2 | Valid Values: [1,...] | Importance: low
    metrics.recording.level
        The highest recording level for metrics.
        Type: string | Default: INFO | Importance: low
    metrics.sample.window.ms
        The window of time a metrics sample is computed over.
        Type: long | Default: 30000 | Valid Values: [1,...] | Importance: low
    quota.window.num
        The number of samples to retain in memory for client quotas.
        Type: int | Default: 11 | Valid Values: [1,...] | Importance: low
    quota.window.size.seconds
        The time span of each sample for client quotas.
        Type: int | Default: 1 | Valid Values: [1,...] | Importance: low
    replication.quota.window.num
        The number of samples to retain in memory for replication quotas.
        Type: int | Default: 11 | Valid Values: [1,...] | Importance: low
    replication.quota.window.size.seconds
        The time span of each sample for replication quotas.
        Type: int | Default: 1 | Valid Values: [1,...] | Importance: low
    ssl.endpoint.identification.algorithm
        The endpoint identification algorithm used to validate the server hostname using the server certificate.
        Type: string | Default: null | Importance: low
    ssl.secure.random.implementation
        The SecureRandom PRNG implementation to use for SSL cryptography operations.
        Type: string | Default: null | Importance: low
    transaction.abort.timed.out.transaction.cleanup.interval.ms
        The interval at which to roll back transactions that have timed out.
        Type: int | Default: 60000 | Valid Values: [1,...] | Importance: low
    transaction.remove.expired.transaction.cleanup.interval.ms
        The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing.
        Type: int | Default: 3600000 | Valid Values: [1,...] | Importance: low
    zookeeper.sync.time.ms
        How far a ZK follower can be behind a ZK leader.
        Type: int | Default: 2000 | Importance: low

More details about broker configuration can be found in the Scala class kafka.server.KafkaConfig.