Class ConnectRecord<R extends ConnectRecord<R>>
java.lang.Object
org.apache.kafka.connect.connector.ConnectRecord<R>
- Direct Known Subclasses:
SinkRecord, SourceRecord
Base class for records containing data to be copied to/from Kafka. This corresponds closely to Kafka's ProducerRecord and ConsumerRecord classes, and holds the data that may be used by both sources and sinks (topic, kafkaPartition, key, value, timestamp, headers). Although both implementations include a notion of offset, it is not included here because they differ in type.
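As a sketch of how this shared shape is typically used, the hypothetical helper below (the class name is illustrative, not part of the API) touches only the accessors defined on this class, so it works for both SinkRecord and SourceRecord:

import org.apache.kafka.connect.connector.ConnectRecord;

public class RecordDescriber {
    // Illustrative helper: it reads only the fields ConnectRecord itself exposes,
    // so any SinkRecord or SourceRecord can be passed in.
    public static String describe(ConnectRecord<?> record) {
        return String.format("topic=%s partition=%s key=%s value=%s timestamp=%s headers=%s",
                record.topic(), record.kafkaPartition(), record.key(),
                record.value(), record.timestamp(), record.headers());
    }
}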
-
Constructor Summary
ConnectRecord(String topic, Integer kafkaPartition, Schema keySchema, Object key, Schema valueSchema, Object value, Long timestamp)
ConnectRecord(String topic, Integer kafkaPartition, Schema keySchema, Object key, Schema valueSchema, Object value, Long timestamp, Iterable<Header> headers)
-
Method Summary
boolean equals(Object o)
int hashCode()
Headers headers()
    Get the headers for this record.
Object key()
Schema keySchema()
Integer kafkaPartition()
abstract R newRecord(String topic, Integer kafkaPartition, Schema keySchema, Object key, Schema valueSchema, Object value, Long timestamp)
    Create a new record of the same type as itself, with the specified parameter values.
abstract R newRecord(String topic, Integer kafkaPartition, Schema keySchema, Object key, Schema valueSchema, Object value, Long timestamp, Iterable<Header> headers)
    Create a new record of the same type as itself, with the specified parameter values.
Long timestamp()
String topic()
String toString()
Object value()
Schema valueSchema()
-
Constructor Details
-
ConnectRecord
public ConnectRecord(String topic, Integer kafkaPartition, Schema keySchema, Object key, Schema valueSchema, Object value, Long timestamp)
-
ConnectRecord
public ConnectRecord(String topic, Integer kafkaPartition, Schema keySchema, Object key, Schema valueSchema, Object value, Long timestamp, Iterable<Header> headers)
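Since ConnectRecord is abstract, these constructors are invoked from subclasses such as SinkRecord and SourceRecord. A minimal sketch of a hypothetical subclass (the class name is illustrative only) might look like this:

import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.header.Header;

// Hypothetical subclass for illustration only; real connectors use SinkRecord or SourceRecord.
public class ExampleRecord extends ConnectRecord<ExampleRecord> {

    public ExampleRecord(String topic, Integer kafkaPartition, Schema keySchema, Object key,
                         Schema valueSchema, Object value, Long timestamp) {
        super(topic, kafkaPartition, keySchema, key, valueSchema, value, timestamp);
    }

    public ExampleRecord(String topic, Integer kafkaPartition, Schema keySchema, Object key,
                         Schema valueSchema, Object value, Long timestamp, Iterable<Header> headers) {
        super(topic, kafkaPartition, keySchema, key, valueSchema, value, timestamp, headers);
    }

    @Override
    public ExampleRecord newRecord(String topic, Integer kafkaPartition, Schema keySchema, Object key,
                                   Schema valueSchema, Object value, Long timestamp) {
        // Carry a copy of this record's headers over, as the newRecord contract describes.
        return newRecord(topic, kafkaPartition, keySchema, key, valueSchema, value, timestamp,
                headers().duplicate());
    }

    @Override
    public ExampleRecord newRecord(String topic, Integer kafkaPartition, Schema keySchema, Object key,
                                   Schema valueSchema, Object value, Long timestamp, Iterable<Header> headers) {
        return new ExampleRecord(topic, kafkaPartition, keySchema, key, valueSchema, value, timestamp, headers);
    }
}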
-
Method Details
-
topic
public String topic()
-
kafkaPartition
public Integer kafkaPartition()
-
key
public Object key()
-
keySchema
public Schema keySchema()
-
value
public Object value()
-
valueSchema
public Schema valueSchema()
-
timestamp
public Long timestamp()
-
headers
public Headers headers()
Get the headers for this record.
- Returns:
- the headers; never null
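Because the returned Headers object is mutable, it can be modified in place. A small sketch, with a made-up header key and value:

import org.apache.kafka.connect.connector.ConnectRecord;

public class HeaderTagger {
    // Adds an illustrative string header to any record; key and value are examples only.
    public static void tag(ConnectRecord<?> record) {
        record.headers().addString("example-origin", "my-connector");
    }
}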
-
newRecord
public abstract R newRecord(String topic, Integer kafkaPartition, Schema keySchema, Object key, Schema valueSchema, Object value, Long timestamp)
Create a new record of the same type as itself, with the specified parameter values. All other fields in this record will be copied over to the new record. Since the headers are mutable, the resulting record will have a copy of this record's headers.
- Parameters:
topic - the name of the topic; may be null
kafkaPartition - the partition number for the Kafka topic; may be null
keySchema - the schema for the key; may be null
key - the key; may be null
valueSchema - the schema for the value; may be null
value - the value; may be null
timestamp - the timestamp; may be null
- Returns:
- the new record
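For example, a transformation can use this method to emit a modified copy of an incoming record. The replacement value and schema below are purely illustrative:

import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.data.Schema;

public class ValueMasker {
    // Sketch of the common copy-and-modify pattern: only the value (and its schema)
    // changes; topic, partition, key, and timestamp are carried over, and newRecord
    // copies this record's headers as described above.
    public static <R extends ConnectRecord<R>> R mask(R record) {
        return record.newRecord(record.topic(), record.kafkaPartition(),
                record.keySchema(), record.key(),
                Schema.OPTIONAL_STRING_SCHEMA, "****",
                record.timestamp());
    }
}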
-
newRecord
public abstract R newRecord(String topic, Integer kafkaPartition, Schema keySchema, Object key, Schema valueSchema, Object value, Long timestamp, Iterable<Header> headers)
Create a new record of the same type as itself, with the specified parameter values. All other fields in this record will be copied over to the new record.
- Parameters:
topic - the name of the topic; may be null
kafkaPartition - the partition number for the Kafka topic; may be null
keySchema - the schema for the key; may be null
key - the key; may be null
valueSchema - the schema for the value; may be null
value - the value; may be null
timestamp - the timestamp; may be null
headers - the headers; may be null or empty
- Returns:
- the new record
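This overload is useful when the copy should carry an explicitly assembled header set. A sketch, with an illustrative header key and a caller-supplied target topic:

import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.header.ConnectHeaders;
import org.apache.kafka.connect.header.Headers;

public class TopicRerouter {
    // Sketch: copy the record onto another topic, forwarding the existing headers
    // plus one extra header recording which topic the record came from.
    public static <R extends ConnectRecord<R>> R reroute(R record, String targetTopic) {
        Headers headers = new ConnectHeaders(record.headers());
        headers.addString("rerouted-from", record.topic());
        return record.newRecord(targetTopic, record.kafkaPartition(),
                record.keySchema(), record.key(),
                record.valueSchema(), record.value(),
                record.timestamp(), headers);
    }
}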
-
toString
public String toString()
-
equals
public boolean equals(Object o)
-
hashCode
public int hashCode()
-