...

Text file src/github.com/twmb/franz-go/generate/definitions/misc

Documentation: github.com/twmb/franz-go/generate/definitions

// MessageV0 is the message format Kafka used prior to 0.10.
//
// To produce or fetch messages, Kafka would write many messages contiguously
// as an array without specifying the array length.
MessageV0 => not top level
  // Offset is the offset of this record.
  //
  // If this is the outer message of a recursive message set (i.e. a
  // message set has been compressed and this is the outer message),
  // then the offset should be the offset of the last inner value.
  Offset: int64
  // MessageSize is the size of everything that follows in this message.
  MessageSize: int32
  // CRC is the crc of everything that follows this field (NOT using the
  // Castagnoli polynomial, as is the case in the 0.11+ RecordBatch).
  CRC: int32
  // Magic is 0.
  Magic: int8
  // Attributes describe the attributes of this message.
  //
  // The first three bits correspond to compression:
  //   - 000 is no compression
  //   - 001 is gzip compression
  //   - 010 is snappy compression
  //
  // The remaining bits are unused and must be 0.
  Attributes: int8
  // Key is a blob of data for a record.
  //
  // Keys are usually used for hashing the record to specific Kafka partitions.
  Key: nullable-bytes
  // Value is a blob of data. This field is the main "message" portion of a
  // record.
  Value: nullable-bytes

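As a concrete reference, the sketch below decodes one MessageV0 by hand with only the standard library. This is a minimal illustration, not franz-go's generated code; the package name misc and all helper names are invented here, and later sketches in this file reuse the same hypothetical package. All integers are big-endian, and nullable-bytes is an int32 length (-1 meaning null) followed by that many bytes.

    package misc

    import (
        "encoding/binary"
        "errors"
    )

    // MessageV0 mirrors the fields of the definition above.
    type MessageV0 struct {
        Offset      int64
        MessageSize int32
        CRC         int32
        Magic       int8
        Attributes  int8
        Key         []byte // nil if the encoded length was -1
        Value       []byte // nil if the encoded length was -1
    }

    // readNullableBytes reads an int32 length prefix (-1 meaning null) and
    // then that many bytes, returning the remainder of the buffer.
    func readNullableBytes(b []byte) (val, rest []byte, err error) {
        if len(b) < 4 {
            return nil, nil, errors.New("short buffer")
        }
        n := int32(binary.BigEndian.Uint32(b))
        b = b[4:]
        if n < 0 {
            return nil, b, nil
        }
        if int(n) > len(b) {
            return nil, nil, errors.New("short buffer")
        }
        return b[:n], b[n:], nil
    }

    // decodeMessageV0 decodes one v0 message from the front of b.
    func decodeMessageV0(b []byte) (m MessageV0, err error) {
        if len(b) < 18 { // 8+4+4+1+1 fixed bytes precede Key
            return m, errors.New("short buffer")
        }
        m.Offset = int64(binary.BigEndian.Uint64(b))
        m.MessageSize = int32(binary.BigEndian.Uint32(b[8:]))
        m.CRC = int32(binary.BigEndian.Uint32(b[12:]))
        m.Magic = int8(b[16])
        m.Attributes = int8(b[17])
        rest := b[18:]
        if m.Key, rest, err = readNullableBytes(rest); err != nil {
            return m, err
        }
        m.Value, _, err = readNullableBytes(rest)
        return m, err
    }
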
// MessageV1 is the message format Kafka used prior to 0.11.
//
// To produce or fetch messages, Kafka would write many messages contiguously
// as an array without specifying the array length.
//
// To support compression, an entire message set would be compressed and used
// as the Value in another message set (thus being "recursive"). The key for
// this outer message set must be null.
MessageV1 => not top level
  // Offset is the offset of this record.
  //
  // Different from v0, if this message set is a recursive message set
  // (that is, compressed and inside another message set), the offset
  // on the inner set is relative to the offset of the outer set.
  Offset: int64
  // MessageSize is the size of everything that follows in this message.
  MessageSize: int32
  // CRC is the crc of everything that follows this field (NOT using the
  // Castagnoli polynomial, as is the case in the 0.11+ RecordBatch).
  CRC: int32
  // Magic is 1.
  Magic: int8
  // Attributes describe the attributes of this message.
  //
  // The first three bits correspond to compression:
  //   - 000 is no compression
  //   - 001 is gzip compression
  //   - 010 is snappy compression
  //
  // Bit 4 is the timestamp type, with 0 meaning CreateTime corresponding
  // to the timestamp being from the producer, and 1 meaning LogAppendTime
  // corresponding to the timestamp being from the broker.
  // Setting this to LogAppendTime will cause batches to be rejected.
  //
  // The remaining bits are unused and must be 0.
  Attributes: int8
  // Timestamp is the millisecond timestamp of this message.
  Timestamp: int64
  // Key is a blob of data for a record.
  //
  // Keys are usually used for hashing the record to specific Kafka partitions.
  Key: nullable-bytes
  // Value is a blob of data. This field is the main "message" portion of a
  // record.
  Value: nullable-bytes

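The Attributes byte above is a plain bit mask; a small sketch of extracting the codec and the (v1-only) timestamp type, with names invented here:

    package misc

    // Masks for the v0/v1 message Attributes byte described above.
    const (
        codecMask         = 0x07 // lowest three bits: 0 none, 1 gzip, 2 snappy
        logAppendTimeMask = 0x08 // bit 4 (v1 only): the timestamp type
    )

    func codec(attrs int8) int8 { return attrs & codecMask }

    func isLogAppendTime(attrs int8) bool { return attrs&logAppendTimeMask != 0 }
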
// Header is user provided metadata for a record. Kafka does not look at
// headers at all; they are solely for producers and consumers.
Header => not top level
  Key: varint-string
  Value: varint-bytes

// NOTE: we manually manage Record now because of the varint => varlong switch.
// To regenerate the base autogenerated functions, uncomment this.
//
// // A Record is a Kafka v0.11.0.0 record. It corresponds to an individual
// // message as it is written on the wire.
// Record => not top level
//   // Length is the length of this record on the wire of everything that
//   // follows this field. It is an int32 encoded as a varint.
//   Length: varint
//   // Attributes are record level attributes. This field currently is unused.
//   Attributes: int8
//   // TimestampDelta is the millisecond delta of this record's timestamp
//   // from the record's RecordBatch's FirstTimestamp.
//   TimestampDelta: varlong
//   // OffsetDelta is the delta of this record's offset from the record's
//   // RecordBatch's FirstOffset.
//   //
//   // For producing, this is usually equal to the index of the record in
//   // the record batch.
//   OffsetDelta: varint
//   // Key is a blob of data for a record.
//   //
//   // Keys are usually used for hashing the record to specific Kafka partitions.
//   Key: varint-bytes
//   // Value is a blob of data. This field is the main "message" portion of a
//   // record.
//   Value: varint-bytes
//   // Headers are optional user provided metadata for records. Unlike normal
//   // arrays, the number of headers is encoded as a varint.
//   Headers: varint[Header]

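The varint and varlong types used by Record are zigzag base-128 varints, the same scheme Go's encoding/binary uses for signed varints, so a sketch of the encoding can lean directly on the standard library:

    package misc

    import "encoding/binary"

    // appendVarint appends v as a zigzag base-128 varint, matching the record
    // encoding for both varint (int32 range) and varlong values.
    func appendVarint(dst []byte, v int64) []byte {
        return binary.AppendVarint(dst, v)
    }

    // readVarint decodes a zigzag varint from the front of b.
    func readVarint(b []byte) (v int64, rest []byte, ok bool) {
        v, n := binary.Varint(b)
        if n <= 0 {
            return 0, nil, false
        }
        return v, b[n:], true
    }
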
// RecordBatch is a Kafka concept that groups many individual records together
// in a more optimized format.
RecordBatch => not top level
  // FirstOffset is the first offset in a record batch.
  //
  // For producing, this is usually 0.
  FirstOffset: int64
  // Length is the wire length of everything that follows this field.
  Length: int32
  // PartitionLeaderEpoch is the leader epoch of the broker at the time
  // this batch was written. Kafka uses this for cluster communication,
  // but clients can also use this to better aid truncation detection.
  // See KIP-320. Producers should set this to -1.
  PartitionLeaderEpoch: int32
  // Magic is the current "magic" number of this message format.
  // The current magic number is 2.
  Magic: int8
  // CRC is the crc of everything that follows this field using the
  // Castagnoli polynomial.
  CRC: int32
  // Attributes describe the records array of this batch.
  //
  // The first three bits correspond to compression:
  //   - 000 is no compression
  //   - 001 is gzip compression
  //   - 010 is snappy compression
  //   - 011 is lz4 compression
  //   - 100 is zstd compression (produce request version 7+)
  //
  // Bit 4 is the timestamp type, with 0 meaning CreateTime corresponding
  // to the timestamp being from the producer, and 1 meaning LogAppendTime
  // corresponding to the timestamp being from the broker.
  // Setting this to LogAppendTime will cause batches to be rejected.
  //
  // Bit 5 indicates whether the batch is part of a transaction (1 is yes).
  //
  // Bit 6 indicates if the batch includes a control message (1 is yes).
  // Control messages are used to enable transactions and are generated from
  // the broker. Clients should not return control batches to applications.
  Attributes: int16
  // LastOffsetDelta is the offset of the last message in a batch. This is used
  // by the broker to ensure correct behavior even with batch compaction.
  LastOffsetDelta: int32
  // FirstTimestamp is the timestamp (in milliseconds) of the first record
  // in a batch.
  FirstTimestamp: int64
  // MaxTimestamp is the timestamp (in milliseconds) of the last record
  // in a batch. Similar to LastOffsetDelta, this is used to ensure correct
  // behavior with compacting.
  MaxTimestamp: int64
  // ProducerID is the broker assigned producerID from an InitProducerID
  // request.
  //
  // Clients that wish to support idempotent messages and transactions must
  // set this field.
  //
  // Note that when not using transactions, any producer here is always
  // accepted (and the epoch is always zero). Outside transactions, the ID
  // is used only to deduplicate requests (and there must be at most 5
  // concurrent requests).
  ProducerID: int64
  // ProducerEpoch is the broker assigned producerEpoch from an InitProducerID
  // request.
  //
  // Clients that wish to support idempotent messages and transactions must
  // set this field.
  ProducerEpoch: int16
  // FirstSequence is the producer assigned sequence number used by the
  // broker to deduplicate messages.
  //
  // Clients that wish to support idempotent messages and transactions must
  // set this field.
  //
  // The sequence number for each record in a batch is OffsetDelta + FirstSequence.
  FirstSequence: int32
  // NumRecords is the number of records in the array below.
  //
  // This is separate from Records due to the potential for records to be
  // compressed.
  NumRecords: int32
  // Records contains records, either compressed or uncompressed.
  //
  // For uncompressed records, this is an array of records ([Record]).
  //
  // For compressed records, the length of the uncompressed array is kept
  // but everything that follows is compressed.
  //
  // The number of bytes is expected to be the Length field minus 49, which
  // is the combined byte size of the fields between Length and Records.
  Records: length-field-minus => Length - 49

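Unlike v0/v1 messages, the batch CRC uses the Castagnoli polynomial (available in hash/crc32), and it covers everything after the CRC field itself, i.e. from Attributes through the end of Records. A minimal sketch:

    package misc

    import "hash/crc32"

    var castagnoli = crc32.MakeTable(crc32.Castagnoli)

    // batchCRC checksums the batch bytes that follow the 4-byte CRC field,
    // i.e. everything from Attributes onward.
    func batchCRC(afterCRC []byte) int32 {
        return int32(crc32.Checksum(afterCRC, castagnoli))
    }
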
// OffsetCommitKey is the key for the Kafka internal __consumer_offsets topic
// if the key starts with an int16 with a value of 0 or 1.
//
// This type was introduced in KAFKA-1012 commit a670537aa3 with release 0.8.2
// and has been in use ever since.
OffsetCommitKey => not top level, with version field
  // Version is the encoding version this key is using.
  Version: int16
  // Group is the group being committed.
  Group: string
  // Topic is the topic being committed.
  Topic: string
  // Partition is the partition being committed.
  Partition: int32

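A sketch of decoding this key, reusing the hypothetical misc package from the earlier sketches (strings here are int16-length prefixed):

    package misc

    import (
        "encoding/binary"
        "errors"
    )

    type OffsetCommitKey struct {
        Version   int16
        Group     string
        Topic     string
        Partition int32
    }

    // readString reads an int16 length prefix and then that many bytes.
    func readString(b []byte) (s string, rest []byte, err error) {
        if len(b) < 2 {
            return "", nil, errors.New("short buffer")
        }
        n := int(int16(binary.BigEndian.Uint16(b)))
        b = b[2:]
        if n < 0 || n > len(b) {
            return "", nil, errors.New("bad string length")
        }
        return string(b[:n]), b[n:], nil
    }

    func decodeOffsetCommitKey(b []byte) (k OffsetCommitKey, err error) {
        if len(b) < 2 {
            return k, errors.New("short buffer")
        }
        if k.Version = int16(binary.BigEndian.Uint16(b)); k.Version > 1 {
            return k, errors.New("not an offset commit key")
        }
        b = b[2:]
        if k.Group, b, err = readString(b); err != nil {
            return k, err
        }
        if k.Topic, b, err = readString(b); err != nil {
            return k, err
        }
        if len(b) < 4 {
            return k, errors.New("short buffer")
        }
        k.Partition = int32(binary.BigEndian.Uint32(b))
        return k, nil
    }
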
// OffsetCommitValue is the value for the Kafka internal __consumer_offsets
// topic if the key is of OffsetCommitKey type.
//
// Version 0 was introduced with the key version 0.
//
// KAFKA-1634 commit c5df2a8e3a in 0.9.0 released version 1.
//
// KAFKA-4682 commit 418a91b5d4, proposed in KIP-211 and included in 2.1.0,
// released version 2.
//
// KAFKA-7437 commit 9f7267dd2f, proposed in KIP-320 and included in 2.1.0,
// released version 3.
OffsetCommitValue => not top level, with version field
  // Version is the encoding version this value is using.
  Version: int16
  // Offset is the committed offset.
  Offset: int64
  // LeaderEpoch is the epoch of the leader committing this message.
  LeaderEpoch: int32 // v3+
  // Metadata is the metadata included in the commit.
  Metadata: string
  // CommitTimestamp is when this commit occurred.
  CommitTimestamp: int64
  // ExpireTimestamp, introduced in v1 and dropped in v2 with KIP-211,
  // is when this commit expires.
  ExpireTimestamp: int64 // v1-v1
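
A sketch of version-gated decoding for this value, reusing readString from the key sketch above: fields are read in wire order, and the version bounds noted on each field decide whether the field is present.

    package misc

    import (
        "encoding/binary"
        "errors"
    )

    type OffsetCommitValue struct {
        Version         int16
        Offset          int64
        LeaderEpoch     int32
        Metadata        string
        CommitTimestamp int64
        ExpireTimestamp int64
    }

    func decodeOffsetCommitValue(b []byte) (v OffsetCommitValue, err error) {
        u64 := func() (n uint64, ok bool) { // read a big-endian int64
            if len(b) < 8 {
                return 0, false
            }
            n = binary.BigEndian.Uint64(b)
            b = b[8:]
            return n, true
        }
        if len(b) < 2 {
            return v, errors.New("short buffer")
        }
        v.Version = int16(binary.BigEndian.Uint16(b))
        b = b[2:]
        n, ok := u64()
        if !ok {
            return v, errors.New("short buffer")
        }
        v.Offset = int64(n)
        if v.Version >= 3 { // LeaderEpoch exists in v3+ only
            if len(b) < 4 {
                return v, errors.New("short buffer")
            }
            v.LeaderEpoch = int32(binary.BigEndian.Uint32(b))
            b = b[4:]
        }
        if v.Metadata, b, err = readString(b); err != nil {
            return v, err
        }
        if n, ok = u64(); !ok {
            return v, errors.New("short buffer")
        }
        v.CommitTimestamp = int64(n)
        if v.Version == 1 { // ExpireTimestamp exists in v1 only
            if n, ok = u64(); !ok {
                return v, errors.New("short buffer")
            }
            v.ExpireTimestamp = int64(n)
        }
        return v, nil
    }
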
// GroupMetadataKey is the key for the Kafka internal __consumer_offsets topic
// if the key starts with an int16 with a value of 2.
//
// This type was introduced in KAFKA-2017 commit 7c33475274 with release 0.9.0
// and has been in use ever since.
GroupMetadataKey => not top level, with version field
  // Version is the encoding version this key is using.
  Version: int16
  // Group is the group this metadata is for.
  Group: string

// GroupMetadataValue is the value for the Kafka internal __consumer_offsets
// topic if the key is of GroupMetadataKey type.
//
// Version 0 was introduced with the key version 0.
//
// KAFKA-3888 commit 40b1dd3f49, proposed in KIP-62 and included in 0.10.1,
// released version 1.
//
// KAFKA-4682 commit 418a91b5d4, proposed in KIP-211 and included in 2.1.0,
// released version 2.
//
// KAFKA-7862 commit 0f995ba6be, proposed in KIP-345 and included in 2.3.0,
// released version 3.
GroupMetadataValue => not top level, with version field
  // Version is the version of this value.
  Version: int16
  // ProtocolType is the type of protocol being used for the group
  // (e.g., "consumer").
  ProtocolType: string
  // Generation is the generation of this group.
  Generation: int32
  // Protocol is the agreed upon protocol that all members are using to
  // partition (e.g., "sticky").
  Protocol: nullable-string
  // Leader is the group leader.
  Leader: nullable-string
  // CurrentStateTimestamp is the timestamp for this state of the group
  // (stable, etc.).
  CurrentStateTimestamp: int64 // v2+
  // Members are the group members.
  Members: [=>]
    // MemberID is the ID of a group member.
    MemberID: string
    // InstanceID is the instance ID of this member in the group (KIP-345).
    InstanceID: nullable-string // v3+
    // ClientID is the client ID of this group member.
    ClientID: string
    // ClientHost is the hostname of this group member.
    ClientHost: string
    // RebalanceTimeoutMillis is the rebalance timeout of this group member.
    RebalanceTimeoutMillis: int32 // v1+
    // SessionTimeoutMillis is the session timeout of this group member.
    SessionTimeoutMillis: int32
    // Subscription is the subscription of this group member.
    Subscription: bytes
    // Assignment is what the leader assigned this group member.
    Assignment: bytes

// TxnMetadataKey is the key for the Kafka internal __transaction_state topic
// if the key starts with an int16 with a value of 0.
TxnMetadataKey => not top level, with version field
  // Version is the version of this type.
  Version: int16
  // TransactionalID is the transactional ID this record is for.
  TransactionalID: string

// TxnMetadataValue is the value for the Kafka internal __transaction_state
// topic if the key is of TxnMetadataKey type.
TxnMetadataValue => not top level, with version field
  // Version is the version of this value.
  Version: int16
  // ProducerID is the ID in use by the transactional ID.
  ProducerID: int64
  // ProducerEpoch is the epoch associated with the producer ID.
  ProducerEpoch: int16
  // TimeoutMillis is the timeout of this transaction in milliseconds.
  TimeoutMillis: int32
  // State is the state this transaction is in:
  //   - 0 is Empty
  //   - 1 is Ongoing
  //   - 2 is PrepareCommit
  //   - 3 is PrepareAbort
  //   - 4 is CompleteCommit
  //   - 5 is CompleteAbort
  //   - 6 is Dead
  //   - 7 is PrepareEpochFence
  State: enum-TransactionState
  // Topics are topics that are involved in this transaction.
  Topics: [=>]
    // Topic is a topic involved in this transaction.
    Topic: string
    // Partitions are partitions in this topic involved in the transaction.
    Partitions: [int32]
  // LastUpdateTimestamp is the timestamp in millis of when this transaction
  // was last updated.
  LastUpdateTimestamp: int64
  // StartTimestamp is the timestamp in millis of when this transaction started.
  StartTimestamp: int64

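As a sketch, the State numbering maps directly onto a Go enum (the names below are illustrative, not franz-go's generated ones):

    package misc

    // TransactionState mirrors the State numbering documented above.
    type TransactionState int8

    const (
        TxnStateEmpty             TransactionState = iota // 0
        TxnStateOngoing                                   // 1
        TxnStatePrepareCommit                             // 2
        TxnStatePrepareAbort                              // 3
        TxnStateCompleteCommit                            // 4
        TxnStateCompleteAbort                             // 5
        TxnStateDead                                      // 6
        TxnStatePrepareEpochFence                         // 7
    )
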
// StickyMemberMetadata is what is encoded in UserData for
// ConsumerMemberMetadata in group join requests with the sticky partitioning
// strategy.
//
// V1 added generation, which fixed a bug with flaky group members joining
// repeatedly. See KIP-341 for more details.
//
// Note that clients should always try decoding as v1 and, if that fails,
// fall back to v0. This is necessary due to there being no version number
// anywhere in this type.
StickyMemberMetadata => not top level, no encoding
  // CurrentAssignment is the assignment that a group member has when
  // issuing a join.
  CurrentAssignment: [=>]
    // Topic is a topic the group member is currently assigned.
    Topic: string
    // Partitions are the partitions within a topic that a group member is
    // currently assigned.
    Partitions: [int32]
  // Generation is the generation of this join. This is incremented every join.
  Generation: int32(-1) // v1+

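One workable decoding strategy, sketched below (a variant of the try-v1-first advice above, not necessarily franz-go's exact approach): decode the assignment array, then let the leftover byte count disambiguate, since v1 ends with a 4-byte Generation and v0 ends immediately. This reuses readString from the OffsetCommitKey sketch.

    package misc

    import (
        "encoding/binary"
        "errors"
    )

    type stickyAssignment struct {
        Topic      string
        Partitions []int32
    }

    func decodeSticky(b []byte) (assigns []stickyAssignment, generation int32, err error) {
        i32 := func() (int32, bool) { // read a big-endian int32
            if len(b) < 4 {
                return 0, false
            }
            n := int32(binary.BigEndian.Uint32(b))
            b = b[4:]
            return n, true
        }
        n, ok := i32()
        if !ok {
            return nil, 0, errors.New("short buffer")
        }
        for ; n > 0; n-- {
            var a stickyAssignment
            if a.Topic, b, err = readString(b); err != nil {
                return nil, 0, err
            }
            np, ok := i32()
            if !ok {
                return nil, 0, errors.New("short buffer")
            }
            for ; np > 0; np-- {
                p, ok := i32()
                if !ok {
                    return nil, 0, errors.New("short buffer")
                }
                a.Partitions = append(a.Partitions, p)
            }
            assigns = append(assigns, a)
        }
        switch len(b) {
        case 0:
            return assigns, -1, nil // v0: no Generation field on the wire
        case 4:
            generation, _ = i32() // v1
            return assigns, generation, nil
        default:
            return nil, 0, errors.New("malformed sticky metadata")
        }
    }
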
// ConsumerMemberMetadata is the metadata that is usually sent with a join group
// request with the "consumer" protocol (normal, non-connect consumers).
ConsumerMemberMetadata => not top level, with version field
  // Version is 0, 1, 2, or 3.
  Version: int16
  // Topics is the list of topics in the group that this member is interested
  // in consuming.
  Topics: [string]
  // UserData is arbitrary client data for a given client in the group.
  // For sticky assignment, this is StickyMemberMetadata.
  UserData: nullable-bytes
  // OwnedPartitions, introduced for KIP-429, are the partitions that this
  // member currently owns.
  OwnedPartitions: [=>] // v1+
    Topic: string
    Partitions: [int32]
  // Generation is the generation of the group.
  Generation: int32(-1) // v2+
  // Rack, if non-nil, opts into rack-aware replica assignment.
  Rack: nullable-string // v3+

// ConsumerMemberAssignment is the assignment data that is usually sent with a
// sync group request with the "consumer" protocol (normal, non-connect
// consumers).
ConsumerMemberAssignment => not top level, with version field
  // Version is 0, 1, or 2.
  Version: int16
  // Topics contains topics in the assignment.
  Topics: [=>]
    // Topic is a topic in the assignment.
    Topic: string
    // Partitions contains partitions in the assignment.
    Partitions: [int32]
  // UserData is arbitrary client data for a given client in the group.
  UserData: nullable-bytes

// ConnectMemberMetadata is the metadata used in a join group request with the
// "connect" protocol. v1 introduced incremental cooperative rebalancing (akin
// to cooperative-sticky) per KIP-415.
//
//     v0 defined in connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/ConnectProtocol.java
//     v1+ defined in connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/IncrementalCooperativeConnectProtocol.java
//
ConnectMemberMetadata => not top level, with version field
  Version: int16
  URL: string
  ConfigOffset: int64
  CurrentAssignment: nullable-bytes // v1+

// ConnectMemberAssignment is the assignment that is used in a sync group
// request with the "connect" protocol. See ConnectMemberMetadata for links to
// the Kafka code where these fields are defined.
ConnectMemberAssignment => not top level, with version field
  Version: int16
  Error: int16
  Leader: string
  LeaderURL: string
  ConfigOffset: int64
  Assignment: [=>]
    Connector: string
    Tasks: [int16]
  Revoked: [=>] // v1+
    Connector: string
    Tasks: [int16]
  ScheduledDelay: int32 // v1+

// DefaultPrincipalData is the encoded principal data. This is used in an
// envelope request from broker to broker.
DefaultPrincipalData => not top level, with version field, flexible v0+
  Version: int16
  // Type is the principal type.
  Type: string
  // Name is the principal name.
  Name: string
  // TokenAuthenticated is whether the principal was authenticated by a
  // delegation token on the forwarding broker.
  TokenAuthenticated: bool

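Being flexible v0+, this type uses the compact encodings: strings are prefixed with an unsigned varint of length+1 (a prefix of 0 means null for nullable strings), and the struct ends with tagged fields. A sketch of just the compact string encoding (tagged-field handling omitted):

    package misc

    import "encoding/binary"

    // appendCompactString appends s with a uvarint length+1 prefix, the
    // flexible-version string encoding.
    func appendCompactString(dst []byte, s string) []byte {
        dst = binary.AppendUvarint(dst, uint64(len(s))+1)
        return append(dst, s...)
    }
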
/////////////////////
// CONTROL RECORDS //
/////////////////////

// ControlRecordKey is the key in a control record.
ControlRecordKey => not top level, with version field
  Version: int16
  Type: enum-ControlRecordKeyType

// EndTxnMarker is the value for a control record when the key is type 0 or 1.
EndTxnMarker => not top level, with version field
  Version: int16
  CoordinatorEpoch: int32

// LeaderChangeMessageVoter is a voter in a LeaderChangeMessage.
LeaderChangeMessageVoter => not top level, no encoding, flexible v0+
  VoterID: int32

// LeaderChangeMessage is the value for a control record when the key is type 3.
LeaderChangeMessage => not top level, with version field, flexible v0+
  Version: int16
  // LeaderID is the ID of the newly elected leader.
  LeaderID: int32
  // Voters is the set of voters in the quorum for this epoch.
  Voters: [LeaderChangeMessageVoter]
  // GrantingVoters are the voters who voted for the leader at the time of
  // election.
  GrantingVoters: [LeaderChangeMessageVoter]
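
Tying the control record types together: a client that inspects control batches can dispatch on the key's Type. The sketch below uses only the type values documented above and ignores anything else, which is the safe behavior for unknown future control types.

    package misc

    // controlType classifies a control record key's Type per the values
    // documented above; unknown types should be skipped, not treated as errors.
    func controlType(keyType int16) string {
        switch keyType {
        case 0, 1:
            return "EndTxnMarker" // aborts and commits end a transaction
        case 3:
            return "LeaderChangeMessage"
        default:
            return "unknown" // ignore for forward compatibility
        }
    }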
