
Text file src/github.com/twmb/franz-go/generate/definitions/00_produce

Documentation: github.com/twmb/franz-go/generate/definitions

// ProduceRequest issues records to be created to Kafka.
//
// Kafka 0.10.0 (v2) changed Records from MessageSet v0 to MessageSet v1.
// Kafka 0.11.0 (v3) again changed Records to RecordBatch.
//
// Note that the special client ID "__admin_client" will allow you to produce
// records to internal topics. This is generally recommended if you want to
// break your Kafka cluster.
ProduceRequest => key 0, max version 9, flexible v9+
  // TransactionID is the transaction ID to use for this request, allowing for
  // exactly once semantics.
  TransactionID: nullable-string // v3+
  // Acks specifies the number of acks that the partition leaders must receive
  // from in-sync replicas before considering a record batch fully written.
  //
  // Valid values are -1, 0, or 1 corresponding to all, none, or the leader only.
  //
  // Note that if no acks are requested, Kafka will close the connection
  // if any topic or partition errors to trigger a client metadata refresh.
  Acks: int16
  TimeoutMillis
  // Topics is an array of topics to send record batches to.
  Topics: [=>]
    // Topic is a topic to send record batches to.
    Topic: string
    // Partitions is an array of partitions to send record batches to.
    Partitions: [=>]
      // Partition is a partition to send a record batch to.
      Partition: int32
      // Records is a batch of records to write to a topic's partition.
      //
      // For Kafka pre 0.11.0, the contents of the byte array is a serialized
      // message set. At or after 0.11.0, the contents of the byte array is a
      // serialized RecordBatch.
      Records: nullable-bytes

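As a rough illustration of how this definition reads, the request above maps to Go structs along the following lines. These are hypothetical mirrors for illustration only, not the actual generated types (those live in github.com/twmb/franz-go/pkg/kmsg and handle version-gated encoding):

```go
package main

import "fmt"

// ProduceRequestPartition mirrors one entry of the Partitions array above.
type ProduceRequestPartition struct {
	Partition int32
	Records   []byte // serialized MessageSet (pre-v3) or RecordBatch (v3+)
}

// ProduceRequestTopic mirrors one entry of the Topics array above.
type ProduceRequestTopic struct {
	Topic      string
	Partitions []ProduceRequestPartition
}

// ProduceRequest mirrors the top-level request definition.
type ProduceRequest struct {
	TransactionID *string // v3+; nil means no transaction
	Acks          int16   // -1 (all), 0 (none), or 1 (leader only)
	TimeoutMillis int32
	Topics        []ProduceRequestTopic
}

func main() {
	req := ProduceRequest{
		Acks:          -1, // wait for all in-sync replicas
		TimeoutMillis: 30000,
		Topics: []ProduceRequestTopic{{
			Topic: "events",
			Partitions: []ProduceRequestPartition{{
				Partition: 0,
				Records:   nil, // a real client would place a serialized RecordBatch here
			}},
		}},
	}
	fmt.Println(req.Topics[0].Topic, req.Acks) // prints: events -1
}
```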
// ProduceResponse is returned from a ProduceRequest.
ProduceResponse =>
  // Topics is an array of responses for the topics that batches were sent
  // to.
  Topics: [=>]
    // Topic is the topic this response pertains to.
    Topic: string
    // Partitions is an array of responses for the partitions that
    // batches were sent to.
    Partitions: [=>]
      // Partition is the partition this response pertains to.
      Partition: int32
      // ErrorCode is any error for a topic/partition in the request.
      // There are many error codes for produce requests.
      //
      // TRANSACTIONAL_ID_AUTHORIZATION_FAILED is returned for all topics and
      // partitions if the request had a transactional ID but the client
      // is not authorized for transactions.
      //
      // CLUSTER_AUTHORIZATION_FAILED is returned for all topics and partitions
      // if the request was idempotent but the client is not authorized
      // for idempotent requests.
      //
      // TOPIC_AUTHORIZATION_FAILED is returned for all topics the client
      // is not authorized to talk to.
      //
      // INVALID_REQUIRED_ACKS is returned if the request contained an invalid
      // number for "acks".
      //
      // CORRUPT_MESSAGE is returned for many reasons, generally related to
      // problems with messages (invalid magic, size mismatch, etc.).
      //
      // MESSAGE_TOO_LARGE is returned if a record batch is larger than the
      // broker's configured message.max.bytes.
      //
      // RECORD_LIST_TOO_LARGE is returned if the record batch is larger than
      // the broker's segment.bytes.
      //
      // INVALID_TIMESTAMP is returned if the record batch uses LogAppendTime
      // or if the timestamp delta from when the broker receives the message
      // is more than the broker's log.message.timestamp.difference.max.ms.
      //
      // UNSUPPORTED_FOR_MESSAGE_FORMAT is returned if using a Kafka v2 message
      // format (i.e. RecordBatch) feature (idempotence) while sending v1
      // messages (i.e. a MessageSet).
      //
      // KAFKA_STORAGE_ERROR is returned if the log directory for a partition
      // is offline.
      //
      // NOT_ENOUGH_REPLICAS is returned if all acks are required, but there
      // are not enough in-sync replicas yet.
      //
      // NOT_ENOUGH_REPLICAS_AFTER_APPEND is returned on old Kafka versions
      // (pre 0.11.0.0) when a message was written to disk and then Kafka
      // noticed not enough replicas existed to replicate the message.
      //
      // DUPLICATE_SEQUENCE_NUMBER is returned for Kafka <1.1.0 when a
      // sequence number is detected as a duplicate. On later versions,
      // OUT_OF_ORDER_SEQUENCE_NUMBER is returned instead.
      //
      // UNKNOWN_TOPIC_OR_PARTITION is returned if the topic or partition
      // is unknown.
      //
      // NOT_LEADER_FOR_PARTITION is returned if the broker is not a leader
      // for this partition. This means that the client has stale metadata.
      //
      // INVALID_PRODUCER_EPOCH is returned if the produce request was
      // attempted with an old epoch. Either there is a newer producer using
      // the same transaction ID, or the transaction ID used has expired.
      //
      // UNKNOWN_PRODUCER_ID, added in Kafka 1.0.0 (message format v5+), is
      // returned if the producer used an ID that Kafka does not know about or
      // if the request has a larger sequence number than Kafka expects. The
      // LogStartOffset must be checked in this case. If the offset is greater
      // than the last acknowledged offset, then no data loss has occurred; the
      // client just sent data so long ago that Kafka rotated the partition out
      // of existence and no longer knows of this producer ID. In this case,
      // reset your sequence numbers to 0. If the log start offset is equal to
      // or less than what the client sent prior, then data loss has occurred.
      // See KAFKA-5793 for more details. NOTE: Unfortunately, even UNKNOWN_PRODUCER_ID
      // is unsafe to handle, so this error should likely be treated the same
      // as OUT_OF_ORDER_SEQUENCE_NUMBER. See KIP-360 for more details.
      //
      // OUT_OF_ORDER_SEQUENCE_NUMBER is sent if the batch's FirstSequence was
      // not what it should be (the previous batch's FirstSequence plus its
      // record count; equivalently, its last sequence number plus one). After
      // 1.0.0, this generally means data loss. Before 1.0.0, there could be
      // confusion as to whether the broker actually rotated the partition out
      // of existence (this is why UNKNOWN_PRODUCER_ID was introduced).
      ErrorCode: int16
      // BaseOffset is the offset that the records in the produce request began
      // at in the partition.
      BaseOffset: int64
      // LogAppendTime is the millisecond that records were appended to the
      // partition inside Kafka. This is -1 unless records were written with
      // the log append time flag (which producers themselves cannot set).
      LogAppendTime: int64(-1) // v2+
      // LogStartOffset, introduced in Kafka 1.0.0, can be used to see if an
      // UNKNOWN_PRODUCER_ID means Kafka rotated records containing the used
      // producer ID out of existence, or if Kafka lost data.
      LogStartOffset: int64(-1) // v5+
      // ErrorRecords are indices of individual records that caused a batch
      // to error. This was added for KIP-467.
      ErrorRecords: [=>] // v8+
        // RelativeOffset is the offset of the record that caused problems.
        RelativeOffset: int32
        // ErrorMessage is the error of this record.
        ErrorMessage: nullable-string
      // ErrorMessage is the global error message of what caused this batch
      // to error.
      ErrorMessage: nullable-string // v8+
  ThrottleMillis(6) // v1+
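The UNKNOWN_PRODUCER_ID and OUT_OF_ORDER_SEQUENCE_NUMBER comments above describe two pieces of client-side bookkeeping that can be sketched in Go. The function names below are hypothetical (not franz-go APIs), and per KIP-360 the LogStartOffset check alone is not a safe recovery strategy:

```go
package main

import "fmt"

// nextFirstSequence computes the FirstSequence the broker expects for the
// next batch: the previous batch's FirstSequence plus its record count.
// Sequence numbers wrap past MaxInt32 back through 0.
func nextFirstSequence(lastFirstSeq, lastBatchRecords int32) int32 {
	next := int64(lastFirstSeq) + int64(lastBatchRecords)
	return int32(next % (int64(1) << 31)) // wrap at the int32 boundary
}

// dataLossOnUnknownProducerID applies the LogStartOffset check described for
// UNKNOWN_PRODUCER_ID: if the log start offset is greater than the last
// acknowledged offset, the partition simply rotated the producer's records
// out of existence (reset sequences to 0); otherwise, data was lost.
func dataLossOnUnknownProducerID(logStartOffset, lastAckedOffset int64) bool {
	return logStartOffset <= lastAckedOffset
}

func main() {
	fmt.Println(nextFirstSequence(2147483647, 1))   // prints: 0
	fmt.Println(dataLossOnUnknownProducerID(50, 100)) // prints: true
}
```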
