BenchResults records features and results of a benchmark run. A collection of these structs is usually serialized and written to a file after a benchmark execution, and can later be read back for pretty-printing or comparison with other benchmark results.
type BenchResults struct {
    // GoVersion is the version of the compiler the benchmark was compiled
    // with.
    GoVersion string
    // GrpcVersion is the gRPC version being benchmarked.
    GrpcVersion string
    // RunMode is the workload mode for this benchmark run. This could be
    // unary, stream or unconstrained.
    RunMode string
    // Features represents the configured feature options for this run.
    Features Features
    // SharedFeatures represents the features which were shared across all
    // benchmark runs during one execution. It is a slice indexed by
    // 'FeatureIndex' and a value of true indicates that the associated
    // feature is shared across all runs.
    SharedFeatures []bool
    // Data contains the statistical data of interest from the benchmark run.
    Data RunData
}
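Since a collection of BenchResults is meant to be serialized to a file and read back later, a minimal persistence sketch is shown below. It uses encoding/gob (the gob encoder is also referenced by PayloadCurve further down); the import path google.golang.org/grpc/benchmark/stats and the file name are assumptions made for illustration.

// Sketch: persisting and re-loading a slice of BenchResults with encoding/gob.
package main

import (
    "encoding/gob"
    "log"
    "os"

    "google.golang.org/grpc/benchmark/stats"
)

// saveResults writes all collected results to a file.
func saveResults(results []stats.BenchResults, path string) error {
    f, err := os.Create(path)
    if err != nil {
        return err
    }
    defer f.Close()
    return gob.NewEncoder(f).Encode(results)
}

// loadResults reads previously written results back, e.g. for comparison.
func loadResults(path string) ([]stats.BenchResults, error) {
    f, err := os.Open(path)
    if err != nil {
        return nil, err
    }
    defer f.Close()
    var results []stats.BenchResults
    if err := gob.NewDecoder(f).Decode(&results); err != nil {
        return nil, err
    }
    return results, nil
}

func main() {
    results, err := loadResults("benchresults.bin") // illustrative file name
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("loaded %d benchmark runs", len(results))
}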
FeatureIndex is an enum for features that usually differ across individual benchmark runs in a single execution. These are usually configured by the user through command line flags.
type FeatureIndex int
FeatureIndex enum values corresponding to individually settable features.
const (
    EnableTraceIndex FeatureIndex = iota
    ReadLatenciesIndex
    ReadKbpsIndex
    ReadMTUIndex
    MaxConcurrentCallsIndex
    ReqSizeBytesIndex
    RespSizeBytesIndex
    ReqPayloadCurveIndex
    RespPayloadCurveIndex
    CompModesIndex
    EnableChannelzIndex
    EnablePreloaderIndex
    ClientReadBufferSize
    ClientWriteBufferSize
    ServerReadBufferSize
    ServerWriteBufferSize
    SleepBetweenRPCs
    RecvBufferPool

    // MaxFeatureIndex is a placeholder to indicate the total number of
    // feature indices we have. Any new feature indices should be added
    // above this.
    MaxFeatureIndex
)
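Code that works with these indices typically builds a bool slice of length MaxFeatureIndex and flags the positions it cares about. A minimal sketch follows; the flagged indices are illustrative, and the package is assumed to be imported as stats.

// Sketch: a feature mask indexed by FeatureIndex, sized with MaxFeatureIndex.
func varyingFeatures() []bool {
    wantFeatures := make([]bool, stats.MaxFeatureIndex)
    wantFeatures[stats.MaxConcurrentCallsIndex] = true
    wantFeatures[stats.ReqSizeBytesIndex] = true
    return wantFeatures
}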
Features represents the configured options for a specific benchmark run. This is usually constructed from command line arguments passed by the caller. See benchmark/benchmain/main.go for defined command line flags. This is also part of the BenchResults struct, which is serialized and written to a file.
type Features struct {
    // NetworkMode is the network mode used for this benchmark run. Could be
    // one of Local, LAN, WAN or Longhaul.
    NetworkMode string
    // UseBufConn indicates whether an in-memory connection was used for this
    // benchmark run instead of system network I/O.
    UseBufConn bool
    // EnableKeepalive indicates if keepalives were enabled on the
    // connections used in this benchmark run.
    EnableKeepalive bool
    // BenchTime indicates the duration of the benchmark run.
    BenchTime time.Duration
    // Connections configures the number of gRPC connections between client
    // and server.
    Connections int
    // EnableTrace indicates if tracing was enabled.
    EnableTrace bool
    // Latency is the simulated one-way network latency used.
    Latency time.Duration
    // Kbps is the simulated network throughput used.
    Kbps int
    // MTU is the simulated network MTU used.
    MTU int
    // MaxConcurrentCalls is the number of concurrent RPCs made during this
    // benchmark run.
    MaxConcurrentCalls int
    // ReqSizeBytes is the request size in bytes used in this benchmark run.
    // Unused if ReqPayloadCurve is non-nil.
    ReqSizeBytes int
    // RespSizeBytes is the response size in bytes used in this benchmark
    // run. Unused if RespPayloadCurve is non-nil.
    RespSizeBytes int
    // ReqPayloadCurve is a histogram representing the shape a random
    // distribution of request payload sizes should take.
    ReqPayloadCurve *PayloadCurve
    // RespPayloadCurve is a histogram representing the shape a random
    // distribution of response payload sizes should take.
    RespPayloadCurve *PayloadCurve
    // ModeCompressor represents the compressor mode used.
    ModeCompressor string
    // EnableChannelz indicates if channelz was turned on.
    EnableChannelz bool
    // EnablePreloader indicates if preloading was turned on.
    EnablePreloader bool
    // ClientReadBufferSize is the size of the client read buffer in bytes.
    // If negative, the default buffer size is used.
    ClientReadBufferSize int
    // ClientWriteBufferSize is the size of the client write buffer in bytes.
    // If negative, the default buffer size is used.
    ClientWriteBufferSize int
    // ServerReadBufferSize is the size of the server read buffer in bytes.
    // If negative, the default buffer size is used.
    ServerReadBufferSize int
    // ServerWriteBufferSize is the size of the server write buffer in bytes.
    // If negative, the default buffer size is used.
    ServerWriteBufferSize int
    // SleepBetweenRPCs configures an optional delay between RPCs.
    SleepBetweenRPCs time.Duration
    // RecvBufferPool represents the shared receive buffer pool used.
    RecvBufferPool string
    // SharedWriteBuffer indicates whether the client and server share a
    // per-connection write buffer.
    SharedWriteBuffer bool
}
func (f Features) PrintableName(wantFeatures []bool) string
PrintableName returns a one-line name which includes the features specified by 'wantFeatures', a bool slice of wanted features indexed by FeatureIndex.
func (f Features) SharedFeatures(wantFeatures []bool) string
SharedFeatures returns the shared features as a pretty-printable string. 'wantFeatures' is a bool slice of wanted features, indexed by FeatureIndex.
func (f Features) String() string
String returns all the feature values as a string.
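A short sketch of printing a Features value follows. The field values are illustrative and the import path google.golang.org/grpc/benchmark/stats is an assumption.

// Sketch: printing a Features value in full and as a one-line name.
package main

import (
    "fmt"
    "time"

    "google.golang.org/grpc/benchmark/stats"
)

func main() {
    f := stats.Features{
        NetworkMode:        "Local",
        BenchTime:          10 * time.Second,
        MaxConcurrentCalls: 64,
        ReqSizeBytes:       1024,
        RespSizeBytes:      1024,
    }

    // String prints every feature value.
    fmt.Println(f.String())

    // PrintableName only includes the features flagged in wantFeatures,
    // which is indexed by FeatureIndex.
    wantFeatures := make([]bool, stats.MaxFeatureIndex)
    wantFeatures[stats.MaxConcurrentCallsIndex] = true
    fmt.Println(f.PrintableName(wantFeatures))
}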
Histogram accumulates values in the form of a histogram with exponentially increased bucket sizes.
type Histogram struct {
    // Count is the total number of values added to the histogram.
    Count int64
    // Sum is the sum of all the values added to the histogram.
    Sum int64
    // SumOfSquares is the sum of squares of all values.
    SumOfSquares int64
    // Min is the minimum of all the values added to the histogram.
    Min int64
    // Max is the maximum of all the values added to the histogram.
    Max int64
    // Buckets contains all the buckets of the histogram.
    Buckets []HistogramBucket
    // contains filtered or unexported fields
}
func NewHistogram(opts HistogramOptions) *Histogram
NewHistogram returns a pointer to a new Histogram object that was created with the provided options.
func (h *Histogram) Add(value int64) error
Add adds a value to the histogram.
func (h *Histogram) Clear()
Clear resets all the content of the histogram.
func (h *Histogram) Merge(h2 *Histogram)
Merge takes another histogram h2, and merges its content into h. The two histograms must have been created with equivalent HistogramOptions.
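A small sketch of merging a per-worker histogram into an aggregate, where both were created with the same options; the option values are illustrative and the package is assumed to be imported as stats.

// Sketch: merging histograms that share equivalent HistogramOptions.
func mergedHistogram() *stats.Histogram {
    opts := stats.HistogramOptions{
        NumBuckets:     16,
        GrowthFactor:   1.0, // each bucket is twice as wide as the previous one
        BaseBucketSize: 1,
        MinValue:       0,
    }
    total := stats.NewHistogram(opts)
    worker := stats.NewHistogram(opts)
    _ = worker.Add(42)  // Add reports an error for values it cannot bucket.
    total.Merge(worker) // valid because both histograms use the same options
    return total
}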
func (h *Histogram) Opts() HistogramOptions
Opts returns a copy of the options used to create the Histogram.
func (h *Histogram) Print(w io.Writer)
Print writes textual output of the histogram values.
func (h *Histogram) PrintWithUnit(w io.Writer, unit float64)
PrintWithUnit writes textual output of the histogram values. Data in the histogram is divided by unit before printing.
func (h *Histogram) String() string
String returns the textual output of the histogram values as a string.
HistogramBucket represents one histogram bucket.
type HistogramBucket struct {
    // LowBound is the lower bound of the bucket.
    LowBound float64
    // Count is the number of values in the bucket.
    Count int64
}
HistogramOptions contains the parameters that define the histogram's buckets. The first bucket of the created histogram (with index 0) contains [min, min+n) where n = BaseBucketSize, min = MinValue. Bucket i (i>=1) contains [min + n * m^(i-1), min + n * m^i), where m = 1+GrowthFactor. The type of the values is int64.
type HistogramOptions struct {
    // NumBuckets is the number of buckets.
    NumBuckets int
    // GrowthFactor is the growth factor of the buckets. A value of 0.1
    // indicates that bucket N+1 will be 10% larger than bucket N.
    GrowthFactor float64
    // BaseBucketSize is the size of the first bucket.
    BaseBucketSize float64
    // MinValue is the lower bound of the first bucket.
    MinValue int64
}
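As a worked example of that bucket layout: with MinValue = 0, BaseBucketSize = 1 and GrowthFactor = 1 (so m = 2), bucket 0 covers [0, 1), bucket 1 covers [1, 2), bucket 2 covers [2, 4), and in general bucket i (i >= 1) covers [2^(i-1), 2^i). The sketch below records latencies in nanoseconds and prints them in milliseconds via PrintWithUnit; all option values are illustrative and the import path is an assumption.

// Sketch: a latency histogram keyed in nanoseconds, printed in milliseconds.
package main

import (
    "log"
    "os"
    "time"

    "google.golang.org/grpc/benchmark/stats"
)

func main() {
    h := stats.NewHistogram(stats.HistogramOptions{
        NumBuckets:     32,
        GrowthFactor:   1.0, // each bucket doubles in width
        BaseBucketSize: float64(time.Microsecond),
        MinValue:       0,
    })
    for _, d := range []time.Duration{2 * time.Millisecond, 5 * time.Millisecond, 7 * time.Millisecond} {
        if err := h.Add(int64(d)); err != nil {
            log.Fatal(err) // the value could not be placed in any bucket
        }
    }
    // Divide the recorded nanosecond values by time.Millisecond when printing.
    h.PrintWithUnit(os.Stdout, float64(time.Millisecond))
}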
PayloadCurve is an internal representation of a weighted random distribution CSV file. Once a *PayloadCurve is created with NewPayloadCurve, the ChooseRandom method should be called to generate random payload sizes.
type PayloadCurve struct {
    // Sha256 must be a public field so that the gob encoder can write it to
    // disk. This will be needed at decode-time by the Hash function.
    Sha256 string
    // contains filtered or unexported fields
}
func NewPayloadCurve(file string) (*PayloadCurve, error)
NewPayloadCurve parses a .csv file and returns a *PayloadCurve if no errors were encountered in parsing and initialization.
func (pc *PayloadCurve) ChooseRandom() int
ChooseRandom picks a random payload size (in bytes) that follows the underlying weighted random distribution.
func (pc *PayloadCurve) Hash() string
Hash returns a string uniquely identifying a payload curve file for feature matching purposes.
func (pc *PayloadCurve) ShortHash() string
ShortHash returns a shortened version of Hash for display purposes.
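A minimal sketch of driving payload sizes from a curve file follows; the file name is illustrative and the import path is an assumption.

// Sketch: drawing payload sizes from a weighted-distribution CSV file.
package main

import (
    "fmt"
    "log"

    "google.golang.org/grpc/benchmark/stats"
)

func main() {
    pc, err := stats.NewPayloadCurve("reqcurve.csv") // illustrative file name
    if err != nil {
        log.Fatalf("failed to parse payload curve: %v", err)
    }
    for i := 0; i < 5; i++ {
        // Each call returns a payload size in bytes drawn from the curve.
        fmt.Println(pc.ChooseRandom())
    }
    // ShortHash is handy for logging which curve file was in effect.
    fmt.Println("curve:", pc.ShortHash())
}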
RunData contains statistical data of interest from a benchmark run.
type RunData struct {
    // TotalOps is the number of operations executed during this benchmark
    // run. Only makes sense for unary and streaming workloads.
    TotalOps uint64
    // SendOps is the number of send operations executed during this
    // benchmark run. Only makes sense for unconstrained workloads.
    SendOps uint64
    // RecvOps is the number of receive operations executed during this
    // benchmark run. Only makes sense for unconstrained workloads.
    RecvOps uint64
    // AllocedBytes is the average memory allocation in bytes per operation.
    AllocedBytes float64
    // Allocs is the average number of memory allocations per operation.
    Allocs float64
    // ReqT is the average request throughput associated with this run.
    ReqT float64
    // RespT is the average response throughput associated with this run.
    RespT float64
    // Fiftieth is the 50th percentile latency.
    Fiftieth time.Duration
    // Ninetieth is the 90th percentile latency.
    Ninetieth time.Duration
    // NinetyNinth is the 99th percentile latency.
    NinetyNinth time.Duration
    // Average is the average latency.
    Average time.Duration
}
Stats is a helper for gathering statistics about individual benchmark runs.
type Stats struct {
// contains filtered or unexported fields
}
func NewStats(numBuckets int) *Stats
NewStats creates a new Stats instance. If numBuckets is not positive, the default value (16) will be used.
func (s *Stats) AddDuration(d time.Duration)
AddDuration adds an elapsed duration per operation to the stats. This is used by unary and stream modes where request and response stats are equal.
func (s *Stats) EndRun(count uint64)
EndRun is to be invoked to indicate the end of the ongoing benchmark run. It computes the statistics for the run and writes them to stdout.
func (s *Stats) EndUnconstrainedRun(req uint64, resp uint64)
EndUnconstrainedRun is similar to EndRun, but is to be used for unconstrained workloads.
func (s *Stats) GetResults() []BenchResults
GetResults returns the results from all benchmark runs.
func (s *Stats) StartRun(mode string, f Features, sf []bool)
StartRun is to be invoked to indicate the start of a new benchmark run.
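Putting the Stats helper together, a minimal sketch of recording a single run might look like the following. The mode name, Features values, and per-operation durations are illustrative, and the import path is an assumption.

// Sketch: recording one benchmark run with Stats.
package main

import (
    "fmt"
    "time"

    "google.golang.org/grpc/benchmark/stats"
)

func main() {
    s := stats.NewStats(0) // non-positive numBuckets falls back to the default (16)

    f := stats.Features{BenchTime: time.Second, MaxConcurrentCalls: 1}
    shared := make([]bool, stats.MaxFeatureIndex) // no features marked as shared here

    s.StartRun("unary", f, shared)
    var ops uint64
    for _, d := range []time.Duration{120 * time.Microsecond, 150 * time.Microsecond} {
        s.AddDuration(d) // one elapsed duration per completed operation
        ops++
    }
    s.EndRun(ops) // computes the run's stats and writes them to stdout

    for _, r := range s.GetResults() {
        fmt.Println(r.RunMode, r.Data.TotalOps)
    }
}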