const BAILOUT_MESSAGE = `Ginkgo detected an issue while intercepting output.
When running in parallel, Ginkgo captures stdout and stderr output
and attaches it to the running spec. It looks like that process is getting
stuck for this suite.
This usually happens if you, or a library you are using, spin up an external
process and set cmd.Stdout = os.Stdout and/or cmd.Stderr = os.Stderr. This
causes the external process to keep Ginkgo's output interceptor pipe open and
causes output interception to hang.
Ginkgo has detected this and shortcircuited the capture process. The specs
will continue running after this message however output from the external
process that caused this issue will not be captured.
You have several options to fix this. In preferred order they are:
1. Pass GinkgoWriter instead of os.Stdout or os.Stderr to your process.
2. Ensure your process exits before the current spec completes. If your
process is long-lived and must cross spec boundaries, this option won't
work for you.
3. Pause Ginkgo's output interceptor before starting your process and then
resume it after. Use PauseOutputInterception() and ResumeOutputInterception()
to do this.
4. Set --output-interceptor-mode=none when running your Ginkgo suite. This will
turn off all output interception but allow specs to run in parallel without this
issue. You may miss important output if you do this including output from Go's
race detector.
More details on issue #851 - https://github.com/onsi/ginkgo/issues/851
`
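For context, here is a minimal spec-level sketch of options 1 and 3 from the message above. The package name and the `echo`/`sleep` commands are placeholders; `GinkgoWriter`, `PauseOutputInterception`, and `ResumeOutputInterception` are part of the public ginkgo/v2 DSL referenced by the message.

```go
package external_process_test

import (
	"os"
	"os/exec"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = It("talks to an external process without hanging the interceptor", func() {
	// Option 1: hand the child process GinkgoWriter instead of os.Stdout/os.Stderr,
	// so it never holds the interceptor's pipe open.
	cmd := exec.Command("echo", "hello")
	cmd.Stdout = GinkgoWriter
	cmd.Stderr = GinkgoWriter
	Expect(cmd.Run()).To(Succeed())

	// Option 3: if the process must write to the real os.Stdout, pause interception
	// while it starts and resume afterwards.
	PauseOutputInterception()
	longLived := exec.Command("sleep", "60")
	longLived.Stdout = os.Stdout
	Expect(longLived.Start()).To(Succeed())
	ResumeOutputInterception()
})
```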
const BAILOUT_TIME = 1 * time.Second
const ContinueOnFailure = continueOnFailureType(true)
const Focus = focusType(true)
const OncePerOrdered = honorsOrderedType(true)
const Ordered = orderedType(true)
const Pending = pendingType(true)
const Serial = serialType(true)
const SuppressProgressReporting = suppressProgressReporting(true)
var PROGRESS_REPORTER_DEADLING = 5 * time.Second
var PROGRESS_SIGNALS = []os.Signal{syscall.SIGUSR1}
func ApplyNestedFocusPolicyToTree(tree *TreeNode)
If a container marked as focus has a descendant that is also marked as focus, Ginkgo's policy is to unmark the container's focus. This gives developers a more intuitive experience when debugging specs. It is common to focus a container in order to run just a subset of specs, and then to identify the specific specs within that container to focus - this policy lets the developer focus those specific specs directly without having to go back and remove the focus from the container.
As a common example, consider:
	FDescribe("something to debug", func() {
		It("works", func() { ... })
		It("works", func() { ... })
		FIt("doesn't work", func() { ... })
		It("works", func() { ... })
	})
here the developer's intent is to focus in on the `"doesn't work"` spec and not to run the adjacent specs in the focused `"something to debug"` container. The nested policy applied by this function enables this behavior.
func GinkgoLogrFunc(writer *Writer) logr.Logger
func MakeIncrementingIndexCounter() func() (int, error)
func NewProgressReport(isRunningInParallel bool, report types.SpecReport, currentNode Node, currentNodeStartTime time.Time, currentStep types.SpecEvent, gwOutput string, timelineLocation types.TimelineLocation, additionalReports []string, sourceRoots []string, includeAll bool) (types.ProgressReport, error)
func NewSpecContext(suite *Suite) *specContext
SpecContext includes a reference to `suite` and stores itself in the context as a "GINKGO_SPEC_CONTEXT" value. This allows users to create child Contexts without downstream consumers (e.g. Gomega) losing access to the SpecContext and its methods, and it allows us to build extensions on top of Ginkgo that simply take an all-encompassing context.
Note that while Ginkgo uses SpecContext to enforce deadlines, SpecContext is not configured as a context.WithDeadline. Instead, Ginkgo owns responsibility for cancelling the context when the deadline elapses.
This is because Ginkgo needs finer control over when the context is canceled. Specifically, Ginkgo needs to generate a ProgressReport before it cancels the context to ensure progress is captured where the spec is currently running. The only way to avoid a race here is to manually control the cancellation.
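As an illustration of what the "GINKGO_SPEC_CONTEXT" value enables, here is a sketch of how a downstream helper might recover the SpecContext from any derived context. `attachReporterIfPossible` is a hypothetical name, and the anonymous interface simply mirrors the `AttachProgressReporter` method listed on `SpecContext` below.

```go
package example

import "context"

// attachReporterIfPossible (hypothetical) digs the SpecContext out of any context
// derived from it and, if found, attaches a progress reporter. The returned func
// detaches the reporter; it is a no-op when not running under a Ginkgo spec.
func attachReporterIfPossible(ctx context.Context, report func() string) func() {
	v := ctx.Value("GINKGO_SPEC_CONTEXT")
	if v == nil {
		return func() {}
	}
	sc, ok := v.(interface{ AttachProgressReporter(func() string) func() })
	if !ok {
		return func() {}
	}
	return sc.AttachProgressReporter(report)
}
```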
func OrderSpecs(specs Specs, suiteConfig types.SuiteConfig) (GroupedSpecIndices, GroupedSpecIndices)
func PartitionDecorations(args ...interface{}) ([]interface{}, []interface{})
func RegisterForProgressSignal(handler func()) context.CancelFunc
func UniqueNodeID() uint
type Done chan<- interface{} // Deprecated Done Channel for asynchronous testing
type Failer struct {
// contains filtered or unexported fields
}
func NewFailer() *Failer
func (f *Failer) AbortSuite(message string, location types.CodeLocation)
func (f *Failer) Drain() (types.SpecState, types.Failure)
func (f *Failer) Fail(message string, location types.CodeLocation)
func (f *Failer) GetFailure() types.Failure
func (f *Failer) GetState() types.SpecState
func (f *Failer) Panic(location types.CodeLocation, forwardedPanic interface{})
func (f *Failer) Skip(message string, location types.CodeLocation)
type FlakeAttempts uint
type GracePeriod time.Duration
type GroupedSpecIndices []SpecIndices
type Labels []string
func UnionOfLabels(labels ...Labels) Labels
func (l Labels) MatchesLabelFilter(query string) bool
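A small sketch of how these two helpers compose, assuming Ginkgo's usual boolean label-filter query syntax; `labelFilterDemo` is an invented name.

```go
package internal

// labelFilterDemo (hypothetical) combines the labels of nested containers and
// evaluates a label-filter query against the result.
func labelFilterDemo() bool {
	specLabels := UnionOfLabels(
		Labels{"integration", "slow"},
		Labels{"network"},
	)
	// "integration && !flaky" matches: "integration" is present, "flaky" is not.
	return specLabels.MatchesLabelFilter("integration && !flaky")
}
```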
type MustPassRepeatedly uint
type Node struct {
	ID           uint
	NodeType     types.NodeType
	Text         string
	Body         func(SpecContext)
	CodeLocation types.CodeLocation
	NestingLevel int
	HasContext   bool

	SynchronizedBeforeSuiteProc1Body              func(SpecContext) []byte
	SynchronizedBeforeSuiteProc1BodyHasContext    bool
	SynchronizedBeforeSuiteAllProcsBody           func(SpecContext, []byte)
	SynchronizedBeforeSuiteAllProcsBodyHasContext bool

	SynchronizedAfterSuiteAllProcsBody           func(SpecContext)
	SynchronizedAfterSuiteAllProcsBodyHasContext bool
	SynchronizedAfterSuiteProc1Body              func(SpecContext)
	SynchronizedAfterSuiteProc1BodyHasContext    bool

	ReportEachBody  func(SpecContext, types.SpecReport)
	ReportSuiteBody func(SpecContext, types.Report)

	MarkedFocus             bool
	MarkedPending           bool
	MarkedSerial            bool
	MarkedOrdered           bool
	MarkedContinueOnFailure bool
	MarkedOncePerOrdered    bool
	FlakeAttempts           int
	MustPassRepeatedly      int
	Labels                  Labels
	PollProgressAfter       time.Duration
	PollProgressInterval    time.Duration
	NodeTimeout             time.Duration
	SpecTimeout             time.Duration
	GracePeriod             time.Duration

	NodeIDWhereCleanupWasGenerated uint
}
func NewCleanupNode(deprecationTracker *types.DeprecationTracker, fail func(string, types.CodeLocation), args ...interface{}) (Node, []error)
func NewNode(deprecationTracker *types.DeprecationTracker, nodeType types.NodeType, text string, args ...interface{}) (Node, []error)
func (n Node) IsZero() bool
type NodeTimeout time.Duration
Nodes
type Nodes []Node
func (n Nodes) BestTextFor(node Node) string
func (n Nodes) Clone() Nodes
func (n Nodes) CodeLocations() []types.CodeLocation
func (n Nodes) ContainsNodeID(id uint) bool
func (n Nodes) CopyAppend(nodes ...Node) Nodes
func (n Nodes) Filter(filter func(Node) bool) Nodes
func (n Nodes) FirstNodeMarkedOrdered() Node
func (n Nodes) FirstNodeWithType(nodeTypes types.NodeType) Node
func (n Nodes) FirstSatisfying(filter func(Node) bool) Node
func (n Nodes) FirstWithNestingLevel(level int) Node
func (n Nodes) GetMaxFlakeAttempts() int
func (n Nodes) GetMaxMustPassRepeatedly() int
func (n Nodes) HasNodeMarkedFocus() bool
func (n Nodes) HasNodeMarkedPending() bool
func (n Nodes) HasNodeMarkedSerial() bool
func (n Nodes) IndexOfFirstNodeMarkedOrdered() int
func (n Nodes) Labels() [][]string
func (n Nodes) Reverse() Nodes
func (n Nodes) SortedByAscendingNestingLevel() Nodes
func (n Nodes) SortedByDescendingNestingLevel() Nodes
func (n Nodes) SplitAround(pivot Node) (Nodes, Nodes)
func (n Nodes) Texts() []string
func (n Nodes) UnionOfLabels() []string
func (n Nodes) WithType(nodeTypes types.NodeType) Nodes
func (n Nodes) WithinNestingLevel(deepestNestingLevel int) Nodes
func (n Nodes) WithoutNode(nodeToExclude Node) Nodes
func (n Nodes) WithoutType(nodeTypes types.NodeType) Nodes
type NoopOutputInterceptor struct{}
func (interceptor NoopOutputInterceptor) PauseIntercepting()
func (interceptor NoopOutputInterceptor) ResumeIntercepting()
func (interceptor NoopOutputInterceptor) Shutdown()
func (interceptor NoopOutputInterceptor) StartInterceptingOutput()
func (interceptor NoopOutputInterceptor) StartInterceptingOutputAndForwardTo(io.Writer)
func (interceptor NoopOutputInterceptor) StopInterceptingAndReturnOutput() string
type Offset uint
The OutputInterceptor is used to intercept and capture all stdout and stderr output during a test run.
type OutputInterceptor interface {
	StartInterceptingOutput()
	StartInterceptingOutputAndForwardTo(io.Writer)
	StopInterceptingAndReturnOutput() string
	PauseIntercepting()
	ResumeIntercepting()
	Shutdown()
}
func NewOSGlobalReassigningOutputInterceptor() OutputInterceptor
This is used on Windows builds but is included here so it can also be tested explicitly on Unix systems.
func NewOutputInterceptor() OutputInterceptor
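A sketch of the interceptor lifecycle using only the methods in the interface above; `interceptorLifecycleDemo` is an invented name, and in real runs Ginkgo's suite drives these calls.

```go
package internal

import (
	"fmt"
	"os"
)

// interceptorLifecycleDemo (hypothetical) captures stdout/stderr, pauses capture
// around output that should go straight to the terminal, then returns what was
// captured and shuts the interceptor down.
func interceptorLifecycleDemo() string {
	interceptor := NewOutputInterceptor()
	interceptor.StartInterceptingOutput()

	fmt.Fprintln(os.Stdout, "captured and attached to the running spec")

	interceptor.PauseIntercepting()
	fmt.Fprintln(os.Stdout, "not captured - written directly to the terminal")
	interceptor.ResumeIntercepting()

	output := interceptor.StopInterceptingAndReturnOutput()
	interceptor.Shutdown()
	return output
}
```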
type Phase uint
const (
	PhaseBuildTopLevel Phase = iota
	PhaseBuildTree
	PhaseRun
)
type PollProgressAfter time.Duration
type PollProgressInterval time.Duration
type ProgressReporterManager struct {
// contains filtered or unexported fields
}
func NewProgressReporterManager() *ProgressReporterManager
func (prm *ProgressReporterManager) AttachProgressReporter(reporter func() string) func()
func (prm *ProgressReporterManager) QueryProgressReporters(ctx context.Context, failer *Failer) []string
type ProgressSignalRegistrar func(func()) context.CancelFunc
type ProgressStepCursor struct {
	Text         string
	CodeLocation types.CodeLocation
	StartTime    time.Time
}
type ReportEntry = types.ReportEntry
func NewReportEntry(name string, cl types.CodeLocation, args ...interface{}) (ReportEntry, error)
type SortableSpecs struct {
	Specs   Specs
	Indexes []int
}
func NewSortableSpecs(specs Specs) *SortableSpecs
func (s *SortableSpecs) Len() int
func (s *SortableSpecs) Less(i, j int) bool
func (s *SortableSpecs) Swap(i, j int)
type Spec struct {
	Nodes Nodes
	Skip  bool
}
func (s Spec) FirstNodeWithType(nodeTypes types.NodeType) Node
func (s Spec) FlakeAttempts() int
func (s Spec) MustPassRepeatedly() int
func (s Spec) SpecTimeout() time.Duration
func (s Spec) SubjectID() uint
func (s Spec) Text() string
type SpecContext interface {
	context.Context

	SpecReport() types.SpecReport
	AttachProgressReporter(func() string) func()
}
type SpecIndices []int
type SpecTimeout time.Duration
type Specs []Spec
func ApplyFocusToSpecs(specs Specs, description string, suiteLabels Labels, suiteConfig types.SuiteConfig) (Specs, bool)
Ginkgo supports focusing specs using `FIt`, `FDescribe`, etc. - this is called "programmatic focus". It also supports focusing specs using regular expressions on the command line (`-focus=`, `-skip=`) that match against spec text, and file filters (`-focus-files=`, `-skip-files=`) that match against the code locations of nodes in specs.
When both programmatic and file filters are provided, their results are ANDed together. If multiple kinds of filters are provided, the file filters run first, followed by the regex filters.
This function sets the `Skip` property on specs by applying Ginkgo's focus policy:
- If there are no CLI arguments and no programmatic focus, do nothing.
- If a spec somewhere has programmatic focus, skip any specs that have no programmatic focus.
- If there are CLI arguments, parse them and skip any specs that either don't match the focus filters or do match the skip filters.
*Note:* specs with pending nodes are Skipped when created by NewSpec.
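As a user-level illustration of programmatic focus (a sketch; the container and spec texts are invented), only the `FIt` below runs, and any CLI focus/skip filters are then ANDed on top when provided.

```go
package checkout_test

import . "github.com/onsi/ginkgo/v2"

var _ = Describe("checkout", func() {
	It("applies tax", func() {
		// skipped: a spec elsewhere has programmatic focus
	})
	FIt("applies discounts", func() {
		// runs: this spec is programmatically focused
	})
})
```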
func GenerateSpecsFromTreeRoot(tree *TreeNode) Specs
func (s Specs) AtIndices(indices SpecIndices) Specs
func (s Specs) CountWithoutSkip() int
func (s Specs) HasAnySpecsMarkedPending() bool
type Suite struct {
	*ProgressReporterManager
	// contains filtered or unexported fields
}
func NewSuite() *Suite
func (suite *Suite) AddReportEntry(entry ReportEntry) error
func (suite *Suite) BuildTree() error
func (suite *Suite) By(text string, callback ...func()) error
func (suite *Suite) Clone() (*Suite, error)
func (suite *Suite) CurrentSpecReport() types.SpecReport
Spec Running methods - used during PhaseRun
func (suite *Suite) GetPreviewReport() types.Report
Only valid in the preview context. In general suite.report only includes the specs run by _this_ node - it is only at the end of the suite that the parallel reports are aggregated. However, in the preview context we run in series, so this node's report already covers the entire suite.
func (suite *Suite) InRunPhase() bool
func (suite *Suite) PushNode(node Node) error
func (suite *Suite) Run(description string, suiteLabels Labels, suitePath string, failer *Failer, reporter reporters.Reporter, writer WriterInterface, outputInterceptor OutputInterceptor, interruptHandler interrupt_handler.InterruptHandlerInterface, client parallel_support.Client, progressSignalRegistrar ProgressSignalRegistrar, suiteConfig types.SuiteConfig) (bool, bool)
type TreeNode struct {
	Node     Node
	Parent   *TreeNode
	Children TreeNodes
}
func (tn *TreeNode) AncestorNodeChain() Nodes
func (tn *TreeNode) AppendChild(child *TreeNode)
type TreeNodes []*TreeNode
func (tn TreeNodes) Nodes() Nodes
func (tn TreeNodes) WithID(id uint) *TreeNode
Writer implements WriterInterface and GinkgoWriterInterface
type Writer struct {
// contains filtered or unexported fields
}
func NewWriter(outWriter io.Writer) *Writer
func (w *Writer) Bytes() []byte
func (w *Writer) ClearTeeWriters()
func (w *Writer) Len() int
func (w *Writer) Print(a ...interface{})
func (w *Writer) Printf(format string, a ...interface{})
func (w *Writer) Println(a ...interface{})
func (w *Writer) SetMode(mode WriterMode)
func (w *Writer) TeeTo(writer io.Writer)
GinkgoWriterInterface
func (w *Writer) Truncate()
func (w *Writer) Write(b []byte) (n int, err error)
type WriterInterface interface {
	io.Writer

	Truncate()
	Bytes() []byte
	Len() int
}
type WriterMode uint
const (
	WriterModeStreamAndBuffer WriterMode = iota
	WriterModeBufferOnly
)
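A sketch of the Writer in buffer-only mode (the mode used to keep output attached to a single spec), using only the methods listed above; `writerDemo` is an invented name.

```go
package internal

import (
	"bytes"
	"os"
)

// writerDemo (hypothetical) buffers output for the current spec, mirrors it to an
// extra tee writer, then resets the buffer so the next spec starts clean.
func writerDemo() []byte {
	w := NewWriter(os.Stdout)
	w.SetMode(WriterModeBufferOnly) // buffer only; don't stream to os.Stdout

	var tee bytes.Buffer
	w.TeeTo(&tee) // mirror everything written to the Writer

	w.Printf("spec %d starting\n", 1)
	captured := w.Bytes() // the buffered output for the current spec

	w.Truncate()        // reset between specs
	w.ClearTeeWriters() // drop registered tee writers
	return captured
}
```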