- testname: Priority and Fairness FlowSchema API
  codename: '[sig-api-machinery] API priority and fairness should support FlowSchema API operations [Conformance]'
  description: ' The flowcontrol.apiserver.k8s.io API group MUST exist in the /apis discovery document. The flowcontrol.apiserver.k8s.io/v1 API group/version MUST exist in the /apis/flowcontrol.apiserver.k8s.io discovery document. The flowschemas and flowschemas/status resources MUST exist in the /apis/flowcontrol.apiserver.k8s.io/v1 discovery document. The flowschema resource MUST support create, get, list, watch, update, patch, delete, and deletecollection.'
  release: v1.29
  file: test/e2e/apimachinery/flowcontrol.go
- testname: Priority and Fairness PriorityLevelConfiguration API
  codename: '[sig-api-machinery] API priority and fairness should support PriorityLevelConfiguration API operations [Conformance]'
  description: ' The flowcontrol.apiserver.k8s.io API group MUST exist in the /apis discovery document. The flowcontrol.apiserver.k8s.io/v1 API group/version MUST exist in the /apis/flowcontrol.apiserver.k8s.io discovery document. The prioritylevelconfigurations and prioritylevelconfigurations/status resources MUST exist in the /apis/flowcontrol.apiserver.k8s.io/v1 discovery document. The prioritylevelconfiguration resource MUST support create, get, list, watch, update, patch, delete, and deletecollection.'
  release: v1.29
  file: test/e2e/apimachinery/flowcontrol.go
- testname: Admission webhook, list mutating webhooks
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]'
  description: Create 10 mutating webhook configurations, all with a label. Attempt to list the webhook configurations matching the label; all the created webhook configurations MUST be present. Attempt to create an object; the object MUST be mutated. Attempt to remove the webhook configurations matching the label with deletecollection; all webhook configurations MUST be deleted. Attempt to create an object; the object MUST NOT be mutated.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, list validating webhooks
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]'
  description: Create 10 validating webhook configurations, all with a label. Attempt to list the webhook configurations matching the label; all the created webhook configurations MUST be present. Attempt to create an object; the create MUST be denied. Attempt to remove the webhook configurations matching the label with deletecollection; all webhook configurations MUST be deleted. Attempt to create an object; the create MUST NOT be denied.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, update mutating webhook
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]'
  description: Register a mutating admission webhook configuration. Update the webhook to not apply to the create operation and attempt to create an object; the webhook MUST NOT mutate the object. Patch the webhook to apply to the create operation again and attempt to create an object; the webhook MUST mutate the object.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, update validating webhook
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]'
  description: Register a validating admission webhook configuration. Update the webhook to not apply to the create operation and attempt to create an object; the webhook MUST NOT deny the create. Patch the webhook to apply to the create operation again and attempt to create an object; the webhook MUST deny the create.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Mutating Admission webhook, create and update mutating webhook configuration with matchConditions
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to create and update mutating webhook configurations with match conditions [Conformance]'
  description: Register a mutating webhook configuration. Verify that the matchConditions field is properly stored by the API server. Update the mutating webhook configuration and retrieve it; the retrieved object MUST contain the newly updated matchConditions field.
  release: v1.28
  file: test/e2e/apimachinery/webhook.go
- testname: Validating Admission webhook, create and update validating webhook configuration with matchConditions
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to create and update validating webhook configurations with match conditions [Conformance]'
  description: Register a validating webhook configuration. Verify that the matchConditions field is properly stored by the API server. Update the validating webhook configuration and retrieve it; the retrieved object MUST contain the newly updated matchConditions field.
  release: v1.28
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, deny attach
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]'
  description: Register an admission webhook configuration that denies connecting to a pod's attach sub-resource. Attempts to attach MUST be denied.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, deny custom resource create and delete
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]'
  description: Register an admission webhook configuration that denies creation, update and deletion of custom resources. Attempts to create, update and delete custom resources MUST be denied.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, deny create
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]'
  description: Register an admission webhook configuration that admits pod and configmap. Attempts to create non-compliant pods and configmaps, or update/patch compliant pods and configmaps to be non-compliant MUST be denied. An attempt to create a pod that causes a webhook to hang MUST result in a webhook timeout error, and the pod creation MUST be denied. An attempt to create a non-compliant configmap in a whitelisted namespace based on the webhook namespace selector MUST be allowed.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, deny custom resource definition
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]'
  description: Register a webhook that denies custom resource definition create. Attempt to create a custom resource definition; the create request MUST be denied.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, honor timeout
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]'
  description: Using a webhook that waits 5 seconds before admitting objects, configure the webhook with combinations of timeouts and failure policy values. Attempt to create a config map with each combination. Requests MUST timeout if the configured webhook timeout is less than 5 seconds and the failure policy is fail. Requests MUST NOT timeout if the failure policy is ignore. Requests MUST NOT timeout if the configured webhook timeout is 10 seconds (much longer than the webhook wait duration).
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, discovery document
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]'
  description: The admissionregistration.k8s.io API group MUST exist in the /apis discovery document. The admissionregistration.k8s.io/v1 API group/version MUST exist in the /apis discovery document. The mutatingwebhookconfigurations and validatingwebhookconfigurations resources MUST exist in the /apis/admissionregistration.k8s.io/v1 discovery document.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, ordered mutation
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]'
  description: Register a mutating webhook configuration with two webhooks that admit configmaps, one that adds a data key if the configmap already has a specific key, and another that adds a key if the key added by the first webhook is present. Attempt to create a config map; both keys MUST be added to the config map.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, mutate custom resource
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]'
  description: Register a webhook that mutates a custom resource. Attempt to create a custom resource object; the custom resource MUST be mutated.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, mutate custom resource with different stored version
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]'
  description: Register a webhook that mutates custom resources on create and update. Register a custom resource definition using v1 as stored version. Create a custom resource. Patch the custom resource definition to use v2 as the stored version. Attempt to patch the custom resource with a new field and value; the patch MUST be applied successfully.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, mutate custom resource with pruning
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]'
  description: Register mutating webhooks that add fields to custom objects. Register a custom resource definition with a schema that includes only one of the data keys added by the webhooks. Attempt to create a custom resource; the fields included in the schema MUST be present and fields not included in the schema MUST NOT be present.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Mutating Admission webhook, mutating webhook excluding object with specific name
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate everything except ''skip-me'' configmaps [Conformance]'
  description: Create a mutating webhook configuration with a matchConditions field that will reject all resources except ones with a specific name 'skip-me'. Create a configMap with the name 'skip-me' and verify that it's mutated. Create a configMap with a different name than 'skip-me' and verify that it's not mutated.
  release: v1.28
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, mutation with defaulting
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]'
  description: Register a mutating webhook that adds an InitContainer to pods. Attempt to create a pod; the InitContainer MUST be added and the TerminationMessagePolicy MUST be defaulted.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, admission control not allowed on webhook configuration objects
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]'
  description: Register webhooks that mutate and deny deletion of webhook configuration objects. Attempt to create and delete a webhook configuration object; both operations MUST be allowed and the webhook configuration object MUST NOT be mutated by the webhooks.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Mutating Admission webhook, reject mutating webhook configurations with invalid matchConditions
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should reject mutating webhook configurations with invalid match conditions [Conformance]'
  description: Creates a mutating webhook configuration with an invalid CEL expression in its matchConditions field. The API server should reject the create request with a "compilation failed" error message.
  release: v1.28
  file: test/e2e/apimachinery/webhook.go
- testname: Validating Admission webhook, reject validating webhook configurations with invalid matchConditions
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should reject validating webhook configurations with invalid match conditions [Conformance]'
  description: Creates a validating webhook configuration with an invalid CEL expression in its matchConditions field. The API server should reject the create request with a "compilation failed" error message.
  release: v1.28
  file: test/e2e/apimachinery/webhook.go
- testname: Admission webhook, fail closed
  codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]'
  description: Register a webhook with a fail closed policy and without CA bundle so that it cannot be called. Attempt operations that require the admission webhook; all MUST be denied.
  release: v1.16
  file: test/e2e/apimachinery/webhook.go
- testname: Aggregated Discovery Interface
  codename: '[sig-api-machinery] AggregatedDiscovery should support aggregated discovery interface [Conformance]'
  description: An apiserver MUST support the Aggregated Discovery client interface. Built-in resources MUST all be present.
  release: v1.30
  file: test/e2e/apimachinery/aggregated_discovery.go
- testname: Aggregated Discovery Interface CRDs
  codename: '[sig-api-machinery] AggregatedDiscovery should support aggregated discovery interface for CRDs [Conformance]'
  description: An apiserver MUST support the Aggregated Discovery client interface. Add a CRD to the apiserver. The CRD resource MUST be present in the discovery document.
  release: v1.30
  file: test/e2e/apimachinery/aggregated_discovery.go
- testname: Aggregated Discovery Endpoint Accept Headers
  codename: '[sig-api-machinery] AggregatedDiscovery should support raw aggregated discovery endpoint Accept headers [Conformance]'
  description: An apiserver MUST support the Aggregated Discovery endpoint Accept headers. Built-in resources MUST all be present.
  release: v1.30
  file: test/e2e/apimachinery/aggregated_discovery.go
- testname: Aggregated Discovery Endpoint Accept Headers CRDs
  codename: '[sig-api-machinery] AggregatedDiscovery should support raw aggregated discovery request for CRDs [Conformance]'
  description: An apiserver MUST support the Aggregated Discovery endpoint Accept headers. Add a CRD to the apiserver. The CRD MUST appear in the discovery document.
  release: v1.30
  file: test/e2e/apimachinery/aggregated_discovery.go
- testname: aggregator-supports-the-sample-apiserver
  codename: '[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]'
  description: Ensure that the sample-apiserver code from 1.17 and compiled against 1.17 will work on the current Aggregator/API-Server.
  release: v1.17, v1.21, v1.27
  file: test/e2e/apimachinery/aggregator.go
- testname: Custom Resource Definition Conversion Webhook, convert mixed version list
  codename: '[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]'
  description: Register a conversion webhook and a custom resource definition. Create a custom resource stored at v1. Change the custom resource definition storage to v2. Create a custom resource stored at v2. Attempt to list the custom resources at v2; the list result MUST contain both custom resources at v2.
  release: v1.16
  file: test/e2e/apimachinery/crd_conversion_webhook.go
- testname: Custom Resource Definition Conversion Webhook, conversion custom resource
  codename: '[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]'
  description: Register a conversion webhook and a custom resource definition. Create a v1 custom resource. Attempts to read it at v2 MUST succeed.
  release: v1.16
  file: test/e2e/apimachinery/crd_conversion_webhook.go
- testname: Custom Resource Definition, watch
  codename: '[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]'
  description: Create a Custom Resource Definition. Attempt to watch it; the watch MUST observe create, modify and delete events.
  release: v1.16
  file: test/e2e/apimachinery/crd_watch.go
- testname: Custom Resource Definition, create
  codename: '[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]'
  description: Create an API extension client and define a random custom resource definition. Create the custom resource definition and then delete it. The creation and deletion MUST be successful.
  release: v1.9
  file: test/e2e/apimachinery/custom_resource_definition.go
- testname: Custom Resource Definition, status sub-resource
  codename: '[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]'
  description: Create a custom resource definition. Attempt to read, update and patch its status sub-resource; all mutating sub-resource operations MUST be visible to subsequent reads.
  release: v1.16
  file: test/e2e/apimachinery/custom_resource_definition.go
- testname: Custom Resource Definition, list
  codename: '[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]'
  description: Create an API extension client, define 10 labeled custom resource definitions and list them using a label selector; the list result MUST contain only the labeled custom resource definitions. Delete the labeled custom resource definitions via delete collection; the delete MUST be successful and MUST delete only the labeled custom resource definitions.
  release: v1.16
  file: test/e2e/apimachinery/custom_resource_definition.go
- testname: Custom Resource Definition, defaulting
  codename: '[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]'
  description: Create a custom resource definition without default. Create CR. Add default and read CR until the default is applied. Create another CR. Remove default, add default for another field and read CR until new field is defaulted, but old default stays.
  release: v1.17
  file: test/e2e/apimachinery/custom_resource_definition.go
- testname: Custom Resource Definition, discovery
  codename: '[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]'
  description: Fetch /apis, /apis/apiextensions.k8s.io, and /apis/apiextensions.k8s.io/v1 discovery documents, and ensure they indicate CustomResourceDefinition apiextensions.k8s.io/v1 resources are available.
  release: v1.16
  file: test/e2e/apimachinery/custom_resource_definition.go
- testname: Custom Resource OpenAPI Publish, stop serving version
  codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]'
  description: Register a custom resource definition with multiple versions. OpenAPI definitions MUST be published for custom resource definitions. Update the custom resource definition to not serve one of the versions. OpenAPI definitions MUST be updated to not contain the version that is no longer served.
  release: v1.16
  file: test/e2e/apimachinery/crd_publish_openapi.go
- testname: Custom Resource OpenAPI Publish, version rename
  codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]'
  description: Register a custom resource definition with multiple versions; OpenAPI definitions MUST be published for custom resource definitions. Rename one of the versions of the custom resource definition via a patch; OpenAPI definitions MUST update to reflect the rename.
  release: v1.16
  file: test/e2e/apimachinery/crd_publish_openapi.go
- testname: Custom Resource OpenAPI Publish, with x-kubernetes-preserve-unknown-fields at root
  codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]'
  description: Register a custom resource definition with x-kubernetes-preserve-unknown-fields in the schema root. Attempt to create and apply a change to a custom resource via kubectl; kubectl validation MUST accept unknown properties. Attempt kubectl explain; the output MUST show the custom resource KIND.
  release: v1.16
  file: test/e2e/apimachinery/crd_publish_openapi.go
- testname: Custom Resource OpenAPI Publish, with x-kubernetes-preserve-unknown-fields in embedded object
  codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]'
  description: Register a custom resource definition with x-kubernetes-preserve-unknown-fields in an embedded object. Attempt to create and apply a change to a custom resource via kubectl; kubectl validation MUST accept unknown properties. Attempt kubectl explain; the output MUST show that x-preserve-unknown-properties is used on the nested field.
  release: v1.16
  file: test/e2e/apimachinery/crd_publish_openapi.go
- testname: Custom Resource OpenAPI Publish, with validation schema
  codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]'
  description: Register a custom resource definition with a validating schema consisting of objects, arrays and primitives. Attempt to create and apply a change to a custom resource using valid properties via kubectl; kubectl validation MUST pass. Attempt both operations with unknown properties and without required properties; kubectl validation MUST reject the operations. Attempt kubectl explain; the output MUST explain the custom resource properties. Attempt kubectl explain on custom resource properties; the output MUST explain the nested custom resource properties. All validation should be the same.
  release: v1.16
  file: test/e2e/apimachinery/crd_publish_openapi.go
- testname: Custom Resource OpenAPI Publish, with x-kubernetes-preserve-unknown-fields in object
  codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]'
  description: Register a custom resource definition with x-kubernetes-preserve-unknown-fields in the top level object. Attempt to create and apply a change to a custom resource via kubectl; kubectl validation MUST accept unknown properties. Attempt kubectl explain; the output MUST contain a valid DESCRIPTION stanza.
  release: v1.16
  file: test/e2e/apimachinery/crd_publish_openapi.go
- testname: Custom Resource OpenAPI Publish, varying groups
  codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]'
  description: Register multiple custom resource definitions spanning different groups and versions; OpenAPI definitions MUST be published for custom resource definitions.
  release: v1.16
  file: test/e2e/apimachinery/crd_publish_openapi.go
- testname: Custom Resource OpenAPI Publish, varying kinds
  codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]'
  description: Register multiple custom resource definitions in the same group and version but spanning different kinds; OpenAPI definitions MUST be published for custom resource definitions.
  release: v1.16
  file: test/e2e/apimachinery/crd_publish_openapi.go
- testname: Custom Resource OpenAPI Publish, varying versions
  codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]'
  description: Register a custom resource definition with multiple versions; OpenAPI definitions MUST be published for custom resource definitions.
  release: v1.16
  file: test/e2e/apimachinery/crd_publish_openapi.go
- testname: Discovery, confirm the groupVersion and a resource from each apiGroup
  codename: '[sig-api-machinery] Discovery should locate the groupVersion and a resource within each APIGroup [Conformance]'
  description: A resourceList MUST be found for each apiGroup that is retrieved. For each apiGroup the groupVersion MUST equal the groupVersion as reported by the schema. From each resourceList a valid resource MUST be found.
  release: v1.28
  file: test/e2e/apimachinery/discovery.go
- testname: Discovery, confirm the PreferredVersion for each api group
  codename: '[sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]'
  description: Ensure that a list of apis is retrieved. Each api group found MUST return a valid PreferredVersion unless the group suffix is example.com.
  release: v1.19
  file: test/e2e/apimachinery/discovery.go
- testname: Server side field validation, unknown fields CR no validation schema
  codename: '[sig-api-machinery] FieldValidation should create/apply a CR with unknown fields for CRD with no validation schema [Conformance]'
  description: When a CRD does not have a validation schema, it should succeed when a CR with unknown fields is applied.
  release: v1.27
  file: test/e2e/apimachinery/field_validation.go
- testname: Server side field validation, valid CR with validation schema
  codename: '[sig-api-machinery] FieldValidation should create/apply a valid CR for CRD with validation schema [Conformance]'
  description: When a CRD has a validation schema, it should succeed when a valid CR is applied.
  release: v1.27
  file: test/e2e/apimachinery/field_validation.go
- testname: Server side field validation, unknown fields CR fails validation
  codename: '[sig-api-machinery] FieldValidation should create/apply an invalid CR with extra properties for CRD with validation schema [Conformance]'
  description: When a CRD does have a validation schema, it should reject CRs with unknown fields.
  release: v1.27
  file: test/e2e/apimachinery/field_validation.go
- testname: Server side field validation, CR duplicates
  codename: '[sig-api-machinery] FieldValidation should detect duplicates in a CR when preserving unknown fields [Conformance]'
  description: The server should reject CRs with duplicate fields even when preserving unknown fields.
  release: v1.27
  file: test/e2e/apimachinery/field_validation.go
- testname: Server side field validation, typed object
  codename: '[sig-api-machinery] FieldValidation should detect unknown and duplicate fields of a typed object [Conformance]'
  description: It should reject the request if a typed object has unknown or duplicate fields.
  release: v1.27
  file: test/e2e/apimachinery/field_validation.go
- testname: Server side field validation, unknown metadata
  codename: '[sig-api-machinery] FieldValidation should detect unknown metadata fields in both the root and embedded object of a CR [Conformance]'
  description: The server should reject CRs with unknown metadata fields in both the root and embedded objects of a CR.
  release: v1.27
  file: test/e2e/apimachinery/field_validation.go
- testname: Server side field validation, typed unknown metadata
  codename: '[sig-api-machinery] FieldValidation should detect unknown metadata fields of a typed object [Conformance]'
  description: It should reject the request if a typed object has unknown fields in the metadata.
  release: v1.27
  file: test/e2e/apimachinery/field_validation.go
- testname: Garbage Collector, delete deployment, propagation policy background
  codename: '[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]'
  description: Create a deployment with a replicaset. Once the replicaset is created, delete the deployment with deleteOptions.PropagationPolicy set to Background. Deleting the deployment MUST delete the replicaset created by the deployment, and the Pods that belong to the deployment MUST also be deleted.
  release: v1.9
  file: test/e2e/apimachinery/garbage_collector.go
- testname: Garbage Collector, delete replication controller, propagation policy background
  codename: '[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]'
  description: Create a replication controller with 2 Pods. Once the RC is created and the first Pod is created, delete the RC with deleteOptions.PropagationPolicy set to Background. Deleting the Replication Controller MUST cause pods created by that RC to be deleted.
  release: v1.9
  file: test/e2e/apimachinery/garbage_collector.go
- testname: Garbage Collector, delete replication controller, after owned pods
  codename: '[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]'
  description: Create a replication controller with maximum allocatable Pods between 10 and 100 replicas. Once the RC is created and all the Pods are created, delete the RC with deleteOptions.PropagationPolicy set to Foreground. Deleting the Replication Controller MUST cause pods created by that RC to be deleted before the RC is deleted.
  release: v1.9
  file: test/e2e/apimachinery/garbage_collector.go
508- testname: Garbage Collector, dependency cycle
509 codename: '[sig-api-machinery] Garbage collector should not be blocked by dependency
510 circle [Conformance]'
511 description: Create three pods, patch them with Owner references such that pod1
512 has pod3, pod2 has pod1, and pod3 has pod2 as owner references respectively. Deleting
513 pod1 MUST delete all three pods. The dependency cycle MUST NOT block the garbage collection.
514 release: v1.9
515 file: test/e2e/apimachinery/garbage_collector.go
516- testname: Garbage Collector, multiple owners
517 codename: '[sig-api-machinery] Garbage collector should not delete dependents that
518 have both valid owner and owner that''s waiting for dependents to be deleted [Conformance]'
519 description: Create a replication controller RC1, with maximum allocatable Pods
520 between 10 and 100 replicas. Create a second replication controller RC2 and set
521 RC2 as an owner for half of those replicas. Once RC1 is created and all the Pods
522 are created, delete RC1 with deleteOptions.PropagationPolicy set to Foreground.
523 The half of the Pods that have RC2 as an owner MUST NOT be deleted or have a deletion
524 timestamp. Deleting the Replication Controller MUST NOT delete Pods that are owned
525 by multiple replication controllers.
526 release: v1.9
527 file: test/e2e/apimachinery/garbage_collector.go
528- testname: Garbage Collector, delete deployment, propagation policy orphan
529 codename: '[sig-api-machinery] Garbage collector should orphan RS created by deployment
530 when deleteOptions.PropagationPolicy is Orphan [Conformance]'
531 description: Create a deployment with a replicaset. Once the replicaset is created,
532 delete the deployment with deleteOptions.PropagationPolicy set to Orphan. Deleting
533 the deployment MUST cause the replicaset created by the deployment to be orphaned,
534 and the Pods created by the deployment MUST also be orphaned.
535 release: v1.9
536 file: test/e2e/apimachinery/garbage_collector.go
537- testname: Garbage Collector, delete replication controller, propagation policy orphan
538 codename: '[sig-api-machinery] Garbage collector should orphan pods created by rc
539 if delete options say so [Conformance]'
540 description: Create a replication controller with maximum allocatable Pods between
541 10 and 100 replicas. Once RC is created and all the Pods are created, delete RC
542 with deleteOptions.PropagationPolicy set to Orphan. Deleting the Replication Controller
543 MUST cause pods created by that RC to be orphaned.
544 release: v1.9
545 file: test/e2e/apimachinery/garbage_collector.go
546- testname: Namespace, apply finalizer to a namespace
547 codename: '[sig-api-machinery] Namespaces [Serial] should apply a finalizer to a
548 Namespace [Conformance]'
549 description: Attempt to create a Namespace, which MUST succeed. Updating the namespace
550 with a fake finalizer MUST succeed. The fake finalizer MUST be found. Removing
551 the fake finalizer from the namespace MUST succeed, and the finalizer MUST no longer be found.
552 release: v1.26
553 file: test/e2e/apimachinery/namespace.go
554- testname: Namespace, apply update to a namespace
555 codename: '[sig-api-machinery] Namespaces [Serial] should apply an update to a Namespace
556 [Conformance]'
557 description: When updating the namespace it MUST succeed and the field MUST equal
558 the new value.
559 release: v1.26
560 file: test/e2e/apimachinery/namespace.go
561- testname: Namespace, apply changes to a namespace status
562 codename: '[sig-api-machinery] Namespaces [Serial] should apply changes to a namespace
563 status [Conformance]'
564 description: Getting the current namespace status MUST succeed. The reported status
565 phase MUST be active. Given the patching of the namespace status, the fields MUST
566 equal the new values. Given the updating of the namespace status, the fields MUST
567 equal the new values.
568 release: v1.25
569 file: test/e2e/apimachinery/namespace.go
570- testname: namespace-deletion-removes-pods
571 codename: '[sig-api-machinery] Namespaces [Serial] should ensure that all pods are
572 removed when a namespace is deleted [Conformance]'
573 description: Ensure that if a namespace is deleted then all pods are removed from
574 that namespace.
575 release: v1.11
576 file: test/e2e/apimachinery/namespace.go
577- testname: namespace-deletion-removes-services
578 codename: '[sig-api-machinery] Namespaces [Serial] should ensure that all services
579 are removed when a namespace is deleted [Conformance]'
580 description: Ensure that if a namespace is deleted then all services are removed
581 from that namespace.
582 release: v1.11
583 file: test/e2e/apimachinery/namespace.go
584- testname: Namespace patching
585 codename: '[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]'
586 description: A Namespace is created. The Namespace is patched. The Namespace
587 MUST now include the new Label.
588 release: v1.18
589 file: test/e2e/apimachinery/namespace.go
590- testname: ResourceQuota, apply changes to a ResourceQuota status
591 codename: '[sig-api-machinery] ResourceQuota should apply changes to a resourcequota
592 status [Conformance]'
593 description: Attempt to create a ResourceQuota for CPU and Memory quota limits.
594 Creation MUST be successful. Updating the hard status values MUST succeed and
595 the new values MUST be found. The reported hard status values MUST equal the spec
596 hard values. Patching the spec hard values MUST succeed and the new values MUST
597 be found. Patching the hard status values MUST succeed. The reported hard status
598 values MUST equal the new spec hard values. Getting the /status MUST succeed and
599 the reported hard status values MUST equal the spec hard values. Repatching the
600 hard status values MUST succeed. The spec MUST NOT be changed when patching /status.
601 release: v1.26
602 file: test/e2e/apimachinery/resource_quota.go
603- testname: ResourceQuota, update and delete
604 codename: '[sig-api-machinery] ResourceQuota should be able to update and delete
605 ResourceQuota. [Conformance]'
606 description: Create a ResourceQuota for CPU and Memory quota limits. Creation MUST
607 be successful. When ResourceQuota is updated to modify CPU and Memory quota limits,
608 update MUST succeed with updated values for CPU and Memory limits. When ResourceQuota
609 is deleted, it MUST not be available in the namespace.
610 release: v1.16
611 file: test/e2e/apimachinery/resource_quota.go
612- testname: ResourceQuota, object count quota, configmap
613 codename: '[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture
614 the life of a configMap. [Conformance]'
615 description: Create a ResourceQuota. Creation MUST be successful and its ResourceQuotaStatus
616 MUST match the expected used and total allowed resource quota counts within the namespace.
617 Create a ConfigMap. Its creation MUST be successful and resource usage count against
618 the ConfigMap object MUST be captured in ResourceQuotaStatus of the ResourceQuota.
619 Delete the ConfigMap. Deletion MUST succeed and resource usage count against the
620 ConfigMap object MUST be released from ResourceQuotaStatus of the ResourceQuota.
621 release: v1.16
622 file: test/e2e/apimachinery/resource_quota.go
623- testname: ResourceQuota, object count quota, pod
624 codename: '[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture
625 the life of a pod. [Conformance]'
626 description: Create a ResourceQuota. Creation MUST be successful and its ResourceQuotaStatus
627 MUST match the expected used and total allowed resource quota counts within the namespace.
628 Create a Pod with resource request count for CPU, Memory, EphemeralStorage and
629 ExtendedResourceName. Pod creation MUST be successful and respective resource
630 usage count MUST be captured in ResourceQuotaStatus of the ResourceQuota. Create
631 another Pod with resource request exceeding remaining quota. Pod creation MUST
632 fail as the request exceeds ResourceQuota limits. Update the successfully created
633 pod's resource requests. The update MUST fail, as a Pod cannot dynamically update
634 its resource requirements. Delete the successfully created Pod. Pod deletion MUST
635 be successful and MUST release the allocated resource counts from ResourceQuotaStatus
636 of the ResourceQuota.
637 release: v1.16
638 file: test/e2e/apimachinery/resource_quota.go
639- testname: ResourceQuota, object count quota, replicaSet
640 codename: '[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture
641 the life of a replica set. [Conformance]'
642 description: Create a ResourceQuota. Creation MUST be successful and its ResourceQuotaStatus
643 MUST match the expected used and total allowed resource quota counts within the namespace.
644 Create a ReplicaSet. Its creation MUST be successful and resource usage count
645 against the ReplicaSet object MUST be captured in ResourceQuotaStatus of the ResourceQuota.
646 Delete the ReplicaSet. Deletion MUST succeed and resource usage count against
647 the ReplicaSet object MUST be released from ResourceQuotaStatus of the ResourceQuota.
648 release: v1.16
649 file: test/e2e/apimachinery/resource_quota.go
650- testname: ResourceQuota, object count quota, replicationController
651 codename: '[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture
652 the life of a replication controller. [Conformance]'
653 description: Create a ResourceQuota. Creation MUST be successful and its ResourceQuotaStatus
654 MUST match the expected used and total allowed resource quota counts within the namespace.
655 Create a ReplicationController. Its creation MUST be successful and resource usage
656 count against the ReplicationController object MUST be captured in ResourceQuotaStatus
657 of the ResourceQuota. Delete the ReplicationController. Deletion MUST succeed
658 and resource usage count against the ReplicationController object MUST be released
659 from ResourceQuotaStatus of the ResourceQuota.
660 release: v1.16
661 file: test/e2e/apimachinery/resource_quota.go
662- testname: ResourceQuota, object count quota, secret
663 codename: '[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture
664 the life of a secret. [Conformance]'
665 description: Create a ResourceQuota. Creation MUST be successful and its ResourceQuotaStatus
666 MUST match the expected used and total allowed resource quota counts within the namespace.
667 Create a Secret. Its creation MUST be successful and resource usage count against
668 the Secret object and resourceQuota object MUST be captured in ResourceQuotaStatus
669 of the ResourceQuota. Delete the Secret. Deletion MUST succeed and resource usage
670 count against the Secret object MUST be released from ResourceQuotaStatus of the
671 ResourceQuota.
672 release: v1.16
673 file: test/e2e/apimachinery/resource_quota.go
674- testname: ResourceQuota, object count quota, service
675 codename: '[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture
676 the life of a service. [Conformance]'
677 description: Create a ResourceQuota. Creation MUST be successful and its ResourceQuotaStatus
678 MUST match the expected used and total allowed resource quota counts within the namespace.
679 Create a Service. Its creation MUST be successful and resource usage count against
680 the Service object and resourceQuota object MUST be captured in ResourceQuotaStatus
681 of the ResourceQuota. Delete the Service. Deletion MUST succeed and resource usage
682 count against the Service object MUST be released from ResourceQuotaStatus of
683 the ResourceQuota.
684 release: v1.16
685 file: test/e2e/apimachinery/resource_quota.go
686- testname: ResourceQuota, object count quota, resourcequotas
687 codename: '[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure
688 its status is promptly calculated. [Conformance]'
689 description: Create a ResourceQuota. Creation MUST be successful and its ResourceQuotaStatus
690 MUST match the expected used and total allowed resource quota counts within the namespace.
691 release: v1.16
692 file: test/e2e/apimachinery/resource_quota.go
693- testname: ResourceQuota, manage lifecycle of a ResourceQuota
694 codename: '[sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota
695 [Conformance]'
696 description: Attempt to create a ResourceQuota for CPU and Memory quota limits.
697 Creation MUST be successful. Attempt to list all namespaces with a label selector,
698 which MUST succeed; exactly one item MUST be found. The ResourceQuota when patched MUST
699 succeed. Given the patching of the ResourceQuota, the fields MUST equal the new
700 values. It MUST succeed at deleting a collection of ResourceQuota via a label
701 selector.
702 release: v1.25
703 file: test/e2e/apimachinery/resource_quota.go
704- testname: ResourceQuota, quota scope, BestEffort and NotBestEffort scope
705 codename: '[sig-api-machinery] ResourceQuota should verify ResourceQuota with best
706 effort scope. [Conformance]'
707 description: Create two ResourceQuotas, one with 'BestEffort' scope and another
708 with 'NotBestEffort' scope. Creation MUST be successful and their ResourceQuotaStatus
709 MUST match the expected used and total allowed resource quota counts within the namespace.
710 Create a 'BestEffort' Pod by not explicitly specifying resource limits and requests.
711 Pod creation MUST be successful and usage count MUST be captured in ResourceQuotaStatus
712 of the 'BestEffort' scoped ResourceQuota but MUST NOT be captured in the 'NotBestEffort' scoped ResourceQuota.
713 Delete the Pod. Pod deletion MUST succeed and Pod resource usage count MUST be
714 released from ResourceQuotaStatus of 'BestEffort' scoped ResourceQuota. Create
715 a 'NotBestEffort' Pod by explicitly specifying resource limits and requests. Pod
716 creation MUST be successful and usage count MUST be captured in ResourceQuotaStatus
717 of the 'NotBestEffort' scoped ResourceQuota but MUST NOT be captured in the 'BestEffort' scoped ResourceQuota.
718 Delete the Pod. Pod deletion MUST succeed and Pod resource usage count MUST be
719 released from ResourceQuotaStatus of 'NotBestEffort' scoped ResourceQuota.
720 release: v1.16
721 file: test/e2e/apimachinery/resource_quota.go
722- testname: ResourceQuota, quota scope, Terminating and NotTerminating scope
723 codename: '[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating
724 scopes. [Conformance]'
725 description: Create two ResourceQuotas, one with 'Terminating' scope and another
726 with 'NotTerminating' scope. The request and limit counts for CPU and Memory resources
727 are set for the ResourceQuota. Creation MUST be successful and their ResourceQuotaStatus
728 MUST match the expected used and total allowed resource quota counts within the namespace.
729 Create a Pod whose specified CPU and Memory ResourceRequirements fall within quota
730 limits. Pod creation MUST be successful and usage count MUST be captured in ResourceQuotaStatus
731 of the 'NotTerminating' scoped ResourceQuota but MUST NOT be captured in the 'Terminating' scoped
732 ResourceQuota. Delete the Pod. Pod deletion MUST succeed and Pod resource usage
733 count MUST be released from ResourceQuotaStatus of 'NotTerminating' scoped ResourceQuota.
734 Create a pod with specified activeDeadlineSeconds and resourceRequirements for
735 CPU and Memory that fall within quota limits. Pod creation MUST be successful and usage
736 count MUST be captured in ResourceQuotaStatus of 'Terminating' scoped ResourceQuota
737 but MUST NOT be captured in the 'NotTerminating' scoped ResourceQuota. Delete the Pod. Pod deletion
738 MUST succeed and Pod resource usage count MUST be released from ResourceQuotaStatus
739 of 'Terminating' scoped ResourceQuota.
740 release: v1.16
741 file: test/e2e/apimachinery/resource_quota.go
742- testname: API Chunking, server should return chunks of results for list calls
743 codename: '[sig-api-machinery] Servers with support for API chunking should return
744 chunks of results for list calls [Conformance]'
745 description: Create a large number of PodTemplates. Attempt to retrieve the first
746 chunk with limit set; the server MUST return the chunk of the size not exceeding
747 the limit with RemainingItems set in the response. Attempt to retrieve the remaining
748 items by providing the received continuation token and limit; the server MUST
749 return the remaining items in chunks of the size not exceeding the limit, with
750 appropriately set RemainingItems field in the response and with the ResourceVersion
751 returned in the first response. Attempt to list all objects at once without setting
752 the limit; the server MUST return all items in a single response.
753 release: v1.29
754 file: test/e2e/apimachinery/chunking.go
755- testname: API Chunking, server should support continue listing from the last key
756 even if the original version has been compacted away
757 codename: '[sig-api-machinery] Servers with support for API chunking should support
758 continue listing from the last key if the original version has been compacted
759 away, though the list is inconsistent [Slow] [Conformance]'
760 description: Create a large number of PodTemplates. Attempt to retrieve the first
761 chunk with limit set; the server MUST return the chunk of the size not exceeding
762 the limit with RemainingItems set in the response. Attempt to retrieve the second
763 page until the continuation token expires; the server MUST return a continuation
764 token for inconsistent list continuation. Attempt to retrieve the second page
765 with the received inconsistent list continuation token; the server MUST return
766 the number of items not exceeding the limit, a new continuation token and appropriately
767 set RemainingItems field in the response. Attempt to retrieve the remaining pages
768 by passing the received continuation token; the server MUST return the remaining
769 items in chunks of the size not exceeding the limit, with appropriately set RemainingItems
770 field in the response and with the ResourceVersion returned as part of the inconsistent
771 list.
772 release: v1.29
773 file: test/e2e/apimachinery/chunking.go
774- testname: API metadata HTTP return
775 codename: '[sig-api-machinery] Servers with support for Table transformation should
776 return a 406 for a backend which does not implement metadata [Conformance]'
777 description: Issue an HTTP request to the API. The HTTP request MUST return an HTTP
778 status code of 406.
779 release: v1.16
780 file: test/e2e/apimachinery/table_conversion.go
781- testname: ValidatingAdmissionPolicy
782 codename: '[sig-api-machinery] ValidatingAdmissionPolicy [Privileged:ClusterAdmin]
783 should allow expressions to refer variables. [Conformance]'
784 description: ' The ValidatingAdmissionPolicy should allow expressions to refer variables.'
785 release: v1.30
786 file: test/e2e/apimachinery/validatingadmissionpolicy.go
787- testname: ValidatingAdmissionPolicy API
788 codename: '[sig-api-machinery] ValidatingAdmissionPolicy [Privileged:ClusterAdmin]
789 should support ValidatingAdmissionPolicy API operations [Conformance]'
790 description: ' The admissionregistration.k8s.io API group MUST exist in the /apis
791 discovery document. The admissionregistration.k8s.io/v1 API group/version MUST
792 exist in the /apis/admissionregistration.k8s.io discovery document. The validatingadmissionpolicies
793 and validatingadmissionpolicies/status resources MUST exist in the /apis/admissionregistration.k8s.io/v1
794 discovery document. The validatingadmissionpolicy resource must support create,
795 get, list, watch, update, patch, delete, and deletecollection.'
796 release: v1.30
797 file: test/e2e/apimachinery/validatingadmissionpolicy.go
798- testname: ValidatingAdmissionPolicyBinding API
799 codename: '[sig-api-machinery] ValidatingAdmissionPolicy [Privileged:ClusterAdmin]
800 should support ValidatingAdmissionPolicyBinding API operations [Conformance]'
801 description: ' The admissionregistration.k8s.io API group MUST exist in the /apis
802 discovery document. The admissionregistration.k8s.io/v1 API group/version MUST
803 exist in the /apis/admissionregistration.k8s.io discovery document. The validatingadmissionpolicybindings
804 resources MUST exist in the /apis/admissionregistration.k8s.io/v1 discovery document.
805 The validatingadmissionpolicybinding resource must support create, get, list,
806 watch, update, patch, delete, and deletecollection.'
807 release: v1.30
808 file: test/e2e/apimachinery/validatingadmissionpolicy.go
809- testname: ValidatingAdmissionPolicy
810 codename: '[sig-api-machinery] ValidatingAdmissionPolicy [Privileged:ClusterAdmin]
811 should validate against a Deployment [Conformance]'
812 description: ' The ValidatingAdmissionPolicy should validate a deployment according
813 to the expression defined inside the policy.'
814 release: v1.30
815 file: test/e2e/apimachinery/validatingadmissionpolicy.go
816- testname: watch-configmaps-closed-and-restarted
817 codename: '[sig-api-machinery] Watchers should be able to restart watching from
818 the last resource version observed by the previous watch [Conformance]'
819 description: Ensure that a watch can be reopened from the last resource version
820 observed by the previous watch, and it will continue delivering notifications
821 from that point in time.
822 release: v1.11
823 file: test/e2e/apimachinery/watch.go
824- testname: watch-configmaps-from-resource-version
825 codename: '[sig-api-machinery] Watchers should be able to start watching from a
826 specific resource version [Conformance]'
827 description: Ensure that a watch can be opened from a particular resource version
828 in the past and only notifications happening after that resource version are observed.
829 release: v1.11
830 file: test/e2e/apimachinery/watch.go
831- testname: watch-configmaps-with-multiple-watchers
832 codename: '[sig-api-machinery] Watchers should observe add, update, and delete watch
833 notifications on configmaps [Conformance]'
834 description: Ensure that multiple watchers are able to receive all add, update,
835 and delete notifications on configmaps that match a label selector and do not
836 receive notifications for configmaps which do not match that label selector.
837 release: v1.11
838 file: test/e2e/apimachinery/watch.go
839- testname: watch-configmaps-label-changed
840 codename: '[sig-api-machinery] Watchers should observe an object deletion if it
841 stops meeting the requirements of the selector [Conformance]'
842 description: Ensure that when a watched object stops meeting the requirements of a watch's
843 selector, the watch will observe a delete event, and will not observe notifications
844 for that object until it meets the selector's requirements again.
845 release: v1.11
846 file: test/e2e/apimachinery/watch.go
847- testname: watch-consistency
848 codename: '[sig-api-machinery] Watchers should receive events on concurrent watches
849 in same order [Conformance]'
850 description: Ensure that concurrent watches are consistent with each other by initiating
851 an additional watch for events received from the first watch, initiated at the
852 resource version of the event, and checking that all resource versions of all
853 events match. Events are produced from writes on a background goroutine.
854 release: v1.15
855 file: test/e2e/apimachinery/watch.go
856- testname: Confirm a server version
857 codename: '[sig-api-machinery] server version should find the server version [Conformance]'
858 description: Ensure that an API server version can be retrieved. Both the major
859 and minor versions MUST only be an integer.
860 release: v1.19
861 file: test/e2e/apimachinery/server_version.go
862- testname: ControllerRevision, resource lifecycle
863 codename: '[sig-apps] ControllerRevision [Serial] should manage the lifecycle of
864 a ControllerRevision [Conformance]'
865 description: Creating a DaemonSet MUST succeed. Listing all ControllerRevisions
866 with a label selector MUST find only one. After patching the ControllerRevision
867 with a new label, the label MUST be found. Creating a new ControllerRevision for
868 the DaemonSet MUST succeed. Listing the ControllerRevisions by label selector
869 MUST find only two. Deleting a ControllerRevision MUST succeed. Listing the ControllerRevisions
870 by label selector MUST find only one. After updating the ControllerRevision with
871 a new label, the label MUST be found. Patching the DaemonSet MUST succeed. Listing
872 the ControllerRevisions by label selector MUST find only two. Deleting a collection
873 of ControllerRevision via a label selector MUST succeed. Listing the ControllerRevisions
874 by label selector MUST find only one. The current ControllerRevision revision
875 MUST be 3.
876 release: v1.25
877 file: test/e2e/apps/controller_revision.go
878- testname: CronJob Suspend
879 codename: '[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]'
880 description: CronJob MUST support suspension, which suppresses creation of new jobs.
881 release: v1.21
882 file: test/e2e/apps/cronjob.go
883- testname: CronJob ForbidConcurrent
884 codename: '[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent
885 [Slow] [Conformance]'
886 description: CronJob MUST support the ForbidConcurrent policy, which forbids creating
887 a new job while the previous job is still running.
888 release: v1.21
889 file: test/e2e/apps/cronjob.go
890- testname: CronJob ReplaceConcurrent
891 codename: '[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]'
892 description: CronJob MUST support the ReplaceConcurrent policy, which replaces a
893 still-running job with the newly scheduled job.
894 release: v1.21
895 file: test/e2e/apps/cronjob.go
896- testname: CronJob AllowConcurrent
897 codename: '[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]'
898 description: CronJob MUST support the AllowConcurrent policy, which allows multiple
899 jobs to run at the same time.
900 release: v1.21
901 file: test/e2e/apps/cronjob.go
902- testname: CronJob API Operations
903 codename: '[sig-apps] CronJob should support CronJob API operations [Conformance]'
904 description: ' CronJob MUST support create, get, list, watch, update, patch, delete,
905 and deletecollection. CronJob/status MUST support get, update and patch.'
906 release: v1.21
907 file: test/e2e/apps/cronjob.go
908- testname: DaemonSet, list and delete a collection of DaemonSets
909 codename: '[sig-apps] Daemon set [Serial] should list and delete a collection of
910 DaemonSets [Conformance]'
911 description: When a DaemonSet is created it MUST succeed. It MUST succeed when listing
912 DaemonSets via a label selector. It MUST succeed when deleting the DaemonSet via
913 deleteCollection.
914 release: v1.22
915 file: test/e2e/apps/daemon_set.go
916- testname: DaemonSet-FailedPodCreation
917 codename: '[sig-apps] Daemon set [Serial] should retry creating failed daemon pods
918 [Conformance]'
919 description: A conformant Kubernetes distribution MUST create new DaemonSet Pods
920 when they fail.
921 release: v1.10
922 file: test/e2e/apps/daemon_set.go
923- testname: DaemonSet-Rollback
924 codename: '[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts
925 [Conformance]'
926 description: A conformant Kubernetes distribution MUST support automated, minimally
927 disruptive rollback of updates to a DaemonSet.
928 release: v1.10
929 file: test/e2e/apps/daemon_set.go
930- testname: DaemonSet-NodeSelection
931 codename: '[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]'
932 description: A conformant Kubernetes distribution MUST support DaemonSet Pod node
933 selection via label selectors.
934 release: v1.10
935 file: test/e2e/apps/daemon_set.go
936- testname: DaemonSet-Creation
937 codename: '[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]'
938 description: A conformant Kubernetes distribution MUST support the creation of DaemonSets.
939 When a DaemonSet Pod is deleted, the DaemonSet controller MUST create a replacement
940 Pod.
941 release: v1.10
942 file: test/e2e/apps/daemon_set.go
943- testname: DaemonSet-RollingUpdate
944 codename: '[sig-apps] Daemon set [Serial] should update pod when spec was updated
945 and update strategy is RollingUpdate [Conformance]'
946 description: A conformant Kubernetes distribution MUST support DaemonSet RollingUpdates.
947 release: v1.10
948 file: test/e2e/apps/daemon_set.go
949- testname: DaemonSet, status sub-resource
950 codename: '[sig-apps] Daemon set [Serial] should verify changes to a daemon set
951 status [Conformance]'
952 description: When a DaemonSet is created it MUST succeed. Attempt to read, update
953 and patch its status sub-resource; all mutating sub-resource operations MUST be
954 visible to subsequent reads.
955 release: v1.22
956 file: test/e2e/apps/daemon_set.go
957- testname: Deployment, completes the scaling of a Deployment subresource
958 codename: '[sig-apps] Deployment Deployment should have a working scale subresource
959 [Conformance]'
960 description: Create a Deployment with a single Pod. The Pod MUST be verified to be
961 running. The Deployment's scale subresource MUST be fetched and its count verified.
962 The Deployment's scale subresource MUST be updated and verified. The scale subresource
963 MUST also be patched and verified.
964 release: v1.21
965 file: test/e2e/apps/deployment.go
966- testname: Deployment Recreate
967 codename: '[sig-apps] Deployment RecreateDeployment should delete old pods and create
968 new ones [Conformance]'
969 description: A conformant Kubernetes distribution MUST support the Deployment with
970 Recreate strategy.
971 release: v1.12
972 file: test/e2e/apps/deployment.go
973- testname: Deployment RollingUpdate
974 codename: '[sig-apps] Deployment RollingUpdateDeployment should delete old pods
975 and create new ones [Conformance]'
976 description: A conformant Kubernetes distribution MUST support the Deployment with
977 RollingUpdate strategy.
978 release: v1.12
979 file: test/e2e/apps/deployment.go
980- testname: Deployment RevisionHistoryLimit
981 codename: '[sig-apps] Deployment deployment should delete old replica sets [Conformance]'
982 description: A conformant Kubernetes distribution MUST clean up Deployment's ReplicaSets
983 based on the Deployment's `.spec.revisionHistoryLimit`.
984 release: v1.12
985 file: test/e2e/apps/deployment.go
986- testname: Deployment Proportional Scaling
987 codename: '[sig-apps] Deployment deployment should support proportional scaling
988 [Conformance]'
989 description: A conformant Kubernetes distribution MUST support Deployment proportional
990 scaling, i.e. proportionally scale a Deployment's ReplicaSets when a Deployment
991 is scaled.
992 release: v1.12
993 file: test/e2e/apps/deployment.go
994- testname: Deployment Rollover
995 codename: '[sig-apps] Deployment deployment should support rollover [Conformance]'
996 description: A conformant Kubernetes distribution MUST support Deployment rollover,
997 i.e. allow arbitrary number of changes to desired state during rolling update
998 before the rollout finishes.
999 release: v1.12
1000 file: test/e2e/apps/deployment.go
1001- testname: Deployment, completes the lifecycle of a Deployment
1002 codename: '[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]'
1003 description: When a Deployment is created it MUST succeed with the required number
1004 of replicas. It MUST succeed when the Deployment is patched. When scaling the
1005 deployment it MUST succeed. When fetching and patching the DeploymentStatus it
1006 MUST succeed. It MUST succeed when deleting the Deployment.
1007 release: v1.20
1008 file: test/e2e/apps/deployment.go
1009- testname: Deployment, status sub-resource
1010 codename: '[sig-apps] Deployment should validate Deployment Status endpoints [Conformance]'
1011 description: When a Deployment is created it MUST succeed. Attempt to read, update
1012 and patch its status sub-resource; all mutating sub-resource operations MUST be
1013 visible to subsequent reads.
1014 release: v1.22
1015 file: test/e2e/apps/deployment.go
1016- testname: 'PodDisruptionBudget: list and delete collection'
1017 codename: '[sig-apps] DisruptionController Listing PodDisruptionBudgets for all
1018 namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]'
1019 description: PodDisruptionBudget API must support list and deletecollection operations.
1020 release: v1.21
1021 file: test/e2e/apps/disruption.go
1022- testname: 'PodDisruptionBudget: block an eviction until the PDB is updated to allow
1023 it'
1024 codename: '[sig-apps] DisruptionController should block an eviction until the PDB
1025 is updated to allow it [Conformance]'
1026 description: Eviction API must block an eviction until the PDB is updated to allow
1027 it
1028 release: v1.22
1029 file: test/e2e/apps/disruption.go
1030- testname: 'PodDisruptionBudget: create, update, patch, and delete object'
1031 codename: '[sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]'
1032 description: PodDisruptionBudget API must support create, update, patch, and delete
1033 operations.
1034 release: v1.21
1035 file: test/e2e/apps/disruption.go
1036- testname: 'PodDisruptionBudget: Status updates'
1037 codename: '[sig-apps] DisruptionController should observe PodDisruptionBudget status
1038 updated [Conformance]'
1039 description: Disruption controller MUST update the PDB status with how many disruptions
1040 are allowed.
1041 release: v1.21
1042 file: test/e2e/apps/disruption.go
1043- testname: 'PodDisruptionBudget: update and patch status'
1044 codename: '[sig-apps] DisruptionController should update/patch PodDisruptionBudget
1045 status [Conformance]'
1046 description: PodDisruptionBudget API must support update and patch operations on
1047 status subresource.
1048 release: v1.21
1049 file: test/e2e/apps/disruption.go
1050- testname: Jobs, orphan pods, re-adoption
1051 codename: '[sig-apps] Job should adopt matching orphans and release non-matching
1052 pods [Conformance]'
1053 description: Create a parallel job. The number of Pods MUST equal the level of parallelism.
1054 Orphan a Pod by modifying its owner reference. The Job MUST re-adopt the orphan
1055 pod. Modify the labels of one of the Job's Pods. The Job MUST release the Pod.
1056 release: v1.16
1057 file: test/e2e/apps/job.go
1058- testname: Jobs, apply changes to status
1059 codename: '[sig-apps] Job should apply changes to a job status [Conformance]'
1060 description: Attempt to create a running Job which MUST succeed. Attempt to patch
1061 the Job status which MUST succeed. An annotation for the job that was patched
1062 MUST be found. Attempt to replace the job status with update which MUST succeed.
1063 Attempt to read its status sub-resource which MUST succeed.
1064 release: v1.24
1065 file: test/e2e/apps/job.go
1066- testname: Ensure Pods of an Indexed Job get a unique index.
1067 codename: '[sig-apps] Job should create pods for an Indexed job with completion
1068 indexes and specified hostname [Conformance]'
1069 description: Create an Indexed job. Job MUST complete successfully. Ensure that
1070 created pods have completion index annotation and environment variable.
1071 release: v1.24
1072 file: test/e2e/apps/job.go
1073- testname: Jobs, active pods, graceful termination
1074 codename: '[sig-apps] Job should delete a job [Conformance]'
1075 description: Create a job. Ensure the active pods reflect parallelism in the namespace
1076 and delete the job. Job MUST be deleted successfully.
1077 release: v1.15
1078 file: test/e2e/apps/job.go
1079- testname: Jobs, manage lifecycle
1080 codename: '[sig-apps] Job should manage the lifecycle of a job [Conformance]'
1081 description: Attempt to create a suspended Job which MUST succeed. Attempt to patch
1082 the Job to include a new label which MUST succeed. The label MUST be found. Attempt
1083 to replace the Job to include a new annotation which MUST succeed. The annotation
1084 MUST be found. Attempt to list Jobs in all namespaces with a label selector which
1085 MUST succeed. One Job MUST be found. It MUST succeed at deleting a collection of jobs
1086 via a label selector.
1087 release: v1.25
1088 file: test/e2e/apps/job.go
1089- testname: Jobs, completion after task failure
1090 codename: '[sig-apps] Job should run a job to completion when tasks sometimes fail
1091 and are locally restarted [Conformance]'
1092 description: Explicitly cause the tasks to fail once initially. After restarting,
1093 the Job MUST execute to completion.
1094 release: v1.16
1095 file: test/e2e/apps/job.go
1096- testname: ReplicaSet, is created, Replaced and Patched
1097 codename: '[sig-apps] ReplicaSet Replace and Patch tests [Conformance]'
1098 description: Create a ReplicaSet (RS) with a single Pod. The Pod MUST be verified
1099 to be running. The RS MUST scale to two replicas and the scale count MUST be verified.
1100 The RS MUST be patched and the patch MUST be verified to have succeeded.
1101 release: v1.21
1102 file: test/e2e/apps/replica_set.go
1103- testname: ReplicaSet, completes the scaling of a ReplicaSet subresource
1104 codename: '[sig-apps] ReplicaSet Replicaset should have a working scale subresource
1105 [Conformance]'
1106 description: Create a ReplicaSet (RS) with a single Pod. The Pod MUST be verified
1107 to be running. The scale subresource count of the RS MUST be fetched and verified.
1108 The scale subresource MUST be updated and the update verified. The scale subresource
1109 MUST be patched and the patch verified.
1110 release: v1.21
1111 file: test/e2e/apps/replica_set.go
1112- testname: Replica Set, adopt matching pods and release non matching pods
1113 codename: '[sig-apps] ReplicaSet should adopt matching pods on creation and release
1114 no longer matching pods [Conformance]'
1115 description: A Pod is created, then a Replica Set (RS) is created whose label selector
1116 will match the Pod. The RS MUST either adopt the Pod or delete and replace it with
1117 a new Pod. When the labels on one of the Pods owned by the RS change to no longer
1118 match the RS's label selector, the RS MUST release the Pod and update the Pod's
1119 owner references.
1120 release: v1.13
1121 file: test/e2e/apps/replica_set.go
1122- testname: ReplicaSet, list and delete a collection of ReplicaSets
1123 codename: '[sig-apps] ReplicaSet should list and delete a collection of ReplicaSets
1124 [Conformance]'
1125 description: When a ReplicaSet is created it MUST succeed. It MUST succeed when
1126 listing ReplicaSets via a label selector. It MUST succeed when deleting the ReplicaSet
1127 via deleteCollection.
1128 release: v1.22
1129 file: test/e2e/apps/replica_set.go
1130- testname: Replica Set, run basic image
1131 codename: '[sig-apps] ReplicaSet should serve a basic image on each replica with
1132 a public image [Conformance]'
1133 description: Create a ReplicaSet with a Pod and a single Container. Make sure that
1134 the Pod is running. Pod SHOULD send a valid response when queried.
1135 release: v1.9
1136 file: test/e2e/apps/replica_set.go
1137- testname: ReplicaSet, status sub-resource
1138 codename: '[sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]'
1139 description: Create a ReplicaSet resource which MUST succeed. Attempt to read, update
1140 and patch its status sub-resource; all mutating sub-resource operations MUST be
1141 visible to subsequent reads.
1142 release: v1.22
1143 file: test/e2e/apps/replica_set.go
1144- testname: Replication Controller, adopt matching pods
1145 codename: '[sig-apps] ReplicationController should adopt matching pods on creation
1146 [Conformance]'
1147 description: An ownerless Pod is created, then a Replication Controller (RC) is
1148 created whose label selector will match the Pod. The RC MUST either adopt the
1149 Pod or delete and replace it with a new Pod.
1150 release: v1.13
1151 file: test/e2e/apps/rc.go
1152- testname: Replication Controller, get and update ReplicationController scale
1153 codename: '[sig-apps] ReplicationController should get and update a ReplicationController
1154 scale [Conformance]'
1155 description: A ReplicationController is created which MUST succeed. It MUST succeed
1156 when reading the ReplicationController scale. When updating the ReplicationController
1157 scale it MUST succeed and the field MUST equal the new value.
1158 release: v1.26
1159 file: test/e2e/apps/rc.go
1160- testname: Replication Controller, release pods
1161 codename: '[sig-apps] ReplicationController should release no longer matching pods
1162 [Conformance]'
1163 description: A Replication Controller (RC) is created, and its Pods are created.
1164 When the labels on one of the Pods change to no longer match the RC's label selector,
1165 the RC MUST release the Pod and update the Pod's owner references.
1166 release: v1.13
1167 file: test/e2e/apps/rc.go
1168- testname: Replication Controller, run basic image
1169 codename: '[sig-apps] ReplicationController should serve a basic image on each replica
1170 with a public image [Conformance]'
1171 description: Replication Controller MUST create a Pod with a basic image and MUST
1172 run the service with the provided image. The image MUST be tested by dialing into
1173 the service over TCP, UDP and HTTP.
1174 release: v1.9
1175 file: test/e2e/apps/rc.go
1176- testname: Replication Controller, check for issues like exceeding allocated quota
1177 codename: '[sig-apps] ReplicationController should surface a failure condition on
1178 a common issue like exceeded quota [Conformance]'
1179 description: Attempt to create a Replication Controller with pods exceeding the
1180 namespace quota. The creation MUST fail.
1181 release: v1.15
1182 file: test/e2e/apps/rc.go
1183- testname: Replication Controller, lifecycle
1184 codename: '[sig-apps] ReplicationController should test the lifecycle of a ReplicationController
1185 [Conformance]'
1186 description: A Replication Controller (RC) is created, read, patched, and deleted
1187 with verification.
1188 release: v1.20
1189 file: test/e2e/apps/rc.go
1190- testname: StatefulSet, Burst Scaling
1191 codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
1192 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]'
1193 description: StatefulSet MUST support the Parallel PodManagementPolicy for burst
1194 scaling. This test does not depend on a preexisting default StorageClass or a
1195 dynamic provisioner.
1196 release: v1.9
1197 file: test/e2e/apps/statefulset.go
1198- testname: StatefulSet, Scaling
1199 codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
1200 Scaling should happen in predictable order and halt if any stateful pod is unhealthy
1201 [Slow] [Conformance]'
1202 description: StatefulSet MUST create Pods in ascending order by ordinal index when
1203 scaling up, and delete Pods in descending order when scaling down. Scaling up
1204 or down MUST pause if any Pods belonging to the StatefulSet are unhealthy. This
1205 test does not depend on a preexisting default StorageClass or a dynamic provisioner.
1206 release: v1.9
1207 file: test/e2e/apps/statefulset.go
1208- testname: StatefulSet, Recreate Failed Pod
1209 codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
1210 Should recreate evicted statefulset [Conformance]'
1211 description: StatefulSet MUST delete and recreate Pods it owns that go into a Failed
1212 state, such as when they are rejected or evicted by a Node. This test does not
1213 depend on a preexisting default StorageClass or a dynamic provisioner.
1214 release: v1.9
1215 file: test/e2e/apps/statefulset.go
1216- testname: StatefulSet resource Replica scaling
1217 codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
1218 should have a working scale subresource [Conformance]'
1219 description: Create a StatefulSet resource. Newly created StatefulSet resource MUST
1220 have a scale of one. Bring the scale of the StatefulSet resource up to two. StatefulSet
1221 scale MUST be at two replicas.
1222 release: v1.16, v1.21
1223 file: test/e2e/apps/statefulset.go
1224- testname: StatefulSet, list, patch and delete a collection of StatefulSets
1225 codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
1226 should list, patch and delete a collection of StatefulSets [Conformance]'
1227 description: When a StatefulSet is created it MUST succeed. It MUST succeed when
1228 listing StatefulSets via a label selector. It MUST succeed when patching a StatefulSet.
1229 It MUST succeed when deleting the StatefulSet via deleteCollection.
1230 release: v1.22
1231 file: test/e2e/apps/statefulset.go
1232- testname: StatefulSet, Rolling Update with Partition
1233 codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
1234 should perform canary updates and phased rolling updates of template modifications
1235 [Conformance]'
1236 description: StatefulSet's RollingUpdate strategy MUST support the Partition parameter
1237 for canaries and phased rollouts. If a Pod is deleted while a rolling update is
1238 in progress, StatefulSet MUST restore the Pod without violating the Partition.
1239 This test does not depend on a preexisting default StorageClass or a dynamic provisioner.
1240 release: v1.9
1241 file: test/e2e/apps/statefulset.go
1242- testname: StatefulSet, Rolling Update
1243 codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
1244 should perform rolling updates and roll backs of template modifications [Conformance]'
1245 description: StatefulSet MUST support the RollingUpdate strategy to automatically
1246 replace Pods one at a time when the Pod template changes. The StatefulSet's status
1247 MUST indicate the CurrentRevision and UpdateRevision. If the template is changed
1248 to match a prior revision, StatefulSet MUST detect this as a rollback instead
1249 of creating a new revision. This test does not depend on a preexisting default
1250 StorageClass or a dynamic provisioner.
1251 release: v1.9
1252 file: test/e2e/apps/statefulset.go
1253- testname: StatefulSet, status sub-resource
1254 codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
1255 should validate Statefulset Status endpoints [Conformance]'
1256 description: When a StatefulSet is created it MUST succeed. Attempt to read, update
1257 and patch its status sub-resource; all mutating sub-resource operations MUST be
1258 visible to subsequent reads.
1259 release: v1.22
1260 file: test/e2e/apps/statefulset.go
1261- testname: Conformance tests minimum number of nodes.
1262 codename: '[sig-architecture] Conformance Tests should have at least two untainted
1263 nodes [Conformance]'
1264 description: Conformance testing requires at least two untainted nodes where Pods
1265 can be scheduled.
1266 release: v1.23
1267 file: test/e2e/architecture/conformance.go
1268- testname: CertificateSigningRequest API
1269 codename: '[sig-auth] Certificates API [Privileged:ClusterAdmin] should support
1270 CSR API operations [Conformance]'
1271 description: ' The certificates.k8s.io API group MUST exist in the /apis discovery
1272 document. The certificates.k8s.io/v1 API group/version MUST exist in the /apis/certificates.k8s.io
1273 discovery document. The certificatesigningrequests, certificatesigningrequests/approval,
1274 and certificatesigningrequests/status resources MUST exist in the /apis/certificates.k8s.io/v1
1275 discovery document. The certificatesigningrequests resource must support create,
1276 get, list, watch, update, patch, delete, and deletecollection. The certificatesigningrequests/approval
1277 resource must support get, update, patch. The certificatesigningrequests/status
1278 resource must support get, update, patch.'
1279 release: v1.19
1280 file: test/e2e/auth/certificates.go
1281- testname: OIDC Discovery (ServiceAccountIssuerDiscovery)
1282 codename: '[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support
1283 OIDC discovery of service account issuer [Conformance]'
1284 description: Ensure kube-apiserver serves correct OIDC discovery endpoints by deploying
1285 a Pod that verifies its own token against these endpoints.
1286 release: v1.21
1287 file: test/e2e/auth/service_accounts.go
1288- testname: Service account tokens auto mount optionally
1289 codename: '[sig-auth] ServiceAccounts should allow opting out of API token automount
1290 [Conformance]'
1291 description: Ensure that Service Account tokens are mounted into the Pod only when
1292 AutomountServiceAccountToken is not set to false. We test the following scenarios here.
1293 1. Create Pod, Pod Spec has AutomountServiceAccountToken set to nil a) Service
1294 Account with default value, b) Service Account is configured with AutomountServiceAccountToken
1295 set to true, c) Service Account is configured with AutomountServiceAccountToken
1296 set to false 2. Create Pod, Pod Spec has AutomountServiceAccountToken set to true
1297 a) Service Account with default value, b) Service Account is configured with AutomountServiceAccountToken
1298 set to true, c) Service Account is configured with AutomountServiceAccountToken
1299 set to false 3. Create Pod, Pod Spec has AutomountServiceAccountToken set to false
1300 a) Service Account with default value, b) Service Account is configured with AutomountServiceAccountToken
1301 set to true, c) Service Account is configured with AutomountServiceAccountToken
1302 set to false The Containers running in these pods MUST verify that the ServiceTokenVolume
1303 path is auto mounted only when Pod Spec has AutomountServiceAccountToken not set
1304 to false and ServiceAccount object has AutomountServiceAccountToken not set to
1305 false; this includes test cases 1a, 1b, 2a, 2b and 2c. In test cases 1c, 3a, 3b
1306 and 3c the ServiceTokenVolume MUST NOT be auto mounted.
1307 release: v1.9
1308 file: test/e2e/auth/service_accounts.go
1309- testname: RootCA ConfigMap test
1310 codename: '[sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in
1311 any namespace [Conformance]'
1312 description: Ensure that a ConfigMap for the root CA cert exists in every namespace.
1313 1. Created automatically 2. Recreated if deleted 3. Reconciled if modified.
1314 release: v1.21
1315 file: test/e2e/auth/service_accounts.go
1316- testname: Service Account Tokens Must AutoMount
1317 codename: '[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]'
1318 description: Ensure that Service Account keys are mounted into the Container. Pod
1319 contains three containers each will read Service Account token, root CA and default
1320 namespace respectively from the default API Token Mount path. All these three
1321 files MUST exist and the Service Account mount path MUST be auto mounted to the
1322 Container.
1323 release: v1.9
1324 file: test/e2e/auth/service_accounts.go
1325- testname: TokenRequestProjection should mount a projected volume with token using
1326 TokenRequest API.
1327 codename: '[sig-auth] ServiceAccounts should mount projected service account token
1328 [Conformance]'
1329 description: Ensure that projected service account token is mounted.
1330 release: v1.20
1331 file: test/e2e/auth/service_accounts.go
1332- testname: ServiceAccount lifecycle test
1333 codename: '[sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount
1334 [Conformance]'
1335 description: A ServiceAccount is created with a static label, which MUST be observed
1336 in a watch event. Patching the ServiceAccount MUST return its new property. Listing
1337 the ServiceAccounts MUST return the test ServiceAccount with its patched values.
1338 The ServiceAccount is deleted and a deleted watch event MUST be found.
1339 release: v1.19
1340 file: test/e2e/auth/service_accounts.go
1341- testname: ServiceAccount, update a ServiceAccount
1342 codename: '[sig-auth] ServiceAccounts should update a ServiceAccount [Conformance]'
1343 description: A ServiceAccount is created which MUST succeed. When updating the ServiceAccount
1344 it MUST succeed and the field MUST equal the new value.
1345 release: v1.26
1346 file: test/e2e/auth/service_accounts.go
1347- testname: SubjectReview, API Operations
1348 codename: '[sig-auth] SubjectReview should support SubjectReview API operations
1349 [Conformance]'
1350 description: A ServiceAccount is created which MUST succeed. A clientset is created
1351 to impersonate the ServiceAccount. A SubjectAccessReview is created for the ServiceAccount
1352 which MUST succeed. The allowed status for the SubjectAccessReview MUST match
1353 the expected allowed for the impersonated client call. A LocalSubjectAccessReview
1354 is created for the ServiceAccount which MUST succeed. The allowed status for the
1355 LocalSubjectAccessReview MUST match the expected allowed for the impersonated
1356 client call.
1357 release: v1.27
1358 file: test/e2e/auth/subjectreviews.go
1359- testname: Kubectl, guestbook application
1360 codename: '[sig-cli] Kubectl client Guestbook application should create and stop
1361 a working application [Conformance]'
1362 description: Create Guestbook application that contains an agnhost primary server,
1363 2 agnhost replicas, frontend application, frontend service and agnhost primary
1364 service and agnhost replica service. Using frontend service, the test will write
1365 an entry into the guestbook application which will store the entry into the backend
1366 agnhost store. Application flow MUST work as expected and the data written MUST
1367 be available to read.
1368 release: v1.9
1369 file: test/e2e/kubectl/kubectl.go
1370- testname: Kubectl, check version v1
1371 codename: '[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in
1372 available api versions [Conformance]'
1373 description: Run kubectl to get api versions, output MUST contain returned versions
1374 with 'v1' listed.
1375 release: v1.9
1376 file: test/e2e/kubectl/kubectl.go
1377- testname: Kubectl, cluster info
1378 codename: '[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes
1379 control plane services is included in cluster-info [Conformance]'
1380 description: Call kubectl to get cluster-info. The output MUST contain the cluster
1381 info and the Kubernetes control plane SHOULD be running.
1382 release: v1.9
1383 file: test/e2e/kubectl/kubectl.go
1384- testname: Kubectl, describe pod or rc
1385 codename: '[sig-cli] Kubectl client Kubectl describe should check if kubectl describe
1386 prints relevant information for rc and pods [Conformance]'
1387 description: Deploy an agnhost controller and an agnhost service. Kubectl describe
1388 pods SHOULD return the name, namespace, labels, state and other information as
1389 expected. Kubectl describe on rc, service, node and namespace SHOULD also return
1390 proper information.
1391 release: v1.9
1392 file: test/e2e/kubectl/kubectl.go
1393- testname: Kubectl, diff Deployment
1394 codename: '[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds
1395 a difference for Deployments [Conformance]'
1396 description: Create a Deployment with httpd image. Declare the same Deployment with
1397 a different image, busybox. Diff of live Deployment with declared Deployment MUST
1398 include the difference between live and declared image.
1399 release: v1.19
1400 file: test/e2e/kubectl/kubectl.go
1401- testname: Kubectl, create service, replication controller
1402 codename: '[sig-cli] Kubectl client Kubectl expose should create services for rc
1403 [Conformance]'
1404 description: Create a Pod running agnhost listening on port 6379. Using kubectl,
1405 expose the agnhost primary replication controller at port 1234. Validate that the
1406 replication controller is listening on port 1234 and the target port is set to
1407 6379, the port on which agnhost primary is listening. Using kubectl, expose the
1408 agnhost primary as a service at port 2345. The service MUST be listening on port
1409 2345 and the target port MUST be set to 6379, the port on which agnhost primary is listening.
1410 release: v1.9
1411 file: test/e2e/kubectl/kubectl.go
1412- testname: Kubectl, label update
1413 codename: '[sig-cli] Kubectl client Kubectl label should update the label on a resource
1414 [Conformance]'
1415 description: When a Pod is running, update a label using the 'kubectl label' command.
1416 The label MUST be created on the Pod. A 'kubectl get pod' with the -l option MUST
1417 verify that the label can be read back. Use 'kubectl label label-' to remove the
1418 label. 'kubectl get pod' with the -l option SHOULD NOT list the deleted label, as
1419 the label is removed.
1420 release: v1.9
1421 file: test/e2e/kubectl/kubectl.go
1422- testname: Kubectl, patch to annotate
1423 codename: '[sig-cli] Kubectl client Kubectl patch should add annotations for pods
1424 in rc [Conformance]'
1425 description: Start running agnhost and a replication controller. When the pod is
1426 running, add annotations using the 'kubectl patch' command. The annotation MUST be
1427 added to the running pods and the added annotations SHOULD be readable from each
1428 of the Pods running under the replication controller.
1429 release: v1.9
1430 file: test/e2e/kubectl/kubectl.go
1431- testname: Kubectl, replace
1432 codename: '[sig-cli] Kubectl client Kubectl replace should update a single-container
1433 pod''s image [Conformance]'
1434 description: Command 'kubectl replace' on an existing Pod with a new spec MUST update
1435 the image of the container running in the Pod. The -f option to 'kubectl replace'
1436 SHOULD force re-creation of the resource. The new Pod SHOULD have the container
1437 with the new image.
1438 release: v1.9
1439 file: test/e2e/kubectl/kubectl.go
1440- testname: Kubectl, run pod
1441 codename: '[sig-cli] Kubectl client Kubectl run pod should create a pod from an
1442 image when restart is Never [Conformance]'
1443 description: Command 'kubectl run' MUST create a Pod when an image name is specified
1444 in the run command. After the run command, a Pod SHOULD exist with one container
1445 running the specified image.
1446 release: v1.9
1447 file: test/e2e/kubectl/kubectl.go
1448- testname: Kubectl, server-side dry-run Pod
1449 codename: '[sig-cli] Kubectl client Kubectl server-side dry-run should check if
1450 kubectl can dry-run update Pods [Conformance]'
1451 description: The command 'kubectl run' must create a pod with the specified image
1452 name. After, the command 'kubectl patch pod -p {...} --dry-run=server' should
1453 update the Pod with the new image name and server-side dry-run enabled. The image
1454 name must not change.
1455 release: v1.19
1456 file: test/e2e/kubectl/kubectl.go
1457- testname: Kubectl, version
1458 codename: '[sig-cli] Kubectl client Kubectl version should check is all data is
1459 printed [Conformance]'
1460 description: The command 'kubectl version' MUST return the major and minor versions,
1461 GitCommit, etc. of the Client and the Server that kubectl is configured to connect to.
1462 release: v1.9
1463 file: test/e2e/kubectl/kubectl.go
1464- testname: Kubectl, proxy socket
1465 codename: '[sig-cli] Kubectl client Proxy server should support --unix-socket=/path
1466 [Conformance]'
1467 description: Start a proxy server by running 'kubectl proxy' with --unix-socket=<some
1468 path>. Call the proxy server by requesting api versions from http://localhost:0/api.
1469 The proxy server MUST provide at least one version string.
1470 release: v1.9
1471 file: test/e2e/kubectl/kubectl.go
1472- testname: Kubectl, proxy port zero
1473 codename: '[sig-cli] Kubectl client Proxy server should support proxy with --port
1474 0 [Conformance]'
1475 description: Start a proxy server on port zero by running 'kubectl proxy' with --port=0.
1476 Call the proxy server by requesting api versions from unix socket. The proxy server
1477 MUST provide at least one version string.
1478 release: v1.9
1479 file: test/e2e/kubectl/kubectl.go
1480- testname: Kubectl, replication controller
1481 codename: '[sig-cli] Kubectl client Update Demo should create and stop a replication
1482 controller [Conformance]'
1483 description: Create a Pod and a container with a given image. Configure replication
1484 controller to run 2 replicas. The number of running instances of the Pod MUST
1485 equal the number of replicas set on the replication controller which is 2.
1486 release: v1.9
1487 file: test/e2e/kubectl/kubectl.go
1488- testname: Kubectl, scale replication controller
1489 codename: '[sig-cli] Kubectl client Update Demo should scale a replication controller
1490 [Conformance]'
1491 description: Create a Pod and a container with a given image. Configure replication
1492 controller to run 2 replicas. The number of running instances of the Pod MUST
1493 equal the number of replicas set on the replication controller which is 2. Update
1494 the replica count to 1. The number of running instances of the Pod MUST be 1. Update
1495 the replica count to 2. The number of running instances of the Pod MUST be 2.
1496 release: v1.9
1497 file: test/e2e/kubectl/kubectl.go
1498- testname: Kubectl, logs
1499 codename: '[sig-cli] Kubectl logs logs should be able to retrieve and filter logs
1500 [Conformance]'
1501 description: When a Pod is running it MUST generate logs. Starting a Pod should
1502 produce an expected log line. Log command options MUST also work as expected and
1503 described below. 'kubectl logs --tail=1' should generate an output of one line, the
1504 last line in the log. 'kubectl logs --limit-bytes=1' should generate a single-byte
1505 output. 'kubectl logs --tail=1 --timestamps' should generate one line with a timestamp
1506 in RFC3339 format. 'kubectl logs --since=1s' should output only logs no more than 1
1507 second old. 'kubectl logs --since=24h' should output only logs no more than 1 day old.
1508 release: v1.9
1509 file: test/e2e/kubectl/logs.go
1510- testname: New Event resource lifecycle, testing a list of events
1511 codename: '[sig-instrumentation] Events API should delete a collection of events
1512 [Conformance]'
1513 description: Create a list of events, the events MUST exist. The events are deleted
1514 and MUST NOT show up when listing all events.
1515 release: v1.19
1516 file: test/e2e/instrumentation/events.go
1517- testname: New Event resource lifecycle, testing a single event
1518 codename: '[sig-instrumentation] Events API should ensure that an event can be fetched,
1519 patched, deleted, and listed [Conformance]'
1520 description: Create an event, the event MUST exist. The event is patched with a
1521 new note, the check MUST have the updated note. The event is updated with a new
1522 series, the check MUST have the updated series. The event is deleted and MUST NOT
1523 show up when listing all events.
1524 release: v1.19
1525 file: test/e2e/instrumentation/events.go
1526- testname: Event, delete a collection
1527 codename: '[sig-instrumentation] Events should delete a collection of events [Conformance]'
1528 description: A set of events is created with a label selector which MUST be found
1529 when listed. The set of events is deleted and MUST NOT show up when listed by
1530 its label selector.
1531 release: v1.20
1532 file: test/e2e/instrumentation/core_events.go
1533- testname: Event, manage lifecycle of an Event
1534 codename: '[sig-instrumentation] Events should manage the lifecycle of an event
1535 [Conformance]'
1536 description: Attempt to create an event which MUST succeed. Attempt to list events
1537 in all namespaces with a label selector which MUST succeed. One event MUST be found.
1538 The event is patched with a new message, the check MUST have the updated message.
1539 The event is updated with a new series of events, the check MUST confirm this update.
1540 The event is deleted and MUST NOT show up when listing all events.
1541 release: v1.25
1542 file: test/e2e/instrumentation/core_events.go
1543- testname: DNS, cluster
1544 codename: '[sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]'
1545 description: When a Pod is created, the pod MUST be able to resolve cluster dns
1546 entries such as kubernetes.default via /etc/hosts.
1547 release: v1.14
1548 file: test/e2e/network/dns.go
1549- testname: DNS, for ExternalName Services
1550 codename: '[sig-network] DNS should provide DNS for ExternalName services [Conformance]'
1551 description: Create a service with externalName. Pod MUST be able to resolve the
1552 address for this service via CNAME. When externalName of this service is changed,
1553 Pod MUST resolve to new DNS entry for the service. Change the service type from
1554 externalName to ClusterIP, Pod MUST resolve DNS to the service by serving A records.
1555 release: v1.15
1556 file: test/e2e/network/dns.go
1557- testname: DNS, resolve the hostname
1558 codename: '[sig-network] DNS should provide DNS for pods for Hostname [Conformance]'
1559 description: Create a headless service with a label. Create a Pod with a label
1560 matching the service's label, and with a hostname and subdomain the same as the
1561 service name. The Pod MUST be able to resolve its fully qualified domain name as
1562 well as its hostname by serving an A record at that name.
1563 release: v1.15
1564 file: test/e2e/network/dns.go
1565- testname: DNS, resolve the subdomain
1566 codename: '[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]'
1567 description: Create a headless service with a label. Create a Pod with a label
1568 matching the service's label, and with a hostname and subdomain the same as the
1569 service name. The Pod MUST be able to resolve its fully qualified domain name as
1570 well as the subdomain by serving an A record at that name.
1571 release: v1.15
1572 file: test/e2e/network/dns.go
1573- testname: DNS, services
1574 codename: '[sig-network] DNS should provide DNS for services [Conformance]'
1575 description: When a headless service is created, the service MUST be able to resolve
1576 all the required service endpoints. When the service is created, any pod in the
1577 same namespace must be able to resolve the service by all of the expected DNS
1578 names.
1579 release: v1.9
1580 file: test/e2e/network/dns.go
1581- testname: DNS, cluster
1582 codename: '[sig-network] DNS should provide DNS for the cluster [Conformance]'
1583 description: When a Pod is created, the pod MUST be able to resolve cluster dns
1584 entries such as kubernetes.default via DNS.
1585 release: v1.9
1586 file: test/e2e/network/dns.go
1587- testname: DNS, PQDN for services
1588 codename: '[sig-network] DNS should resolve DNS of partial qualified names for services
1589 [LinuxOnly] [Conformance]'
1590 description: 'Create a headless service and a normal service. Both services MUST
1591 be able to resolve partially qualified DNS entries of their service endpoints by
1592 serving A records and SRV records. [LinuxOnly]: Windows currently does not
1593 support resolving PQDNs.'
1594 release: v1.17
1595 file: test/e2e/network/dns.go
1596- testname: DNS, custom dnsConfig
1597 codename: '[sig-network] DNS should support configurable pod DNS nameservers [Conformance]'
1598 description: Create a Pod with DNSPolicy as None and custom DNS configuration, specifying
1599 nameservers and search path entries. Pod creation MUST be successful and provided
1600 DNS configuration MUST be configured in the Pod.
1601 release: v1.17
1602 file: test/e2e/network/dns.go
1603- testname: EndpointSlice API
1604 codename: '[sig-network] EndpointSlice should create Endpoints and EndpointSlices
1605 for Pods matching a Service [Conformance]'
1606 description: The discovery.k8s.io API group MUST exist in the /apis discovery document.
1607 The discovery.k8s.io/v1 API group/version MUST exist in the /apis/discovery.k8s.io
1608 discovery document. The endpointslices resource MUST exist in the /apis/discovery.k8s.io/v1
1609 discovery document. The endpointslice controller must create EndpointSlices for
1610 Pods matching a Service.
1611 release: v1.21
1612 file: test/e2e/network/endpointslice.go
1613- testname: EndpointSlice API
1614 codename: '[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices
1615 for a Service with a selector specified [Conformance]'
1616 description: The discovery.k8s.io API group MUST exist in the /apis discovery document.
1617 The discovery.k8s.io/v1 API group/version MUST exist in the /apis/discovery.k8s.io
1618 discovery document. The endpointslices resource MUST exist in the /apis/discovery.k8s.io/v1
1619 discovery document. The endpointslice controller should create and delete EndpointSlices
1620 for Pods matching a Service.
1621 release: v1.21
1622 file: test/e2e/network/endpointslice.go
1623- testname: EndpointSlice API
1624 codename: '[sig-network] EndpointSlice should have Endpoints and EndpointSlices
1625 pointing to API Server [Conformance]'
1626 description: The discovery.k8s.io API group MUST exist in the /apis discovery document.
1627 The discovery.k8s.io/v1 API group/version MUST exist in the /apis/discovery.k8s.io
1628 discovery document. The endpointslices resource MUST exist in the /apis/discovery.k8s.io/v1
1629 discovery document. The cluster MUST have a service named "kubernetes" in the
1630 default namespace referencing the API servers. The "kubernetes.default" service
1631 MUST have Endpoints and EndpointSlices pointing to each API server instance.
1632 release: v1.21
1633 file: test/e2e/network/endpointslice.go
1634- testname: EndpointSlice API
1635 codename: '[sig-network] EndpointSlice should support creating EndpointSlice API
1636 operations [Conformance]'
1637 description: The discovery.k8s.io API group MUST exist in the /apis discovery document.
1638 The discovery.k8s.io/v1 API group/version MUST exist in the /apis/discovery.k8s.io
1639 discovery document. The endpointslices resource MUST exist in the /apis/discovery.k8s.io/v1
1640 discovery document. The endpointslices resource must support create, get, list,
1641 watch, update, patch, delete, and deletecollection.
1642 release: v1.21
1643 file: test/e2e/network/endpointslice.go
1644- testname: EndpointSlice Mirroring
1645 codename: '[sig-network] EndpointSliceMirroring should mirror a custom Endpoints
1646 resource through create update and delete [Conformance]'
1647 description: The discovery.k8s.io API group MUST exist in the /apis discovery document.
1648 The discovery.k8s.io/v1 API group/version MUST exist in the /apis/discovery.k8s.io
1649 discovery document. The endpointslices resource MUST exist in the /apis/discovery.k8s.io/v1
1650 discovery document. The endpointslice mirroring must mirror endpoint create,
1651 update, and delete actions.
1652 release: v1.21
1653 file: test/e2e/network/endpointslicemirroring.go
1654- testname: Scheduling, HostPort matching and HostIP and Protocol not-matching
1655 codename: '[sig-network] HostPort validates that there is no conflict between pods
1656 with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]'
1657 description: Pods with the same HostPort value MUST be able to be scheduled to the
1658 same node if the HostIP or Protocol is different. This test is marked LinuxOnly
1659 since hostNetwork is not supported on Windows.
1660 release: v1.16, v1.21
1661 file: test/e2e/network/hostport.go
1662- testname: Ingress API
1663 codename: '[sig-network] Ingress API should support creating Ingress API operations
1664 [Conformance]'
1665 description: ' The networking.k8s.io API group MUST exist in the /apis discovery
1666 document. The networking.k8s.io/v1 API group/version MUST exist in the /apis/networking.k8s.io
1667 discovery document. The ingresses resources MUST exist in the /apis/networking.k8s.io/v1
1668 discovery document. The ingresses resource must support create, get, list, watch,
1669 update, patch, delete, and deletecollection. The ingresses/status resource must
1670 support update and patch.'
1671 release: v1.19
1672 file: test/e2e/network/ingress.go
1673- testname: IngressClass API
1674 codename: '[sig-network] IngressClass API should support creating IngressClass API
1675 operations [Conformance]'
1676 description: ' - The networking.k8s.io API group MUST exist in the /apis discovery
1677 document. - The networking.k8s.io/v1 API group/version MUST exist in the /apis/networking.k8s.io
1678 discovery document. - The ingressclasses resource MUST exist in the /apis/networking.k8s.io/v1
1679 discovery document. - The ingressclass resource must support create, get, list,
1680 watch, update, patch, delete, and deletecollection.'
1681 release: v1.19
1682 file: test/e2e/network/ingressclass.go
1683- testname: Networking, intra pod http
1684 codename: '[sig-network] Networking Granular Checks: Pods should function for intra-pod
1685 communication: http [NodeConformance] [Conformance]'
1686 description: Create a hostexec pod that is capable of running curl and netcat commands.
1687 Create a test Pod that will act as a webserver front end exposing port 8080 for
1688 tcp and port 8081 for udp. The netserver service proxies are created on a specified
1689 number of nodes. The kubectl exec on the webserver container MUST reach an http
1690 port on each of the service proxy endpoints in the cluster and the request MUST
1691 be successful. The container will execute a curl command to reach the service port
1692 within the specified max retry limit and MUST result in reporting unique hostnames.
1693 release: v1.9, v1.18
1694 file: test/e2e/common/network/networking.go
1695- testname: Networking, intra pod udp
1696 codename: '[sig-network] Networking Granular Checks: Pods should function for intra-pod
1697 communication: udp [NodeConformance] [Conformance]'
1698 description: Create a hostexec pod that is capable of running curl and netcat commands.
1699 Create a test Pod that will act as a webserver front end exposing port 8080 for
1700 tcp and port 8081 for udp. The netserver service proxies are created on a specified
1701 number of nodes. The kubectl exec on the webserver container MUST reach a udp
1702 port on each of the service proxy endpoints in the cluster and the request MUST
1703 be successful. The container will execute a curl command to reach the service port
1704 within the specified max retry limit and MUST result in reporting unique hostnames.
1705 release: v1.9, v1.18
1706 file: test/e2e/common/network/networking.go
1707- testname: Networking, intra pod http, from node
1708 codename: '[sig-network] Networking Granular Checks: Pods should function for node-pod
1709 communication: http [LinuxOnly] [NodeConformance] [Conformance]'
1710 description: Create a hostexec pod that is capable of running curl and netcat commands.
1711 Create a test Pod that will act as a webserver front end exposing port 8080 for
1712 tcp and port 8081 for udp. The netserver service proxies are created on a specified
1713 number of nodes. The kubectl exec on the webserver container MUST reach an http
1714 port on each of the service proxy endpoints in the cluster using an http post (protocol=tcp)
1715 and the request MUST be successful. The container will execute a curl command to reach
1716 the service port within the specified max retry limit and MUST result in reporting
1717 unique hostnames. This test is marked LinuxOnly since it breaks when using Overlay
1718 networking with Windows.
1719 release: v1.9
1720 file: test/e2e/common/network/networking.go
1721- testname: Networking, intra pod udp, from node
1722 codename: '[sig-network] Networking Granular Checks: Pods should function for node-pod
1723 communication: udp [LinuxOnly] [NodeConformance] [Conformance]'
1724 description: Create a hostexec pod that is capable of running curl and netcat commands.
1725 Create a test Pod that will act as a webserver front end exposing port 8080 for
1726 tcp and port 8081 for udp. The netserver service proxies are created on a specified
1727 number of nodes. The kubectl exec on the webserver container MUST reach an http
1728 port on each of the service proxy endpoints in the cluster using an http post (protocol=udp)
1729 and the request MUST be successful. The container will execute a curl command to reach
1730 the service port within the specified max retry limit and MUST result in reporting
1731 unique hostnames. This test is marked LinuxOnly since it breaks when using Overlay
1732 networking with Windows.
1733 release: v1.9
1734 file: test/e2e/common/network/networking.go
1735- testname: Proxy, validate Proxy responses
1736 codename: '[sig-network] Proxy version v1 A set of valid responses are returned
1737 for both pod and service Proxy [Conformance]'
1738 description: Attempt to create a pod and a service. A set of pod and service endpoints
1739 MUST be accessed via Proxy using a list of http methods. A valid response MUST
1740 be returned for each endpoint.
1741 release: v1.24
1742 file: test/e2e/network/proxy.go
1743- testname: Proxy, validate ProxyWithPath responses
1744 codename: '[sig-network] Proxy version v1 A set of valid responses are returned
1745 for both pod and service ProxyWithPath [Conformance]'
1746 description: Attempt to create a pod and a service. A set of pod and service endpoints
1747 MUST be accessed via ProxyWithPath using a list of http methods. A valid response
1748 MUST be returned for each endpoint.
1749 release: v1.21
1750 file: test/e2e/network/proxy.go
1751- testname: Proxy, logs service endpoint
1752 codename: '[sig-network] Proxy version v1 should proxy through a service and a pod
1753 [Conformance]'
1754 description: Select any node in the cluster to invoke the /logs endpoint using the
1755 /nodes/proxy subresource from the kubelet port. This endpoint MUST be reachable.
1756 release: v1.9
1757 file: test/e2e/network/proxy.go
1758- testname: Service endpoint latency, thresholds
1759 codename: '[sig-network] Service endpoints latency should not be very high [Conformance]'
1760 description: Run 100 iterations of creating a service with a Pod running the pause
1761 image, measuring the time from creating the service until an endpoint with the
1762 service name is available. These durations are captured for the 100 iterations,
1763 then sorted to compute the 50th, 90th and 99th percentiles. The single server
1764 latency MUST NOT exceed the liberally set thresholds of 20s for the 50th percentile
1765 and 50s for the 90th percentile.
1766 release: v1.9
1767 file: test/e2e/network/service_latency.go
1768- testname: Service, change type, ClusterIP to ExternalName
1769 codename: '[sig-network] Services should be able to change the type from ClusterIP
1770 to ExternalName [Conformance]'
1771 description: Create a service of type ClusterIP. Service creation MUST be successful
1772 by assigning a ClusterIP to the service. Update the service type from ClusterIP to
1773 ExternalName by setting a CNAME entry as externalName. Service update MUST be successful
1774 and the service MUST NOT have an associated ClusterIP. The service MUST resolve to an
1775 IP address by returning A records, ensuring the service points to the provided externalName.
1776 release: v1.16
1777 file: test/e2e/network/service.go
1778- testname: Service, change type, ExternalName to ClusterIP
1779 codename: '[sig-network] Services should be able to change the type from ExternalName
1780 to ClusterIP [Conformance]'
1781 description: Create a service of type ExternalName, pointing to external DNS. A ClusterIP
1782 MUST NOT be assigned to the service. Update the service from ExternalName to ClusterIP
1783 by removing the ExternalName entry, assigning port 80 as the service port and TCP as
1784 the protocol. Service update MUST be successful by assigning a ClusterIP to the service,
1785 and it MUST be reachable over serviceName and ClusterIP on the provided service port.
1786 release: v1.16
1787 file: test/e2e/network/service.go
1788- testname: Service, change type, ExternalName to NodePort
1789 codename: '[sig-network] Services should be able to change the type from ExternalName
1790 to NodePort [Conformance]'
1791 description: Create a service of type ExternalName, pointing to external DNS. A ClusterIP
1792 MUST NOT be assigned to the service. Update the service from ExternalName to NodePort,
1793 assigning port 80 as the service port and TCP as the protocol. Service update MUST be
1794 successful by exposing the service on every node's IP on a dynamically assigned NodePort,
1795 and a ClusterIP MUST be assigned to route service requests. The service MUST be reachable
1796 over serviceName and the ClusterIP on servicePort. The service MUST also be reachable
1797 over the node's IP on the NodePort.
1798 release: v1.16
1799 file: test/e2e/network/service.go
1800- testname: Service, change type, NodePort to ExternalName
1801 codename: '[sig-network] Services should be able to change the type from NodePort
1802 to ExternalName [Conformance]'
1803 description: Create a service of type NodePort. Service creation MUST be successful
1804 by exposing the service on every node's IP on a dynamically assigned NodePort, and a
1805 ClusterIP MUST be assigned to route service requests. Update the service type from
1806 NodePort to ExternalName by setting a CNAME entry as externalName. Service update MUST
1807 be successful, the service MUST NOT have an associated ClusterIP, and the allocated
1808 NodePort MUST be released. The service MUST resolve to an IP address by returning
1809 A records, ensuring the service points to the provided externalName.
1810 release: v1.16
1811 file: test/e2e/network/service.go
1812- testname: Service, NodePort Service
1813 codename: '[sig-network] Services should be able to create a functioning NodePort
1814 service [Conformance]'
1815 description: Create a TCP NodePort service, and test reachability from a client
1816 Pod. The client Pod MUST be able to access the NodePort service by service name
1817 and cluster IP on the service port, and on nodes' internal and external IPs on
1818 the NodePort.
1819 release: v1.16
1820 file: test/e2e/network/service.go
1821- testname: Service, NodePort type, session affinity to None
1822 codename: '[sig-network] Services should be able to switch session affinity for
1823 NodePort service [LinuxOnly] [Conformance]'
1824 description: 'Create a service of type "NodePort" and provide the service port and
1825 protocol. The service''s sessionAffinity is set to "ClientIP". Service creation MUST
1826 be successful by assigning a "ClusterIP" to the service and allocating a NodePort on
1827 all the nodes. Create a Replication Controller to ensure that 3 pods are running and
1828 are targeted by the service to serve the hostname of the pod when requests are sent
1829 to the service. Create another pod to make requests to the service. Update the service''s
1830 sessionAffinity to "None". Service update MUST be successful. When requests are made
1831 to the service on the node''s IP and NodePort, the service MUST be able to serve the
1832 hostname from any pod of the replica. When the service''s sessionAffinity is updated
1833 back to "ClientIP", the service MUST serve the hostname from the same pod of the replica
1834 for all consecutive requests. The service MUST be reachable over serviceName and the
1835 ClusterIP on servicePort. The service MUST also be reachable over the node''s IP on the
1836 NodePort. [LinuxOnly]: Windows does not support session affinity.'
1837 release: v1.19
1838 file: test/e2e/network/service.go
1839- testname: Service, ClusterIP type, session affinity to None
1840 codename: '[sig-network] Services should be able to switch session affinity for
1841 service with type clusterIP [LinuxOnly] [Conformance]'
1842 description: 'Create a service of type "ClusterIP". The service''s sessionAffinity is
1843 set to "ClientIP". Service creation MUST be successful by assigning a "ClusterIP"
1844 to the service. Create a Replication Controller to ensure that 3 pods are running
1845 and are targeted by the service to serve the hostname of the pod when requests are
1846 sent to the service. Create another pod to make requests to the service. Update
1847 the service''s sessionAffinity to "None". Service update MUST be successful. When
1848 requests are made to the service, it MUST be able to serve the hostname from any
1849 pod of the replica. When the service''s sessionAffinity is updated back to "ClientIP",
1850 the service MUST serve the hostname from the same pod of the replica for all consecutive
1851 requests. The service MUST be reachable over serviceName and the ClusterIP on servicePort.
1852 [LinuxOnly]: Windows does not support session affinity.'
1853 release: v1.19
1854 file: test/e2e/network/service.go
1855- testname: Service, complete ServiceStatus lifecycle
1856 codename: '[sig-network] Services should complete a service status lifecycle [Conformance]'
1857 description: Create a service, the service MUST exist. When retrieving /status the
1858 action MUST be validated. When patching /status the action MUST be validated.
1859 When updating /status the action MUST be validated. When patching a service the
1860 action MUST be validated.
1861 release: v1.21
1862 file: test/e2e/network/service.go
1863- testname: Service, deletes a collection of services
1864 codename: '[sig-network] Services should delete a collection of services [Conformance]'
1865 description: Create three services with the required labels and ports. The test MUST
1866 locate three services in the test namespace. It MUST succeed at deleting a collection
1867 of services via a label selector. It MUST locate only one service after deleting
1868 the service collection.
1869 release: v1.23
1870 file: test/e2e/network/service.go
1871- testname: Find Kubernetes Service in default Namespace
1872 codename: '[sig-network] Services should find a service from listing all namespaces
1873 [Conformance]'
1874 description: List all Services in all Namespaces, response MUST include a Service
1875 named Kubernetes with the Namespace of default.
1876 release: v1.18
1877 file: test/e2e/network/service.go
1878- testname: Service, NodePort type, session affinity to ClientIP
1879 codename: '[sig-network] Services should have session affinity work for NodePort
1880 service [LinuxOnly] [Conformance]'
1881 description: 'Create a service of type "NodePort" and provide the service port and
1882 protocol. The service''s sessionAffinity is set to "ClientIP". Service creation MUST
1883 be successful by assigning a "ClusterIP" to the service and allocating a NodePort on
1884 all nodes. Create a Replication Controller to ensure that 3 pods are running and are
1885 targeted by the service to serve the hostname of the pod when requests are sent to the
1886 service. Create another pod to make requests to the service on the node''s IP and NodePort.
1887 The service MUST serve the hostname from the same pod of the replica for all consecutive
1888 requests. The service MUST be reachable over serviceName and the ClusterIP on servicePort.
1889 The service MUST also be reachable over the node''s IP on the NodePort. [LinuxOnly]: Windows
1890 does not support session affinity.'
1891 release: v1.19
1892 file: test/e2e/network/service.go
1893- testname: Service, ClusterIP type, session affinity to ClientIP
1894 codename: '[sig-network] Services should have session affinity work for service
1895 with type clusterIP [LinuxOnly] [Conformance]'
1896 description: 'Create a service of type "ClusterIP". The service''s sessionAffinity is
1897 set to "ClientIP". Service creation MUST be successful by assigning a "ClusterIP"
1898 to the service. Create a Replication Controller to ensure that 3 pods are running
1899 and are targeted by the service to serve the hostname of the pod when requests are
1900 sent to the service. Create another pod to make requests to the service. The service
1901 MUST serve the hostname from the same pod of the replica for all consecutive requests.
1902 The service MUST be reachable over serviceName and the ClusterIP on servicePort. [LinuxOnly]:
1903 Windows does not support session affinity.'
1904 release: v1.19
1905 file: test/e2e/network/service.go
1906- testname: Kubernetes Service
1907 codename: '[sig-network] Services should provide secure master service [Conformance]'
1908 description: By default, when a kubernetes cluster is running, there MUST be a 'kubernetes'
1909 service running in the cluster.
1910 release: v1.9
1911 file: test/e2e/network/service.go
1912- testname: Service, endpoints
1913 codename: '[sig-network] Services should serve a basic endpoint from pods [Conformance]'
1914 description: Create a service with an endpoint without any Pods, the service MUST
1915 run and show empty endpoints. Add a pod to the service and the service MUST validate
1916 to show all the endpoints for the ports exposed by the Pod. Add another Pod, then
1917 the list of all ports exposed by both Pods MUST be valid and have a corresponding
1918 service endpoint. Once the second Pod is deleted, the set of endpoints MUST be
1919 validated to show only the exposed ports from the first container. Once both
1920 pods are deleted, the endpoints from the service MUST be empty.
1921 release: v1.9
1922 file: test/e2e/network/service.go
1923- testname: Service, should serve endpoints on same port and different protocols.
1924 codename: '[sig-network] Services should serve endpoints on same port and different
1925 protocols [Conformance]'
1926 description: Create one service with two ports, same port number and different protocol
1927 TCP and UDP. It MUST be able to forward traffic to both ports. Update the Service
1928 to expose only the TCP port, it MUST succeed to connect to the TCP port and fail
1929 to connect to the UDP port. Update the Service to expose only the UDP port, it
1930 MUST succeed to connect to the UDP port and fail to connect to the TCP port.
1931 release: v1.29
1932 file: test/e2e/network/service.go
1933- testname: Service, endpoints with multiple ports
1934 codename: '[sig-network] Services should serve multiport endpoints from pods [Conformance]'
1935 description: Create a service with two ports but no Pods added to the service
1936 yet. The service MUST run and show an empty set of endpoints. Add a Pod to the first
1937 port, the service MUST list one endpoint for the Pod on that port. Add another Pod
1938 to the second port, the service MUST list both endpoints. Delete the first Pod
1939 and the service MUST list only the endpoint for the second Pod. Delete the second
1940 Pod and the service MUST now have an empty set of endpoints.
1941 release: v1.9
1942 file: test/e2e/network/service.go
1943- testname: Endpoint resource lifecycle
1944 codename: '[sig-network] Services should test the lifecycle of an Endpoint [Conformance]'
1945 description: Create an endpoint, the endpoint MUST exist. The endpoint is updated
1946 with a new label, a check after the update MUST find the changes. The endpoint
1947 is then patched with a new IPv4 address and port, a check after the patch MUST
1948 find the changes. The endpoint is deleted by its label, a watch listens for the
1949 deleted watch event.
1950 release: v1.19
1951 file: test/e2e/network/service.go
1952- testname: ConfigMap, from environment field
1953 codename: '[sig-node] ConfigMap should be consumable via environment variable [NodeConformance]
1954 [Conformance]'
1955 description: Create a Pod with an environment variable value set using a value from
1956 ConfigMap. A ConfigMap value MUST be accessible in the container environment.
1957 release: v1.9
1958 file: test/e2e/common/node/configmap.go
1959- testname: ConfigMap, from environment variables
1960 codename: '[sig-node] ConfigMap should be consumable via the environment [NodeConformance]
1961 [Conformance]'
1962 description: Create a Pod with an environment source from a ConfigMap. All ConfigMap
1963 values MUST be available as environment variables in the container.
1964 release: v1.9
1965 file: test/e2e/common/node/configmap.go
1966- testname: ConfigMap, with empty-key
1967 codename: '[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]'
1968 description: Attempt to create a ConfigMap with an empty key. The creation MUST
1969 fail.
1970 release: v1.14
1971 file: test/e2e/common/node/configmap.go
1972- testname: ConfigMap lifecycle
1973 codename: '[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]'
1974 description: Attempt to create a ConfigMap. Patch the created ConfigMap. Fetching
1975 the ConfigMap MUST reflect the changes. By fetching all ConfigMaps via a label
1976 selector, it MUST find the ConfigMap by its static label and updated value. The
1977 ConfigMap must be deleted by collection.
1978 release: v1.19
1979 file: test/e2e/common/node/configmap.go
1980- testname: Pod Lifecycle, post start exec hook
1981 codename: '[sig-node] Container Lifecycle Hook when create a pod with lifecycle
1982 hook should execute poststart exec hook properly [NodeConformance] [Conformance]'
1983 description: When a post start handler is specified in the container lifecycle using
1984 an 'Exec' action, the handler MUST be invoked after the start of the container.
1985 A server pod is created that will serve http requests, create a second pod with
1986 a container lifecycle specifying a post start that invokes the server pod using
1987 ExecAction to validate that the post start is executed.
1988 release: v1.9
1989 file: test/e2e/common/node/lifecycle_hook.go
1990- testname: Pod Lifecycle, post start http hook
1991 codename: '[sig-node] Container Lifecycle Hook when create a pod with lifecycle
1992 hook should execute poststart http hook properly [NodeConformance] [Conformance]'
1993 description: When a post start handler is specified in the container lifecycle using
1994 an HttpGet action, the handler MUST be invoked after the start of the container.
1995 A server pod is created that will serve http requests, create a second pod on
1996 the same node with a container lifecycle specifying a post start that invokes
1997 the server pod to validate that the post start is executed.
1998 release: v1.9
1999 file: test/e2e/common/node/lifecycle_hook.go
2000- testname: Pod Lifecycle, prestop exec hook
2001 codename: '[sig-node] Container Lifecycle Hook when create a pod with lifecycle
2002 hook should execute prestop exec hook properly [NodeConformance] [Conformance]'
2003 description: When a pre-stop handler is specified in the container lifecycle using
2004 an 'Exec' action, the handler MUST be invoked before the container is terminated.
2005 A server pod is created that will serve http requests, create a second pod with
2006 a container lifecycle specifying a pre-stop that invokes the server pod using
2007 ExecAction to validate that the pre-stop is executed.
2008 release: v1.9
2009 file: test/e2e/common/node/lifecycle_hook.go
2010- testname: Pod Lifecycle, prestop http hook
2011 codename: '[sig-node] Container Lifecycle Hook when create a pod with lifecycle
2012 hook should execute prestop http hook properly [NodeConformance] [Conformance]'
2013 description: When a pre-stop handler is specified in the container lifecycle using
2014 an 'HttpGet' action, the handler MUST be invoked before the container is terminated.
2015 A server pod is created that will serve http requests, create a second pod on
2016 the same node with a container lifecycle specifying a pre-stop that invokes the
2017 server pod to validate that the pre-stop is executed.
2018 release: v1.9
2019 file: test/e2e/common/node/lifecycle_hook.go
2020- testname: Container Runtime, TerminationMessage, from log output of succeeding container
2021 codename: '[sig-node] Container Runtime blackbox test on terminated container should
2022 report termination message as empty when pod succeeds and TerminationMessagePolicy
2023 FallbackToLogsOnError is set [NodeConformance] [Conformance]'
2024 description: Create a pod with a container. The container's output is recorded in
2025 the log and the container exits successfully without an error. When the container
2026 is terminated, the terminationMessage MUST be empty because the container succeeded.
2027 release: v1.15
2028 file: test/e2e/common/node/runtime.go
2029- testname: Container Runtime, TerminationMessage, from file of succeeding container
2030 codename: '[sig-node] Container Runtime blackbox test on terminated container should
2031 report termination message from file when pod succeeds and TerminationMessagePolicy
2032 FallbackToLogsOnError is set [NodeConformance] [Conformance]'
2033 description: Create a pod with a container. The container's output is recorded in
2034 a file and the container exits successfully without an error. When the container
2035 is terminated, the terminationMessage MUST match the content of the file.
2036 release: v1.15
2037 file: test/e2e/common/node/runtime.go
2038- testname: Container Runtime, TerminationMessage, from container's log output of
2039 failing container
2040 codename: '[sig-node] Container Runtime blackbox test on terminated container should
2041 report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError
2042 is set [NodeConformance] [Conformance]'
2043 description: Create a pod with a container. The container's output is recorded in
2044 the log and the container exits with an error. When the container is terminated,
2045 the termination message MUST match the expected output recorded from the container's log.
2046 release: v1.15
2047 file: test/e2e/common/node/runtime.go
2048- testname: Container Runtime, TerminationMessagePath, non-root user and non-default
2049 path
2050 codename: '[sig-node] Container Runtime blackbox test on terminated container should
2051 report termination message if TerminationMessagePath is set as non-root user and
2052 at a non-default path [NodeConformance] [Conformance]'
2053 description: Create a pod with a container that runs as a non-root user with a custom
2054 TerminationMessagePath set. The pod redirects its output to the provided path successfully.
2055 When the container is terminated, the termination message MUST match the expected
2056 output logged in the provided custom path.
2057 release: v1.15
2058 file: test/e2e/common/node/runtime.go
2059- testname: Container Runtime, Restart Policy, Pod Phases
2060 codename: '[sig-node] Container Runtime blackbox test when starting a container
2061 that exits should run with the expected status [NodeConformance] [Conformance]'
2062 description: If the restart policy is set to 'Always', the Pod MUST be restarted
2063 when terminated. If the restart policy is 'OnFailure', the Pod MUST be restarted
2064 only if it is terminated with a non-zero exit code. If the restart policy is 'Never',
2065 the Pod MUST never be restarted. All three test cases MUST verify the restart
2066 counts accordingly.
2067 release: v1.13
2068 file: test/e2e/common/node/runtime.go
2069- testname: Containers, with arguments
2070 codename: '[sig-node] Containers should be able to override the image''s default
2071 arguments (container cmd) [NodeConformance] [Conformance]'
2072 description: The default command from the container image entrypoint MUST be used
2073 when the Pod does not specify the container command, but the arguments from the
2074 Pod spec MUST override the image's default arguments when specified.
2075 release: v1.9
2076 file: test/e2e/common/node/containers.go
2077- testname: Containers, with command
2078 codename: '[sig-node] Containers should be able to override the image''s default
2079 command (container entrypoint) [NodeConformance] [Conformance]'
2080 description: Default command from the container image entrypoint MUST NOT be used
2081 when Pod specifies the container command. Command from Pod spec MUST override
2082 the command in the image.
2083 release: v1.9
2084 file: test/e2e/common/node/containers.go
2085- testname: Containers, with command and arguments
2086 codename: '[sig-node] Containers should be able to override the image''s default
2087 command and arguments [NodeConformance] [Conformance]'
2088 description: Default command and arguments from the container image entrypoint MUST
2089 NOT be used when Pod specifies the container command and arguments. Command and
2090 arguments from Pod spec MUST override the command and arguments in the image.
2091 release: v1.9
2092 file: test/e2e/common/node/containers.go
2093- testname: Containers, without command and arguments
2094 codename: '[sig-node] Containers should use the image defaults if command and args
2095 are blank [NodeConformance] [Conformance]'
2096 description: Default command and arguments from the container image entrypoint MUST
2097 be used when the Pod does not specify the container command and arguments.
2098 release: v1.9
2099 file: test/e2e/common/node/containers.go
2100- testname: DownwardAPI, environment for CPU and memory limits and requests
2101 codename: '[sig-node] Downward API should provide container''s limits.cpu/memory
2102 and requests.cpu/memory as env vars [NodeConformance] [Conformance]'
2103 description: Downward API MUST expose CPU and memory limits and requests set through
2104 environment variables at runtime in the container.
2105 release: v1.9
2106 file: test/e2e/common/node/downwardapi.go
2107- testname: DownwardAPI, environment for default CPU and memory limits and requests
2108 codename: '[sig-node] Downward API should provide default limits.cpu/memory from
2109 node allocatable [NodeConformance] [Conformance]'
2110 description: Downward API MUST expose default CPU and memory limits, derived from
2111 node allocatable, through environment variables at runtime in the container.
2112 release: v1.9
2113 file: test/e2e/common/node/downwardapi.go
2114- testname: DownwardAPI, environment for host ip
2115 codename: '[sig-node] Downward API should provide host IP as an env var [NodeConformance]
2116 [Conformance]'
2117 description: Downward API MUST expose Pod and Container fields as environment variables.
2118 The host IP, specified as an environment variable in the Pod spec, MUST be visible
2119 at runtime in the container.
2120 release: v1.9
2121 file: test/e2e/common/node/downwardapi.go
2122- testname: DownwardAPI, environment for Pod UID
2123 codename: '[sig-node] Downward API should provide pod UID as env vars [NodeConformance]
2124 [Conformance]'
2125 description: Downward API MUST expose Pod UID set through environment variables
2126 at runtime in the container.
2127 release: v1.9
2128 file: test/e2e/common/node/downwardapi.go
2129- testname: DownwardAPI, environment for name, namespace and ip
2130 codename: '[sig-node] Downward API should provide pod name, namespace and IP address
2131 as env vars [NodeConformance] [Conformance]'
2132 description: Downward API MUST expose Pod and Container fields as environment variables.
2133 The Pod name, namespace and IP, specified as environment variables in the Pod spec,
2134 MUST be visible at runtime in the container.
2135 release: v1.9
2136 file: test/e2e/common/node/downwardapi.go
2137- testname: Ephemeral Container, update ephemeral containers
2138 codename: '[sig-node] Ephemeral Containers [NodeConformance] should update the ephemeral
2139 containers in an existing pod [Conformance]'
2140 description: Adding an ephemeral container to pod.spec MUST result in the container
2141 running. There MUST now be only one ephemeral container found. Updating the pod
2142 with another ephemeral container MUST succeed. There MUST now be two ephemeral
2143 containers found.
2144 release: v1.28
2145 file: test/e2e/common/node/ephemeral_containers.go
2146- testname: Ephemeral Container Creation
2147 codename: '[sig-node] Ephemeral Containers [NodeConformance] will start an ephemeral
2148 container in an existing pod [Conformance]'
2149 description: Adding an ephemeral container to pod.spec MUST result in the container
2150 running.
2151 release: v1.25
2152 file: test/e2e/common/node/ephemeral_containers.go
2153- testname: init-container-starts-app-restartalways-pod
2154 codename: '[sig-node] InitContainer [NodeConformance] should invoke init containers
2155 on a RestartAlways pod [Conformance]'
2156 description: Ensure that all InitContainers are started, all containers in the pod
2157 are started, and at least one container is still running or in the process of being
2158 restarted when the Pod has a restart policy of RestartAlways.
2159 release: v1.12
2160 file: test/e2e/common/node/init_container.go
2161- testname: init-container-starts-app-restartnever-pod
2162 codename: '[sig-node] InitContainer [NodeConformance] should invoke init containers
2163 on a RestartNever pod [Conformance]'
2164 description: Ensure that all InitContainers are started, all containers in the pod
2165 are voluntarily terminated with exit status 0, and the system does not restart
2166 any of these containers when the Pod has a restart policy of RestartNever.
2167 release: v1.12
2168 file: test/e2e/common/node/init_container.go
2169- testname: init-container-fails-stops-app-restartnever-pod
2170 codename: '[sig-node] InitContainer [NodeConformance] should not start app containers
2171 and fail the pod if init containers fail on a RestartNever pod [Conformance]'
2172 description: Ensure that the app container is not started when at least one InitContainer
2173 fails to start and the Pod has a restart policy of RestartNever.
2174 release: v1.12
2175 file: test/e2e/common/node/init_container.go
2176- testname: init-container-fails-stops-app-restartalways-pod
2177 codename: '[sig-node] InitContainer [NodeConformance] should not start app containers
2178 if init containers fail on a RestartAlways pod [Conformance]'
2179 description: Ensure that the app container is not started when all InitContainers
2180 fail to start, the Pod is restarted a few times, and the Pod has a restart policy
2181 of RestartAlways.
2182 release: v1.12
2183 file: test/e2e/common/node/init_container.go
2184- testname: Kubelet, log output, default
2185 codename: '[sig-node] Kubelet when scheduling a busybox command in a pod should
2186 print the output to logs [NodeConformance] [Conformance]'
2187 description: By default the stdout and stderr from the process being executed in
2188 a pod MUST be sent to the pod's logs.
2189 release: v1.13
2190 file: test/e2e/common/node/kubelet.go
2191- testname: Kubelet, failed pod, delete
2192 codename: '[sig-node] Kubelet when scheduling a busybox command that always fails
2193 in a pod should be possible to delete [NodeConformance] [Conformance]'
2194 description: Create a Pod that ends up in a terminated state. This terminated Pod
2195 MUST be deletable.
2196 release: v1.13
2197 file: test/e2e/common/node/kubelet.go
2198- testname: Kubelet, failed pod, terminated reason
2199 codename: '[sig-node] Kubelet when scheduling a busybox command that always fails
2200 in a pod should have an terminated reason [NodeConformance] [Conformance]'
2201 description: Create a Pod in a terminated state. The Pod MUST have only one container.
2202 The container MUST be in a terminated state and MUST have a terminated reason.
2203 release: v1.13
2204 file: test/e2e/common/node/kubelet.go
2205- testname: Kubelet, pod with read only root file system
2206 codename: '[sig-node] Kubelet when scheduling a read only busybox container should
2207 not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]'
2208 description: Create a Pod with a security context that has ReadOnlyRootFileSystem
2209 set to true. The Pod then tries to write a file to the root filesystem; the write
2210 operation MUST fail as expected. This test is marked LinuxOnly since
2211 Windows does not support creating containers with read-only access.
2212 release: v1.13
2213 file: test/e2e/common/node/kubelet.go
2214- testname: Kubelet, hostAliases
2215 codename: '[sig-node] Kubelet when scheduling an agnhost Pod with hostAliases should
2216 write entries to /etc/hosts [NodeConformance] [Conformance]'
2217 description: Create a Pod with hostAliases and a container with a command that outputs
2218 the /etc/hosts entries. The Pod's logs MUST contain entries matching the specified
2219 hostAliases in the /etc/hosts output.
2220 release: v1.13
2221 file: test/e2e/common/node/kubelet.go
2222- testname: Kubelet, managed etc hosts
2223 codename: '[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts
2224 file [LinuxOnly] [NodeConformance] [Conformance]'
2225 description: Create a Pod with containers with hostNetwork set to false, where one
2226 of the containers mounts the /etc/hosts file from the host. Create a second Pod
2227 with hostNetwork set to true. 1. The Pod with hostNetwork=false MUST have the /etc/hosts
2228 files of its containers managed by the Kubelet. 2. For the Pod with hostNetwork=false
2229 whose container mounts the /etc/hosts file from the host, that /etc/hosts file MUST
2230 NOT be managed by the Kubelet. 3. For the Pod with hostNetwork=true, the /etc/hosts
2231 file MUST NOT be managed by the Kubelet. This test is marked LinuxOnly since Windows
2232 cannot mount individual files in Containers.
2233 release: v1.9
2234 file: test/e2e/common/node/kubelet_etc_hosts.go
2235- testname: lease API should be available
2236 codename: '[sig-node] Lease lease API should be available [Conformance]'
2237 description: "Create Lease object, and get it; create and get MUST be successful
2238 and Spec of the read Lease MUST match Spec of original Lease. Update the Lease
2239 and get it; update and get MUST be successful and Spec of the read Lease MUST
2240 match Spec of updated Lease. Patch the Lease and get it; patch and get MUST be
2241 successful and Spec of the read Lease MUST match Spec of patched Lease. Create
2242 a second Lease with labels and list Leases; create and list MUST be successful
2243 and list MUST return both leases. Delete the labeled lease via delete collection;
2244 the delete MUST be successful and MUST delete only the labeled lease. List leases;
2245 list MUST be successful and MUST return just the remaining lease. Delete the lease;
2246 delete MUST be successful. Get the lease; get MUST return a not found error."
2247 release: v1.17
2248 file: test/e2e/common/node/lease.go
2249- testname: Pod Eviction, Toleration limits
2250 codename: '[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with
2251 minTolerationSeconds [Disruptive] [Conformance]'
2252 description: In a multi-pod scenario with tolerationSeconds, the pods MUST be evicted
2253 as per the toleration time limit.
2254 release: v1.16
2255 file: test/e2e/node/taints.go
2256- testname: Taint, Pod Eviction on taint removal
2257 codename: '[sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels
2258 eviction [Disruptive] [Conformance]'
2259 description: The Pod with toleration timeout scheduled on a tainted Node MUST not
2260 be evicted if the taint is removed before toleration time ends.
2261 release: v1.16
2262 file: test/e2e/node/taints.go
2263- testname: PodTemplate, delete a collection
2264 codename: '[sig-node] PodTemplates should delete a collection of pod templates [Conformance]'
2265 description: A set of Pod Templates is created with a label selector which MUST
2266 be found when listed. The set of Pod Templates is deleted and MUST NOT show up
2267 when listed by its label selector.
2268 release: v1.19
2269 file: test/e2e/common/node/podtemplates.go
2270- testname: PodTemplate, replace
2271 codename: '[sig-node] PodTemplates should replace a pod template [Conformance]'
2272 description: Attempt to create a PodTemplate which MUST succeed. Attempt to replace
2273 the PodTemplate to include a new annotation which MUST succeed. The annotation
2274 MUST be found in the new PodTemplate.
2275 release: v1.24
2276 file: test/e2e/common/node/podtemplates.go
2277- testname: PodTemplate lifecycle
2278 codename: '[sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]'
2279 description: Attempt to create a PodTemplate. Patch the created PodTemplate. Fetching
2280 the PodTemplate MUST reflect the changes. Fetching all PodTemplates via a label
2281 selector MUST find the PodTemplate by its static label and updated value.
2282 The PodTemplate MUST be deleted.
2283 release: v1.19
2284 file: test/e2e/common/node/podtemplates.go
2285- testname: Pods, QOS
2286 codename: '[sig-node] Pods Extended Pods Set QOS Class should be set on Pods with
2287 matching resource requests and limits for memory and cpu [Conformance]'
2288 description: Create a Pod with CPU and Memory request and limits. Pod status MUST
2289 have QOSClass set to PodQOSGuaranteed.
2290 release: v1.9
2291 file: test/e2e/node/pods.go
2292- testname: Pods, ActiveDeadlineSeconds
2293 codename: '[sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance]
2294 [Conformance]'
2295 description: Create a Pod with a unique label. Querying for the Pod with the label
2296 as selector MUST be successful. The Pod is updated with ActiveDeadlineSeconds
2297 set on the Pod spec. The Pod MUST terminate when the specified time elapses.
2298 release: v1.9
2299 file: test/e2e/common/node/pods.go
2300- testname: Pods, lifecycle
2301 codename: '[sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]'
2302 description: A Pod is created with a unique label. The Pod MUST be accessible when
2303 queried using the label selector upon creation. Add a watch and check that the Pod
2304 is running. The Pod is then deleted and the pod deletion timestamp is observed.
2305 The watch MUST return the pod deleted event. A query with the original selector
2306 for the Pod MUST return an empty list.
2307 release: v1.9
2308 file: test/e2e/common/node/pods.go
2309- testname: Pods, update
2310 codename: '[sig-node] Pods should be updated [NodeConformance] [Conformance]'
2311 description: Create a Pod with a unique label. Querying for the Pod with the label
2312 as selector MUST be successful. Update the pod to change the value of the label.
2313 Querying for the Pod with the new value of the label MUST be successful.
2314 release: v1.9
2315 file: test/e2e/common/node/pods.go
2316- testname: Pods, service environment variables
2317 codename: '[sig-node] Pods should contain environment variables for services [NodeConformance]
2318 [Conformance]'
2319 description: Create a server Pod listening on port 9376. A Service called fooservice
2320 is created for the server Pod, listening on port 8765 and targeting port 8080. If a
2321 new Pod is created in the cluster, the fooservice environment variables MUST be
2322 available from this new Pod. The newly created Pod MUST have environment
2323 variables such as FOOSERVICE_SERVICE_HOST, FOOSERVICE_SERVICE_PORT, FOOSERVICE_PORT,
2324 FOOSERVICE_PORT_8765_TCP_PORT, FOOSERVICE_PORT_8765_TCP_PROTO, FOOSERVICE_PORT_8765_TCP
2325 and FOOSERVICE_PORT_8765_TCP_ADDR that are populated with proper values.
2326 release: v1.9
2327 file: test/e2e/common/node/pods.go
2328- testname: Pods, delete a collection
2329 codename: '[sig-node] Pods should delete a collection of pods [Conformance]'
2330 description: A set of pods is created with a label selector which MUST be found
2331 when listed. The set of pods is deleted and MUST NOT show up when listed by its
2332 label selector.
2333 release: v1.19
2334 file: test/e2e/common/node/pods.go
2335- testname: Pods, assigned hostip
2336 codename: '[sig-node] Pods should get a host IP [NodeConformance] [Conformance]'
2337 description: Create a Pod. Pod status MUST return successfully and contain a valid
2338 IP address.
2339 release: v1.9
2340 file: test/e2e/common/node/pods.go
2341- testname: Pods, patching status
2342 codename: '[sig-node] Pods should patch a pod status [Conformance]'
2343 description: A pod is created which MUST succeed and be found running. The pod status
2344 when patched MUST succeed. Given the patching of the pod status, the fields MUST
2345 equal the new values.
2346 release: v1.25
2347 file: test/e2e/common/node/pods.go
2348- testname: Pods, completes the lifecycle of a Pod and the PodStatus
2349 codename: '[sig-node] Pods should run through the lifecycle of Pods and PodStatus
2350 [Conformance]'
2351 description: A Pod is created with a static label which MUST succeed. It MUST succeed
2352 when patching the label and the pod data. When checking and replacing the PodStatus
2353 it MUST succeed. It MUST succeed when deleting the Pod.
2354 release: v1.20
2355 file: test/e2e/common/node/pods.go
2356- testname: Pods, remote command execution over websocket
2357 codename: '[sig-node] Pods should support remote command execution over websockets
2358 [NodeConformance] [Conformance]'
2359 description: A Pod is created. A Websocket is created to retrieve exec command output
2360 from this pod. The message retrieved from the Websocket MUST match the expected
2361 exec command output.
2362 release: v1.13
2363 file: test/e2e/common/node/pods.go
2364- testname: Pods, logs from websockets
2365 codename: '[sig-node] Pods should support retrieving logs from the container over
2366 websockets [NodeConformance] [Conformance]'
2367 description: A Pod is created. A Websocket is created to retrieve the log of a container
2368 from this pod. The message retrieved from the Websocket MUST match the container's output.
2369 release: v1.13
2370 file: test/e2e/common/node/pods.go
2371- testname: Pods, prestop hook
2372 codename: '[sig-node] PreStop should call prestop when killing a pod [Conformance]'
2373 description: Create a server pod with a REST endpoint '/write' that changes the state.Received
2374 field. Create a Pod with a pre-stop handler that posts to the /write endpoint on
2375 the server Pod. Verify that the Pod with the pre-stop hook is running. Delete the
2376 Pod with the pre-stop hook. Before the Pod is deleted, the pre-stop handler MUST be
2377 called when configured. Verify that the Pod is deleted and that the call to the prestop
2378 hook is verified by checking the status received on the server Pod.
2379 release: v1.9
2380 file: test/e2e/node/pre_stop.go
2381- testname: Pod liveness probe, using http endpoint, failure
2382 codename: '[sig-node] Probing container should *not* be restarted with a /healthz
2383 http liveness probe [NodeConformance] [Conformance]'
2384 description: A Pod is created with liveness probe on http endpoint '/'. Liveness
2385 probe on this endpoint will not fail. When liveness probe does not fail then the
2386 restart count MUST remain zero.
2387 release: v1.9
2388 file: test/e2e/common/node/container_probe.go
2389- testname: Pod liveness probe, using grpc call, success
2390 codename: '[sig-node] Probing container should *not* be restarted with a GRPC liveness
2391 probe [NodeConformance] [Conformance]'
2392 description: A Pod is created with liveness probe on grpc service. Liveness probe
2393 on this endpoint will not fail. When liveness probe does not fail then the restart
2394 count MUST remain zero.
2395 release: v1.23
2396 file: test/e2e/common/node/container_probe.go
2397- testname: Pod liveness probe, using local file, no restart
2398 codename: '[sig-node] Probing container should *not* be restarted with a exec "cat
2399 /tmp/health" liveness probe [NodeConformance] [Conformance]'
2400 description: Pod is created with a liveness probe that uses an 'exec' command to
2401 cat the /tmp/health file. The liveness probe MUST not fail to check health and the
2402 restart count MUST remain 0.
2403 release: v1.9
2404 file: test/e2e/common/node/container_probe.go
2405- testname: Pod liveness probe, using tcp socket, no restart
2406 codename: '[sig-node] Probing container should *not* be restarted with a tcp:8080
2407 liveness probe [NodeConformance] [Conformance]'
2408 description: A Pod is created with liveness probe on tcp socket 8080. The http handler
2409 on port 8080 will return http errors after 10 seconds, but the socket will remain
2410 open. The liveness probe MUST not fail to check health and the restart count MUST
2411 remain 0.
2412 release: v1.18
2413 file: test/e2e/common/node/container_probe.go
2414- testname: Pod liveness probe, using http endpoint, restart
2415 codename: '[sig-node] Probing container should be restarted with a /healthz http
2416 liveness probe [NodeConformance] [Conformance]'
2417 description: A Pod is created with a liveness probe on the http endpoint /healthz.
2418 The http handler on /healthz will return an http error 10 seconds after the
2419 Pod is started. This MUST result in liveness check failure. The Pod MUST then be
2420 killed and restarted, incrementing the restart count to 1.
2421 release: v1.9
2422 file: test/e2e/common/node/container_probe.go
2423- testname: Pod liveness probe, using grpc call, failure
2424 codename: '[sig-node] Probing container should be restarted with a GRPC liveness
2425 probe [NodeConformance] [Conformance]'
2426 description: A Pod is created with a liveness probe on a grpc service. The liveness
2427 probe on this endpoint will fail because of a wrong probe port. When the liveness
2428 probe fails, the restart count MUST be incremented by 1.
2429 release: v1.23
2430 file: test/e2e/common/node/container_probe.go
2431- testname: Pod liveness probe, using local file, restart
2432 codename: '[sig-node] Probing container should be restarted with a exec "cat /tmp/health"
2433 liveness probe [NodeConformance] [Conformance]'
2434 description: Create a Pod with a liveness probe that uses an ExecAction handler to
2435 cat the /tmp/health file. The container deletes the file /tmp/health after 10 seconds,
2436 triggering the liveness probe to fail. The Pod MUST then be killed and restarted,
2437 incrementing the restart count to 1.
2438 release: v1.9
2439 file: test/e2e/common/node/container_probe.go
2440- testname: Pod liveness probe, using http endpoint, multiple restarts (slow)
2441 codename: '[sig-node] Probing container should have monotonically increasing restart
2442 count [NodeConformance] [Conformance]'
2443 description: A Pod is created with a liveness probe on the http endpoint /healthz.
2444 The http handler on /healthz will return an http error 10 seconds after the
2445 Pod is started. This MUST result in liveness check failure. The Pod MUST then be
2446 killed and restarted, incrementing the restart count to 1. The liveness probe MUST
2447 fail again after the restart once the http handler for the /healthz endpoint on the
2448 Pod returns an http error 10 seconds after the start. Restart counts MUST increment
2449 every time the health check fails, measured up to 5 restarts.
2450 release: v1.9
2451 file: test/e2e/common/node/container_probe.go
2452- testname: Pod readiness probe, with initial delay
2453 codename: '[sig-node] Probing container with readiness probe should not be ready
2454 before initial delay and never restart [NodeConformance] [Conformance]'
2455 description: Create a Pod that is configured with an initial delay set on the readiness
2456 probe. Check the Pod start time against the initial delay. The Pod MUST
2457 be ready only after the specified initial delay.
2458 release: v1.9
2459 file: test/e2e/common/node/container_probe.go
2460- testname: Pod readiness probe, failure
2461 codename: '[sig-node] Probing container with readiness probe that fails should never
2462 be ready and never restart [NodeConformance] [Conformance]'
2463 description: Create a Pod with a readiness probe that fails consistently. When this
2464 Pod is created, then the Pod MUST never be ready, never be running and restart
2465 count MUST be zero.
2466 release: v1.9
2467 file: test/e2e/common/node/container_probe.go
2468- testname: Pod with the deleted RuntimeClass is rejected.
2469 codename: '[sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass
2470 [NodeConformance] [Conformance]'
2471 description: A Pod requesting a deleted RuntimeClass MUST be rejected.
2472 release: v1.20
2473 file: test/e2e/common/node/runtimeclass.go
2474- testname: Pod with the non-existing RuntimeClass is rejected.
2475 codename: '[sig-node] RuntimeClass should reject a Pod requesting a non-existent
2476 RuntimeClass [NodeConformance] [Conformance]'
2477 description: A Pod requesting a non-existent RuntimeClass MUST be rejected.
2478 release: v1.20
2479 file: test/e2e/common/node/runtimeclass.go
2480- testname: RuntimeClass Overhead field must be respected.
2481 codename: '[sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass
2482 and initialize its Overhead [NodeConformance] [Conformance]'
2483 description: The Pod requesting the existing RuntimeClass MUST be scheduled. This
2484 test doesn't validate that the Pod will actually start because this functionality
2485 depends on container runtime and preconfigured handler. Runtime-specific functionality
2486 is not being tested here.
2487 release: v1.24
2488 file: test/e2e/common/node/runtimeclass.go
2489- testname: Can schedule a pod requesting existing RuntimeClass.
2490 codename: '[sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass
2491 without PodOverhead [NodeConformance] [Conformance]'
2492 description: The Pod requesting the existing RuntimeClass MUST be scheduled. This
2493 test doesn't validate that the Pod will actually start because this functionality
2494 depends on container runtime and preconfigured handler. Runtime-specific functionality
2495 is not being tested here.
2496 release: v1.20
2497 file: test/e2e/common/node/runtimeclass.go
2498- testname: RuntimeClass API
2499 codename: '[sig-node] RuntimeClass should support RuntimeClasses API operations
2500 [Conformance]'
2501 description: ' The node.k8s.io API group MUST exist in the /apis discovery document.
2502 The node.k8s.io/v1 API group/version MUST exist in the /apis/node.k8s.io discovery
2503 document. The runtimeclasses resource MUST exist in the /apis/node.k8s.io/v1 discovery
2504 document. The runtimeclasses resource must support create, get, list, watch, update,
2505 patch, delete, and deletecollection.'
2506 release: v1.20
2507 file: test/e2e/common/node/runtimeclass.go
2508- testname: Secrets, pod environment field
2509 codename: '[sig-node] Secrets should be consumable from pods in env vars [NodeConformance]
2510 [Conformance]'
2511 description: Create a secret. Create a Pod with a Container that declares an environment
2512 variable referencing the created secret to extract a key value from it.
2513 The Pod MUST have an environment variable that contains the proper value for the
2514 key from the secret.
2515 release: v1.9
2516 file: test/e2e/common/node/secrets.go
2517- testname: Secrets, pod environment from source
2518 codename: '[sig-node] Secrets should be consumable via the environment [NodeConformance]
2519 [Conformance]'
2520 description: Create a secret. Create a Pod with a Container that declares an environment
2521 variable using 'EnvFrom', referencing the created secret to extract a key
2522 value from it. The Pod MUST have an environment variable that contains the proper
2523 value for the key from the secret.
2524 release: v1.9
2525 file: test/e2e/common/node/secrets.go
2526- testname: Secrets, with empty-key
2527 codename: '[sig-node] Secrets should fail to create secret due to empty secret key
2528 [Conformance]'
2529 description: Attempt to create a Secret with an empty key. The creation MUST fail.
2530 release: v1.15
2531 file: test/e2e/common/node/secrets.go
2532- testname: Secret patching
2533 codename: '[sig-node] Secrets should patch a secret [Conformance]'
2534 description: A Secret is created. Listing all Secrets MUST return an empty list.
2535 Given the patching and fetching of the Secret, the fields MUST equal the new values.
2536 The Secret is deleted by its static Label. Finally, Secrets are listed; the list
2537 MUST NOT include the originally created Secret.
2538 release: v1.18
2539 file: test/e2e/common/node/secrets.go
2540- testname: Security Context, runAsUser=65534
2541 codename: '[sig-node] Security Context When creating a container with runAsUser
2542 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]'
2543 description: 'Container is created with the runAsUser option by passing uid 65534 to
2544 run as an unprivileged user. Pod MUST be in Succeeded phase. [LinuxOnly]: This test
2545 is marked as LinuxOnly since Windows does not support running as UID / GID.'
2546 release: v1.15
2547 file: test/e2e/common/node/security_context.go
- testname: Security Context, privileged=false.
  codename: '[sig-node] Security Context When creating a pod with privileged should
    run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]'
  description: 'Create a container to run in unprivileged mode by setting the pod''s
    SecurityContext Privileged option to false. Pod MUST be in Succeeded phase. [LinuxOnly]:
    This test is marked as LinuxOnly since it runs a Linux-specific command.'
  release: v1.15
  file: test/e2e/common/node/security_context.go
- testname: Security Context, readOnlyRootFilesystem=false.
  codename: '[sig-node] Security Context When creating a pod with readOnlyRootFilesystem
    should run the container with writable rootfs when readOnlyRootFilesystem=false
    [NodeConformance] [Conformance]'
  description: Container is configured with readOnlyRootFilesystem set to false. Write
    operations MUST be allowed and the Pod MUST be in Succeeded state.
  release: v1.15
  file: test/e2e/common/node/security_context.go
- testname: Security Context, test RunAsGroup at container level
  codename: '[sig-node] Security Context should support container.SecurityContext.RunAsUser
    And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]'
  description: 'Container is created with the runAsUser and runAsGroup options by
    passing uid 1001 and gid 2002 at the container level. Pod MUST be in Succeeded
    phase. [LinuxOnly]: This test is marked as LinuxOnly since Windows does not support
    running as UID / GID.'
  release: v1.21
  file: test/e2e/node/security_context.go
- testname: Security Context, test RunAsGroup at pod level
  codename: '[sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser
    And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]'
  description: 'Container is created with the runAsUser and runAsGroup options by
    passing uid 1001 and gid 2002 at the pod level. Pod MUST be in Succeeded phase.
    [LinuxOnly]: This test is marked as LinuxOnly since Windows does not support running
    as UID / GID.'
  release: v1.21
  file: test/e2e/node/security_context.go
- testname: Security Context, allowPrivilegeEscalation=false.
  codename: '[sig-node] Security Context when creating containers with AllowPrivilegeEscalation
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance]
    [Conformance]'
  description: 'Configuring allowPrivilegeEscalation to false MUST NOT allow the privilege
    escalation operation. A container is configured with allowPrivilegeEscalation=false
    and a given uid (1000) which is not 0. When the container is run, the container''s
    output MUST match the expected output, verifying that the container ran with the
    given uid, i.e. uid=1000. [LinuxOnly]: This test is marked LinuxOnly since Windows
    does not support running as UID / GID, or privilege escalation.'
  release: v1.15
  file: test/e2e/common/node/security_context.go
- testname: Sysctls, reject invalid sysctls
  codename: '[sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid
    sysctls [MinimumKubeletVersion:1.21] [Conformance]'
  description: 'Pod is created with one valid and two invalid sysctls. The Pod should
    not apply the invalid sysctls. [LinuxOnly]: This test is marked as LinuxOnly since
    Windows does not support sysctls.'
  release: v1.21
  file: test/e2e/common/node/sysctl.go
- testname: Sysctl, test sysctls
  codename: '[sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls
    [MinimumKubeletVersion:1.21] [Environment:NotInUserNS] [Conformance]'
  description: 'Pod is created with the kernel.shm_rmid_forced sysctl. kernel.shm_rmid_forced
    MUST be set to 1. [LinuxOnly]: This test is marked as LinuxOnly since Windows
    does not support sysctls. [Environment:NotInUserNS]: The test fails in UserNS
    (as expected): `open /proc/sys/kernel/shm_rmid_forced: permission denied`'
  release: v1.21
  file: test/e2e/common/node/sysctl.go
- testname: Environment variables, expansion
  codename: '[sig-node] Variable Expansion should allow composing env vars into new
    env vars [NodeConformance] [Conformance]'
  description: Create a Pod with environment variables. Environment variables defined
    using previously defined environment variables MUST expand to proper values.
  release: v1.9
  file: test/e2e/common/node/expansion.go
- testname: Environment variables, command argument expansion
  codename: '[sig-node] Variable Expansion should allow substituting values in a container''s
    args [NodeConformance] [Conformance]'
  description: Create a Pod with environment variables and container command arguments
    using them. Container command arguments using the defined environment variables
    MUST expand to proper values.
  release: v1.9
  file: test/e2e/common/node/expansion.go
- testname: Environment variables, command expansion
  codename: '[sig-node] Variable Expansion should allow substituting values in a container''s
    command [NodeConformance] [Conformance]'
  description: Create a Pod with environment variables and container command using
    them. Container command using the defined environment variables MUST expand to
    proper values.
  release: v1.9
  file: test/e2e/common/node/expansion.go
- testname: VolumeSubpathEnvExpansion, subpath expansion
  codename: '[sig-node] Variable Expansion should allow substituting values in a volume
    subpath [Conformance]'
  description: Make sure a container's subpath can be set using an expansion of environment
    variables.
  release: v1.19
  file: test/e2e/common/node/expansion.go
- testname: VolumeSubpathEnvExpansion, subpath with absolute path
  codename: '[sig-node] Variable Expansion should fail substituting values in a volume
    subpath with absolute path [Slow] [Conformance]'
  description: Make sure a container's subpath cannot be set using an expansion of
    environment variables when an absolute path is supplied.
  release: v1.19
  file: test/e2e/common/node/expansion.go
- testname: VolumeSubpathEnvExpansion, subpath with backticks
  codename: '[sig-node] Variable Expansion should fail substituting values in a volume
    subpath with backticks [Slow] [Conformance]'
  description: Make sure a container's subpath cannot be set using an expansion of
    environment variables when backticks are supplied.
  release: v1.19
  file: test/e2e/common/node/expansion.go
- testname: VolumeSubpathEnvExpansion, subpath test writes
  codename: '[sig-node] Variable Expansion should succeed in writing subpaths in container
    [Slow] [Conformance]'
  description: 'Verify that a subpath expansion can be used to write files into subpaths.
    1. A valid subpathexpr starts a container running. 2. Test for valid subpath writes.
    3. Successful expansion of the subpathexpr is not required for volume cleanup.'
  release: v1.19
  file: test/e2e/common/node/expansion.go
- testname: VolumeSubpathEnvExpansion, subpath ready from failed state
  codename: '[sig-node] Variable Expansion should verify that a failing subpath expansion
    can be modified during the lifecycle of a container [Slow] [Conformance]'
  description: Verify that a failing subpath expansion can be modified during the
    lifecycle of a container.
  release: v1.19
  file: test/e2e/common/node/expansion.go
- testname: LimitRange, resources
  codename: '[sig-scheduling] LimitRange should create a LimitRange with defaults
    and ensure pod has those defaults applied. [Conformance]'
  description: Create a LimitRange and verify its creation. Update the LimitRange
    and validate the update. Create Pods with resources and validate that the LimitRange
    defaults are applied to the Pod resources.
  release: v1.18
  file: test/e2e/scheduling/limit_range.go
- testname: LimitRange, list, patch and delete a LimitRange by collection
  codename: '[sig-scheduling] LimitRange should list, patch and delete a LimitRange
    by collection [Conformance]'
  description: When two limitRanges are created in different namespaces, both MUST
    succeed. Listing limitRanges across all namespaces with a labelSelector MUST find
    both limitRanges. When patching the first limitRange it MUST succeed and the fields
    MUST equal the new values. When deleting the limitRange by collection with a labelSelector
    it MUST delete only one limitRange.
  release: v1.26
  file: test/e2e/scheduling/limit_range.go
- testname: Scheduler, resource limits
  codename: '[sig-scheduling] SchedulerPredicates [Serial] validates resource limits
    of pods that are allowed to run [Conformance]'
  description: Scheduling Pods MUST fail if the resource requests exceed the machine
    capacity.
  release: v1.9
  file: test/e2e/scheduling/predicates.go
- testname: Scheduler, node selector matching
  codename: '[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector
    is respected if matching [Conformance]'
  description: 'Create a label on the node {k: v}. Then create a Pod with a NodeSelector
    set to {k: v}. Check to see if the Pod is scheduled. When the NodeSelector matches
    then Pod MUST be scheduled on that node.'
  release: v1.9
  file: test/e2e/scheduling/predicates.go
- testname: Scheduler, node selector not matching
  codename: '[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector
    is respected if not matching [Conformance]'
  description: Create a Pod with a NodeSelector set to a value that does not match
    a node in the cluster. Since there are no nodes matching the criteria the Pod
    MUST NOT be scheduled.
  release: v1.9
  file: test/e2e/scheduling/predicates.go
- testname: Scheduling, HostPort and Protocol match, HostIPs different but one is
    default HostIP (0.0.0.0)
  codename: '[sig-scheduling] SchedulerPredicates [Serial] validates that there exists
    conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP
    [Conformance]'
  description: Pods with the same HostPort and Protocol, but different HostIPs, MUST
    NOT schedule to the same node if one of those IPs is the default HostIP of 0.0.0.0,
    which represents all IPs on the host.
  release: v1.16
  file: test/e2e/scheduling/predicates.go
- testname: Pod preemption verification
  codename: '[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath
    runs ReplicaSets to verify preemption running path [Conformance]'
  description: Four levels of Pods in ReplicaSets with different levels of Priority,
    restricted by given CPU limits, MUST launch. Priority 1-3 Pods MUST spawn first,
    followed by the Priority 4 Pod. The ReplicaSets MUST contain the expected number
    of Replicas.
  release: v1.19
  file: test/e2e/scheduling/preemption.go
- testname: Scheduler, Verify PriorityClass endpoints
  codename: '[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]'
  description: Verify that PriorityClass endpoints can be listed. When any mutable
    field is either patched or updated it MUST succeed. When any immutable field is
    either patched or updated it MUST fail.
  release: v1.20
  file: test/e2e/scheduling/preemption.go
- testname: Scheduler, Basic Preemption
  codename: '[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption
    works [Conformance]'
  description: When a higher priority pod is created and no node with enough resources
    is found, the scheduler MUST preempt a lower priority pod and schedule the high
    priority pod.
  release: v1.19
  file: test/e2e/scheduling/preemption.go
- testname: Scheduler, Preemption for critical pod
  codename: '[sig-scheduling] SchedulerPreemption [Serial] validates lower priority
    pod preemption by critical pod [Conformance]'
  description: When a critical pod is created and no node with enough resources is
    found, the scheduler MUST preempt a lower priority pod to schedule the critical
    pod.
  release: v1.19
  file: test/e2e/scheduling/preemption.go
- testname: CSIDriver, lifecycle
  codename: '[sig-storage] CSIInlineVolumes should run through the lifecycle of a
    CSIDriver [Conformance]'
  description: Creating two CSIDrivers MUST succeed. Patching a CSIDriver MUST succeed
    with its new label found. Updating a CSIDriver MUST succeed with its new label
    found. Two CSIDrivers MUST be found when listed. Deleting the first CSIDriver
    MUST succeed. Deleting the second CSIDriver via deleteCollection MUST succeed.
  release: v1.28
  file: test/e2e/storage/csi_inline.go
- testname: CSIInlineVolumes should support Pods with inline volumes
  codename: '[sig-storage] CSIInlineVolumes should support CSIVolumeSource in Pod
    API [Conformance]'
  description: Pod resources with CSIVolumeSource should support create, get, list,
    patch, and delete operations.
  release: v1.26
  file: test/e2e/storage/csi_inline.go
- testname: CSIStorageCapacity API
  codename: '[sig-storage] CSIStorageCapacity should support CSIStorageCapacities
    API operations [Conformance]'
  description: ' The storage.k8s.io API group MUST exist in the /apis discovery document.
    The storage.k8s.io/v1 API group/version MUST exist in the /apis/storage.k8s.io
    discovery document. The csistoragecapacities resource MUST exist in the /apis/storage.k8s.io/v1
    discovery document. The csistoragecapacities resource must support create, get,
    list, watch, update, patch, delete, and deletecollection.'
  release: v1.24
  file: test/e2e/storage/csistoragecapacity.go
- testname: ConfigMap Volume, text data, binary data
  codename: '[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance]
    [Conformance]'
  description: The ConfigMap that is created with text data and binary data MUST be
    accessible to read from the newly created Pod using the volume mount that is mapped
    to a custom path in the Pod. The ConfigMap's text data and binary data MUST be
    verified by reading the content from the mounted files in the Pod.
  release: v1.12
  file: test/e2e/common/storage/configmap_volume.go
- testname: ConfigMap Volume, create, update and delete
  codename: '[sig-storage] ConfigMap optional updates should be reflected in volume
    [NodeConformance] [Conformance]'
  description: The ConfigMap that is created MUST be accessible to read from the newly
    created Pod using the volume mount that is mapped to a custom path in the Pod.
    When the ConfigMap is updated, the change to the ConfigMap MUST be verified by
    reading the content from the mounted file in the Pod. Also, when an item (file)
    is deleted from the ConfigMap, reading that item (file) MUST result in an error.
  release: v1.9
  file: test/e2e/common/storage/configmap_volume.go
- testname: ConfigMap Volume, without mapping
  codename: '[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance]
    [Conformance]'
  description: Create a ConfigMap, create a Pod that mounts a volume and populates
    the volume with data stored in the ConfigMap. The ConfigMap that is created MUST
    be accessible to read from the newly created Pod using the volume mount. The data
    content of the file MUST be readable and verified and file modes MUST default
    to 0644.
  release: v1.9
  file: test/e2e/common/storage/configmap_volume.go
- testname: ConfigMap Volume, without mapping, non-root user
  codename: '[sig-storage] ConfigMap should be consumable from pods in volume as non-root
    [NodeConformance] [Conformance]'
  description: Create a ConfigMap, create a Pod that mounts a volume and populates
    the volume with data stored in the ConfigMap. Pod is run as a non-root user with
    uid=1000. The ConfigMap that is created MUST be accessible to read from the newly
    created Pod using the volume mount. The file on the volume MUST have file mode
    set to the default value of 0644.
  release: v1.9
  file: test/e2e/common/storage/configmap_volume.go
- testname: ConfigMap Volume, without mapping, volume mode set
  codename: '[sig-storage] ConfigMap should be consumable from pods in volume with
    defaultMode set [LinuxOnly] [NodeConformance] [Conformance]'
  description: Create a ConfigMap, create a Pod that mounts a volume and populates
    the volume with data stored in the ConfigMap. File mode is changed to a custom
    value of '0400'. The ConfigMap that is created MUST be accessible to read from
    the newly created Pod using the volume mount. The data content of the file MUST
    be readable and verified and file modes MUST be set to the custom value of '0400'.
    This test is marked LinuxOnly since Windows does not support setting specific
    file permissions.
  release: v1.9
  file: test/e2e/common/storage/configmap_volume.go
- testname: ConfigMap Volume, with mapping
  codename: '[sig-storage] ConfigMap should be consumable from pods in volume with
    mappings [NodeConformance] [Conformance]'
  description: Create a ConfigMap, create a Pod that mounts a volume and populates
    the volume with data stored in the ConfigMap. Files are mapped to a path in the
    volume. The ConfigMap that is created MUST be accessible to read from the newly
    created Pod using the volume mount. The data content of the file MUST be readable
    and verified and file modes MUST default to 0644.
  release: v1.9
  file: test/e2e/common/storage/configmap_volume.go
- testname: ConfigMap Volume, with mapping, volume mode set
  codename: '[sig-storage] ConfigMap should be consumable from pods in volume with
    mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]'
  description: Create a ConfigMap, create a Pod that mounts a volume and populates
    the volume with data stored in the ConfigMap. Files are mapped to a path in the
    volume. File mode is changed to a custom value of '0400'. The ConfigMap that
    is created MUST be accessible to read from the newly created Pod using the volume
    mount. The data content of the file MUST be readable and verified and file modes
    MUST be set to the custom value of '0400'. This test is marked LinuxOnly since
    Windows does not support setting specific file permissions.
  release: v1.9
  file: test/e2e/common/storage/configmap_volume.go
- testname: ConfigMap Volume, with mapping, non-root user
  codename: '[sig-storage] ConfigMap should be consumable from pods in volume with
    mappings as non-root [NodeConformance] [Conformance]'
  description: Create a ConfigMap, create a Pod that mounts a volume and populates
    the volume with data stored in the ConfigMap. Files are mapped to a path in the
    volume. Pod is run as a non-root user with uid=1000. The ConfigMap that is created
    MUST be accessible to read from the newly created Pod using the volume mount.
    The file on the volume MUST have file mode set to the default value of 0644.
  release: v1.9
  file: test/e2e/common/storage/configmap_volume.go
- testname: ConfigMap Volume, multiple volume maps
  codename: '[sig-storage] ConfigMap should be consumable in multiple volumes in the
    same pod [NodeConformance] [Conformance]'
  description: The ConfigMap that is created MUST be accessible to read from the newly
    created Pod using the volume mount that is mapped to multiple paths in the Pod.
    The content MUST be accessible from all the mapped volume mounts.
  release: v1.9
  file: test/e2e/common/storage/configmap_volume.go
- testname: ConfigMap Volume, immutability
  codename: '[sig-storage] ConfigMap should be immutable if `immutable` field is set
    [Conformance]'
  description: Create a ConfigMap. Update its data field; the update MUST succeed.
    Mark the ConfigMap as immutable; the update MUST succeed. Try to update its data;
    the update MUST fail. Try to mark the ConfigMap back as not immutable; the update
    MUST fail. Try to update the ConfigMap's metadata (labels); the update MUST succeed.
    Try to delete the ConfigMap; the deletion MUST succeed.
  release: v1.21
  file: test/e2e/common/storage/configmap_volume.go
- testname: ConfigMap Volume, update
  codename: '[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance]
    [Conformance]'
  description: The ConfigMap that is created MUST be accessible to read from the newly
    created Pod using the volume mount that is mapped to a custom path in the Pod.
    When the ConfigMap is updated, the change to the ConfigMap MUST be verified by
    reading the content from the mounted file in the Pod.
  release: v1.9
  file: test/e2e/common/storage/configmap_volume.go
- testname: DownwardAPI volume, CPU limits
  codename: '[sig-storage] Downward API volume should provide container''s cpu limit
    [NodeConformance] [Conformance]'
  description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
    contains an item for the CPU limit. The container runtime MUST be able to access
    the CPU limit from the specified path on the mounted volume.
  release: v1.9
  file: test/e2e/common/storage/downwardapi_volume.go
- testname: DownwardAPI volume, CPU request
  codename: '[sig-storage] Downward API volume should provide container''s cpu request
    [NodeConformance] [Conformance]'
  description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
    contains an item for the CPU request. The container runtime MUST be able to access
    the CPU request from the specified path on the mounted volume.
  release: v1.9
  file: test/e2e/common/storage/downwardapi_volume.go
- testname: DownwardAPI volume, memory limits
  codename: '[sig-storage] Downward API volume should provide container''s memory
    limit [NodeConformance] [Conformance]'
  description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
    contains an item for the memory limit. The container runtime MUST be able to access
    the memory limit from the specified path on the mounted volume.
  release: v1.9
  file: test/e2e/common/storage/downwardapi_volume.go
- testname: DownwardAPI volume, memory request
  codename: '[sig-storage] Downward API volume should provide container''s memory
    request [NodeConformance] [Conformance]'
  description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
    contains an item for the memory request. The container runtime MUST be able to
    access the memory request from the specified path on the mounted volume.
  release: v1.9
  file: test/e2e/common/storage/downwardapi_volume.go
- testname: DownwardAPI volume, CPU limit, default node allocatable
  codename: '[sig-storage] Downward API volume should provide node allocatable (cpu)
    as default cpu limit if the limit is not set [NodeConformance] [Conformance]'
  description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
    contains an item for the CPU limit. A CPU limit is not specified for the container.
    The container runtime MUST be able to access the CPU limit from the specified
    path on the mounted volume and the value MUST be the default node allocatable.
  release: v1.9
  file: test/e2e/common/storage/downwardapi_volume.go
- testname: DownwardAPI volume, memory limit, default node allocatable
  codename: '[sig-storage] Downward API volume should provide node allocatable (memory)
    as default memory limit if the limit is not set [NodeConformance] [Conformance]'
  description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
    contains an item for the memory limit. A memory limit is not specified for the
    container. The container runtime MUST be able to access the memory limit from
    the specified path on the mounted volume and the value MUST be the default node
    allocatable.
  release: v1.9
  file: test/e2e/common/storage/downwardapi_volume.go
- testname: DownwardAPI volume, pod name
  codename: '[sig-storage] Downward API volume should provide podname only [NodeConformance]
    [Conformance]'
  description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
    contains an item for the Pod name. The container runtime MUST be able to access
    the Pod name from the specified path on the mounted volume.
  release: v1.9
  file: test/e2e/common/storage/downwardapi_volume.go
- testname: DownwardAPI volume, volume mode 0400
  codename: '[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly]
    [NodeConformance] [Conformance]'
  description: A Pod is configured with DownwardAPIVolumeSource with the volume source
    mode set to -r-------- and DownwardAPIVolumeFiles contains an item for the Pod
    name. The container runtime MUST be able to access the Pod name from the specified
    path on the mounted volume. This test is marked LinuxOnly since Windows does not
    support setting specific file permissions.
  release: v1.9
  file: test/e2e/common/storage/downwardapi_volume.go
- testname: DownwardAPI volume, file mode 0400
  codename: '[sig-storage] Downward API volume should set mode on item file [LinuxOnly]
    [NodeConformance] [Conformance]'
  description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
    contains an item for the Pod name with the file mode set to -r--------. The container
    runtime MUST be able to access the Pod name from the specified path on the mounted
    volume. This test is marked LinuxOnly since Windows does not support setting specific
    file permissions.
  release: v1.9
  file: test/e2e/common/storage/downwardapi_volume.go
- testname: DownwardAPI volume, update annotations
  codename: '[sig-storage] Downward API volume should update annotations on modification
    [NodeConformance] [Conformance]'
  description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
    contains a list of items for each of the Pod annotations. The container runtime
    MUST be able to access the Pod annotations from the specified path on the mounted
    volume. Update the annotations by adding a new annotation to the running Pod.
    The new annotation MUST be available from the mounted volume.
  release: v1.9
  file: test/e2e/common/storage/downwardapi_volume.go
- testname: DownwardAPI volume, update label
  codename: '[sig-storage] Downward API volume should update labels on modification
    [NodeConformance] [Conformance]'
  description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
    contains a list of items for each of the Pod labels. The container runtime MUST
    be able to access the Pod labels from the specified path on the mounted volume.
    Update the labels by adding a new label to the running Pod. The new label MUST
    be available from the mounted volume.
  release: v1.9
  file: test/e2e/common/storage/downwardapi_volume.go
- testname: EmptyDir, Shared volumes between containers
  codename: '[sig-storage] EmptyDir volumes pod should support shared volumes between
    containers [Conformance]'
  description: A Pod created with an 'emptyDir' Volume should share the volume between
    the containers in the Pod. The two busybox image containers should share the volume
    mounted into the Pod. The main container should wait until the sub container drops
    a file, and then the main container accesses the shared data.
  release: v1.15
  file: test/e2e/common/storage/empty_dir.go
- testname: EmptyDir, medium default, volume mode 0644, non-root user
  codename: '[sig-storage] EmptyDir volumes should support (non-root,0644,default)
    [LinuxOnly] [NodeConformance] [Conformance]'
  description: A Pod created with an 'emptyDir' Volume, the volume mode set to 0644.
    Volume is mounted into the container where container is run as a non-root user.
    The volume MUST have mode -rw-r--r-- and mount type set to tmpfs and the contents
    MUST be readable. This test is marked LinuxOnly since Windows does not support
    setting specific file permissions, or running as UID / GID.
  release: v1.9
  file: test/e2e/common/storage/empty_dir.go
- testname: EmptyDir, medium memory, volume mode 0644, non-root user
  codename: '[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly]
    [NodeConformance] [Conformance]'
  description: A Pod created with an 'emptyDir' Volume and 'medium' as 'Memory', the
    volume mode set to 0644. Volume is mounted into the container where container
    is run as a non-root user. The volume MUST have mode -rw-r--r-- and mount type
    set to tmpfs and the contents MUST be readable. This test is marked LinuxOnly
    since Windows does not support setting specific file permissions, or running as
    UID / GID, or the medium = 'Memory'.
  release: v1.9
  file: test/e2e/common/storage/empty_dir.go
- testname: EmptyDir, medium default, volume mode 0666, non-root user
  codename: '[sig-storage] EmptyDir volumes should support (non-root,0666,default)
    [LinuxOnly] [NodeConformance] [Conformance]'
  description: A Pod created with an 'emptyDir' Volume, the volume mode set to 0666.
    Volume is mounted into the container where container is run as a non-root user.
    The volume MUST have mode -rw-rw-rw- and mount type set to tmpfs and the contents
    MUST be readable. This test is marked LinuxOnly since Windows does not support
    setting specific file permissions, or running as UID / GID.
  release: v1.9
  file: test/e2e/common/storage/empty_dir.go
- testname: EmptyDir, medium memory, volume mode 0666, non-root user
  codename: '[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly]
    [NodeConformance] [Conformance]'
  description: A Pod created with an 'emptyDir' Volume and 'medium' as 'Memory', the
    volume mode set to 0666. Volume is mounted into the container where container
    is run as a non-root user. The volume MUST have mode -rw-rw-rw- and mount type
    set to tmpfs and the contents MUST be readable. This test is marked LinuxOnly
    since Windows does not support setting specific file permissions, or running as
    UID / GID, or the medium = 'Memory'.
  release: v1.9
  file: test/e2e/common/storage/empty_dir.go
- testname: EmptyDir, medium default, volume mode 0777, non-root user
  codename: '[sig-storage] EmptyDir volumes should support (non-root,0777,default)
    [LinuxOnly] [NodeConformance] [Conformance]'
  description: A Pod created with an 'emptyDir' Volume, the volume mode set to 0777.
    Volume is mounted into the container where container is run as a non-root user.
    The volume MUST have mode -rwxrwxrwx and mount type set to tmpfs and the contents
    MUST be readable. This test is marked LinuxOnly since Windows does not support
    setting specific file permissions, or running as UID / GID.
  release: v1.9
  file: test/e2e/common/storage/empty_dir.go
3048- testname: EmptyDir, medium memory, volume mode 0777, non-root user
3049 codename: '[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly]
3050 [NodeConformance] [Conformance]'
3051 description: A Pod created with an 'emptyDir' Volume and 'medium' as 'Memory', the
3052 volume mode set to 0777. Volume is mounted into the container where container
3053 is run as a non-root user. The volume MUST have mode -rwxrwxrwx and mount type
3054 set to tmpfs and the contents MUST be readable. This test is marked LinuxOnly
3055 since Windows does not support setting specific file permissions, or running as
3056 UID / GID, or the medium = 'Memory'.
3057 release: v1.9
3058 file: test/e2e/common/storage/empty_dir.go
3059- testname: EmptyDir, medium default, volume mode 0644
3060 codename: '[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly]
3061 [NodeConformance] [Conformance]'
3062 description: A Pod created with an 'emptyDir' Volume, the volume mode set to 0644.
3063 The volume MUST have mode -rw-r--r-- and mount type set to tmpfs and the contents
3064 MUST be readable. This test is marked LinuxOnly since Windows does not support
3065 setting specific file permissions, or running as UID / GID.
3066 release: v1.9
3067 file: test/e2e/common/storage/empty_dir.go
3068- testname: EmptyDir, medium memory, volume mode 0644
3069 codename: '[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly]
3070 [NodeConformance] [Conformance]'
3071 description: A Pod created with an 'emptyDir' Volume and 'medium' as 'Memory', the
3072 volume mode set to 0644. The volume MUST have mode -rw-r--r-- and mount type set
3073 to tmpfs and the contents MUST be readable. This test is marked LinuxOnly since
3074 Windows does not support setting specific file permissions, or running as UID
3075 / GID, or the medium = 'Memory'.
3076 release: v1.9
3077 file: test/e2e/common/storage/empty_dir.go
3078- testname: EmptyDir, medium default, volume mode 0666
3079 codename: '[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly]
3080 [NodeConformance] [Conformance]'
3081 description: A Pod created with an 'emptyDir' Volume, the volume mode set to 0666.
3082 The volume MUST have mode -rw-rw-rw- and mount type set to tmpfs and the contents
3083 MUST be readable. This test is marked LinuxOnly since Windows does not support
3084 setting specific file permissions, or running as UID / GID.
3085 release: v1.9
3086 file: test/e2e/common/storage/empty_dir.go
3087- testname: EmptyDir, medium memory, volume mode 0666
3088 codename: '[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly]
3089 [NodeConformance] [Conformance]'
3090 description: A Pod created with an 'emptyDir' Volume and 'medium' as 'Memory', the
3091 volume mode set to 0666. The volume MUST have mode -rw-rw-rw- and mount type set
3092 to tmpfs and the contents MUST be readable. This test is marked LinuxOnly since
3093 Windows does not support setting specific file permissions, or running as UID
3094 / GID, or the medium = 'Memory'.
3095 release: v1.9
3096 file: test/e2e/common/storage/empty_dir.go
3097- testname: EmptyDir, medium default, volume mode 0777
3098 codename: '[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly]
3099 [NodeConformance] [Conformance]'
3100 description: A Pod created with an 'emptyDir' Volume, the volume mode set to 0777. The
3101 volume MUST have mode set as -rwxrwxrwx and mount type set to tmpfs and the contents
3102 MUST be readable. This test is marked LinuxOnly since Windows does not support
3103 setting specific file permissions, or running as UID / GID.
3104 release: v1.9
3105 file: test/e2e/common/storage/empty_dir.go
3106- testname: EmptyDir, medium memory, volume mode 0777
3107 codename: '[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly]
3108 [NodeConformance] [Conformance]'
3109 description: A Pod created with an 'emptyDir' Volume and 'medium' as 'Memory', the
3110 volume mode set to 0777. The volume MUST have mode set as -rwxrwxrwx and mount
3111 type set to tmpfs and the contents MUST be readable. This test is marked LinuxOnly
3112 since Windows does not support setting specific file permissions, or running as
3113 UID / GID, or the medium = 'Memory'.
3114 release: v1.9
3115 file: test/e2e/common/storage/empty_dir.go
3116- testname: EmptyDir, medium default, volume mode default
3117 codename: '[sig-storage] EmptyDir volumes volume on default medium should have the
3118 correct mode [LinuxOnly] [NodeConformance] [Conformance]'
3119 description: A Pod created with an 'emptyDir' Volume, the volume MUST have mode
3120 set as -rwxrwxrwx and mount type set to tmpfs. This test is marked LinuxOnly since
3121 Windows does not support setting specific file permissions.
3122 release: v1.9
3123 file: test/e2e/common/storage/empty_dir.go
3124- testname: EmptyDir, medium memory, volume mode default
3125 codename: '[sig-storage] EmptyDir volumes volume on tmpfs should have the correct
3126 mode [LinuxOnly] [NodeConformance] [Conformance]'
3127 description: A Pod created with an 'emptyDir' Volume and 'medium' as 'Memory', the
3128 volume MUST have mode set as -rwxrwxrwx and mount type set to tmpfs. This test
3129 is marked LinuxOnly since Windows does not support setting specific file permissions,
3130 or the medium = 'Memory'.
3131 release: v1.9
3132 file: test/e2e/common/storage/empty_dir.go
3133- testname: EmptyDir Wrapper Volume, ConfigMap volumes, no race
3134 codename: '[sig-storage] EmptyDir wrapper volumes should not cause race condition
3135 when used for configmaps [Serial] [Conformance]'
3136 description: Create 50 ConfigMap volumes and 5 replicas of a pod with these ConfigMap
3137 volumes mounted. Pod MUST NOT fail waiting for Volumes.
3138 release: v1.13
3139 file: test/e2e/storage/empty_dir_wrapper.go
3140- testname: EmptyDir Wrapper Volume, Secret and ConfigMap volumes, no conflict
3141 codename: '[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]'
3142 description: Secret volume and ConfigMap volume are created with data. Pod MUST be
3143 able to start with Secret and ConfigMap volumes mounted into the container.
3144 release: v1.13
3145 file: test/e2e/storage/empty_dir_wrapper.go
3146- testname: PersistentVolumes(Claims), apply changes to a pv/pvc status
3147 codename: '[sig-storage] PersistentVolumes CSI Conformance should apply changes
3148 to a pv/pvc status [Conformance]'
3149 description: Creating PV and PVC MUST succeed. Listing PVs with a labelSelector
3150 MUST succeed. Listing PVCs in a namespace MUST succeed. Reading PVC status MUST
3151 succeed with a valid phase found. Reading PV status MUST succeed with a valid
3152 phase found. Patching the PVC status MUST succeed with its new condition found.
3153 Patching the PV status MUST succeed with the new reason/message found. Updating
3154 the PVC status MUST succeed with its new condition found. Updating the PV status
3155 MUST succeed with the new reason/message found.
3156 release: v1.29
3157 file: test/e2e/storage/persistent_volumes.go
3158- testname: PersistentVolumes(Claims), lifecycle
3159 codename: '[sig-storage] PersistentVolumes CSI Conformance should run through the
3160 lifecycle of a PV and a PVC [Conformance]'
3161 description: Creating PV and PVC MUST succeed. Listing PVs with a labelSelector
3162 MUST succeed. Listing PVCs in a namespace MUST succeed. Patching a PV MUST succeed
3163 with its new label found. Patching a PVC MUST succeed with its new label found.
3164 Reading a PV and PVC MUST succeed with required UID retrieved. Deleting a PVC
3165 and PV MUST succeed and it MUST be confirmed. Replacement PV and PVC MUST be created.
3166 Updating a PV MUST succeed with its new label found. Updating a PVC MUST succeed
3167 with its new label found. Deleting the PVC and PV via deleteCollection MUST succeed
3168 and it MUST be confirmed.
3169 release: v1.29
3170 file: test/e2e/storage/persistent_volumes.go
3171- testname: Projected Volume, multiple projections
3172 codename: '[sig-storage] Projected combined should project all components that make
3173 up the projection API [Projection] [NodeConformance] [Conformance]'
3174 description: A Pod is created with a projected volume source for secrets, configMap
3175 and downwardAPI with pod name, cpu and memory limits and cpu and memory requests.
3176 Pod MUST be able to read the secrets, configMap values and the cpu and memory
3177 limits as well as cpu and memory requests from the mounted DownwardAPIVolumeFiles.
3178 release: v1.9
3179 file: test/e2e/common/storage/projected_combined.go
3180- testname: Projected Volume, ConfigMap, create, update and delete
3181 codename: '[sig-storage] Projected configMap optional updates should be reflected
3182 in volume [NodeConformance] [Conformance]'
3183 description: Create a Pod with three containers with ConfigMaps, namely a create,
3184 update and delete container. The create container when started MUST not have a configMap;
3185 the update and delete containers MUST be created with a ConfigMap value of 'value-1'.
3186 Create a configMap in the create container, the Pod MUST be able to read the configMap
3187 from the create container. Update the configMap in the update container, Pod MUST
3188 be able to read the updated configMap value. Delete the configMap in the delete
3189 container. Pod MUST fail to read the configMap from the delete container.
3190 release: v1.9
3191 file: test/e2e/common/storage/projected_configmap.go
3192- testname: Projected Volume, ConfigMap, volume mode default
3193 codename: '[sig-storage] Projected configMap should be consumable from pods in volume
3194 [NodeConformance] [Conformance]'
3195 description: A Pod is created with projected volume source 'ConfigMap' to store
3196 a configMap with default permission mode. Pod MUST be able to read the content
3197 of the ConfigMap successfully and the mode on the volume MUST be -rw-r--r--.
3198 release: v1.9
3199 file: test/e2e/common/storage/projected_configmap.go
3200- testname: Projected Volume, ConfigMap, non-root user
3201 codename: '[sig-storage] Projected configMap should be consumable from pods in volume
3202 as non-root [NodeConformance] [Conformance]'
3203 description: A Pod is created with projected volume source 'ConfigMap' to store
3204 a configMap as non-root user with uid 1000. Pod MUST be able to read the content
3205 of the ConfigMap successfully and the mode on the volume MUST be -rw-r--r--.
3206 release: v1.9
3207 file: test/e2e/common/storage/projected_configmap.go
3208- testname: Projected Volume, ConfigMap, volume mode 0400
3209 codename: '[sig-storage] Projected configMap should be consumable from pods in volume
3210 with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]'
3211 description: A Pod is created with projected volume source 'ConfigMap' to store
3212 a configMap with permission mode set to 0400. Pod MUST be able to read the content
3213 of the ConfigMap successfully and the mode on the volume MUST be -r--------. This
3214 test is marked LinuxOnly since Windows does not support setting specific file
3215 permissions.
3216 release: v1.9
3217 file: test/e2e/common/storage/projected_configmap.go
3218- testname: Projected Volume, ConfigMap, mapped
3219 codename: '[sig-storage] Projected configMap should be consumable from pods in volume
3220 with mappings [NodeConformance] [Conformance]'
3221 description: A Pod is created with projected volume source 'ConfigMap' to store
3222 a configMap with default permission mode. The ConfigMap is also mapped to a custom
3223 path. Pod MUST be able to read the content of the ConfigMap from the custom location
3224 successfully and the mode on the volume MUST be -rw-r--r--.
3225 release: v1.9
3226 file: test/e2e/common/storage/projected_configmap.go
3227- testname: Projected Volume, ConfigMap, mapped, volume mode 0400
3228 codename: '[sig-storage] Projected configMap should be consumable from pods in volume
3229 with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]'
3230 description: A Pod is created with projected volume source 'ConfigMap' to store
3231 a configMap with permission mode set to 0400. The ConfigMap is also mapped to
3232 a custom path. Pod MUST be able to read the content of the ConfigMap from the
3233 custom location successfully and the mode on the volume MUST be -r--r--r--. This
3234 test is marked LinuxOnly since Windows does not support setting specific file
3235 permissions.
3236 release: v1.9
3237 file: test/e2e/common/storage/projected_configmap.go
3238- testname: Projected Volume, ConfigMap, mapped, non-root user
3239 codename: '[sig-storage] Projected configMap should be consumable from pods in volume
3240 with mappings as non-root [NodeConformance] [Conformance]'
3241 description: A Pod is created with projected volume source 'ConfigMap' to store
3242 a configMap as non-root user with uid 1000. The ConfigMap is also mapped to a
3243 custom path. Pod MUST be able to read the content of the ConfigMap from the custom
3244 location successfully and the mode on the volume MUST be -r--r--r--.
3245 release: v1.9
3246 file: test/e2e/common/storage/projected_configmap.go
3247- testname: Projected Volume, ConfigMap, multiple volume paths
3248 codename: '[sig-storage] Projected configMap should be consumable in multiple volumes
3249 in the same pod [NodeConformance] [Conformance]'
3250 description: A Pod is created with a projected volume source 'ConfigMap' to store
3251 a configMap. The configMap is mapped to two different volume mounts. Pod MUST
3252 be able to read the content of the configMap successfully from the two volume
3253 mounts.
3254 release: v1.9
3255 file: test/e2e/common/storage/projected_configmap.go
3256- testname: Projected Volume, ConfigMap, update
3257 codename: '[sig-storage] Projected configMap updates should be reflected in volume
3258 [NodeConformance] [Conformance]'
3259 description: A Pod is created with projected volume source 'ConfigMap' to store
3260 a configMap and performs a create and update to new value. Pod MUST be able to
3261 create the configMap with value-1. Pod MUST be able to update the value in the
3262 configMap to value-2.
3263 release: v1.9
3264 file: test/e2e/common/storage/projected_configmap.go
3265- testname: Projected Volume, DownwardAPI, CPU limits
3266 codename: '[sig-storage] Projected downwardAPI should provide container''s cpu limit
3267 [NodeConformance] [Conformance]'
3268 description: A Pod is created with a projected volume source for downwardAPI with
3269 pod name, cpu and memory limits and cpu and memory requests. Pod MUST be able
3270 to read the cpu limits from the mounted DownwardAPIVolumeFiles.
3271 release: v1.9
3272 file: test/e2e/common/storage/projected_downwardapi.go
3273- testname: Projected Volume, DownwardAPI, CPU request
3274 codename: '[sig-storage] Projected downwardAPI should provide container''s cpu request
3275 [NodeConformance] [Conformance]'
3276 description: A Pod is created with a projected volume source for downwardAPI with
3277 pod name, cpu and memory limits and cpu and memory requests. Pod MUST be able
3278 to read the cpu request from the mounted DownwardAPIVolumeFiles.
3279 release: v1.9
3280 file: test/e2e/common/storage/projected_downwardapi.go
3281- testname: Projected Volume, DownwardAPI, memory limits
3282 codename: '[sig-storage] Projected downwardAPI should provide container''s memory
3283 limit [NodeConformance] [Conformance]'
3284 description: A Pod is created with a projected volume source for downwardAPI with
3285 pod name, cpu and memory limits and cpu and memory requests. Pod MUST be able
3286 to read the memory limits from the mounted DownwardAPIVolumeFiles.
3287 release: v1.9
3288 file: test/e2e/common/storage/projected_downwardapi.go
3289- testname: Projected Volume, DownwardAPI, memory request
3290 codename: '[sig-storage] Projected downwardAPI should provide container''s memory
3291 request [NodeConformance] [Conformance]'
3292 description: A Pod is created with a projected volume source for downwardAPI with
3293 pod name, cpu and memory limits and cpu and memory requests. Pod MUST be able
3294 to read the memory request from the mounted DownwardAPIVolumeFiles.
3295 release: v1.9
3296 file: test/e2e/common/storage/projected_downwardapi.go
3297- testname: Projected Volume, DownwardAPI, CPU limit, node allocatable
3298 codename: '[sig-storage] Projected downwardAPI should provide node allocatable (cpu)
3299 as default cpu limit if the limit is not set [NodeConformance] [Conformance]'
3300 description: A Pod is created with a projected volume source for downwardAPI with
3301 pod name, cpu and memory limits and cpu and memory requests. The CPU and memory
3302 resources for requests and limits are NOT specified for the container. Pod MUST
3303 be able to read the default cpu limits from the mounted DownwardAPIVolumeFiles.
3304 release: v1.9
3305 file: test/e2e/common/storage/projected_downwardapi.go
3306- testname: Projected Volume, DownwardAPI, memory limit, node allocatable
3307 codename: '[sig-storage] Projected downwardAPI should provide node allocatable (memory)
3308 as default memory limit if the limit is not set [NodeConformance] [Conformance]'
3309 description: A Pod is created with a projected volume source for downwardAPI with
3310 pod name, cpu and memory limits and cpu and memory requests. The CPU and memory
3311 resources for requests and limits are NOT specified for the container. Pod MUST
3312 be able to read the default memory limits from the mounted DownwardAPIVolumeFiles.
3313 release: v1.9
3314 file: test/e2e/common/storage/projected_downwardapi.go
3315- testname: Projected Volume, DownwardAPI, pod name
3316 codename: '[sig-storage] Projected downwardAPI should provide podname only [NodeConformance]
3317 [Conformance]'
3318 description: A Pod is created with a projected volume source for downwardAPI with
3319 pod name, cpu and memory limits and cpu and memory requests. Pod MUST be able
3320 to read the pod name from the mounted DownwardAPIVolumeFiles.
3321 release: v1.9
3322 file: test/e2e/common/storage/projected_downwardapi.go
3323- testname: Projected Volume, DownwardAPI, volume mode 0400
3324 codename: '[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly]
3325 [NodeConformance] [Conformance]'
3326 description: A Pod is created with a projected volume source for downwardAPI with
3327 pod name, cpu and memory limits and cpu and memory requests. The default mode
3328 for the volume mount is set to 0400. Pod MUST be able to read the pod name from
3329 the mounted DownwardAPIVolumeFiles and the volume mode MUST be -r--------. This
3330 test is marked LinuxOnly since Windows does not support setting specific file
3331 permissions.
3332 release: v1.9
3333 file: test/e2e/common/storage/projected_downwardapi.go
3334- testname: Projected Volume, DownwardAPI, item mode 0400
3335 codename: '[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly]
3336 [NodeConformance] [Conformance]'
3337 description: A Pod is created with a projected volume source for downwardAPI with
3338 pod name, cpu and memory limits and cpu and memory requests. The mode for the
3339 DownwardAPIVolumeFile item is set to 0400. Pod MUST be able to read the pod name
3340 from the mounted DownwardAPIVolumeFiles and the file mode MUST be -r--------. This
3341 test is marked LinuxOnly since Windows does not support setting specific file
3342 permissions.
3343 release: v1.9
3344 file: test/e2e/common/storage/projected_downwardapi.go
3345- testname: Projected Volume, DownwardAPI, update annotation
3346 codename: '[sig-storage] Projected downwardAPI should update annotations on modification
3347 [NodeConformance] [Conformance]'
3348 description: A Pod is created with a projected volume source for downwardAPI with
3349 pod name, cpu and memory limits and cpu and memory requests and annotation items.
3350 Pod MUST be able to read the annotations from the mounted DownwardAPIVolumeFiles.
3351 Annotations are then updated. Pod MUST be able to read the updated values for
3352 the Annotations.
3353 release: v1.9
3354 file: test/e2e/common/storage/projected_downwardapi.go
3355- testname: Projected Volume, DownwardAPI, update labels
3356 codename: '[sig-storage] Projected downwardAPI should update labels on modification
3357 [NodeConformance] [Conformance]'
3358 description: A Pod is created with a projected volume source for downwardAPI with
3359 pod name, cpu and memory limits and cpu and memory requests and label items. Pod
3360 MUST be able to read the labels from the mounted DownwardAPIVolumeFiles. Labels
3361 are then updated. Pod MUST be able to read the updated values for the Labels.
3362 release: v1.9
3363 file: test/e2e/common/storage/projected_downwardapi.go
3364- testname: Projected Volume, Secrets, create, update and delete
3365 codename: '[sig-storage] Projected secret optional updates should be reflected in
3366 volume [NodeConformance] [Conformance]'
3367 description: Create a Pod with three containers with secrets namely a create, update
3368 and delete container. The create container when started MUST not have a secret; the update
3369 and delete containers MUST be created with a secret value. Create a secret in
3370 the create container, the Pod MUST be able to read the secret from the create
3371 container. Update the secret in the update container, Pod MUST be able to read
3372 the updated secret value. Delete the secret in the delete container. Pod MUST
3373 fail to read the secret from the delete container.
3374 release: v1.9
3375 file: test/e2e/common/storage/projected_secret.go
3376- testname: Projected Volume, Secrets, volume mode default
3377 codename: '[sig-storage] Projected secret should be consumable from pods in volume
3378 [NodeConformance] [Conformance]'
3379 description: A Pod is created with a projected volume source 'secret' to store a
3380 secret with a specified key with default permission mode. Pod MUST be able to
3381 read the content of the key successfully and the mode MUST be -rw-r--r-- by default.
3382 release: v1.9
3383 file: test/e2e/common/storage/projected_secret.go
3384- testname: Projected Volume, Secrets, non-root, custom fsGroup
3385 codename: '[sig-storage] Projected secret should be consumable from pods in volume
3386 as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]'
3387 description: A Pod is created with a projected volume source 'secret' to store a
3388 secret with a specified key. The volume has permission mode set to 0440, fsgroup
3389 set to 1001 and user set to non-root uid of 1000. Pod MUST be able to read the
3390 content of the key successfully and the mode MUST be -r--r-----. This test is
3391 marked LinuxOnly since Windows does not support setting specific file permissions,
3392 or running as UID / GID.
3393 release: v1.9
3394 file: test/e2e/common/storage/projected_secret.go
3395- testname: Projected Volume, Secrets, volume mode 0400
3396 codename: '[sig-storage] Projected secret should be consumable from pods in volume
3397 with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]'
3398 description: A Pod is created with a projected volume source 'secret' to store a
3399 secret with a specified key with permission mode set to 0400 on the Pod. Pod
3400 MUST be able to read the content of the key successfully and the mode MUST be
3401 -r--------. This test is marked LinuxOnly since Windows does not support setting
3402 specific file permissions.
3403 release: v1.9
3404 file: test/e2e/common/storage/projected_secret.go
3405- testname: Projected Volume, Secrets, mapped
3406 codename: '[sig-storage] Projected secret should be consumable from pods in volume
3407 with mappings [NodeConformance] [Conformance]'
3408 description: A Pod is created with a projected volume source 'secret' to store a
3409 secret with a specified key with default permission mode. The secret is also mapped
3410 to a custom path. Pod MUST be able to read the content of the key successfully
3411 and the mode MUST be -r-------- on the mapped volume.
3412 release: v1.9
3413 file: test/e2e/common/storage/projected_secret.go
3414- testname: Projected Volume, Secrets, mapped, volume mode 0400
3415 codename: '[sig-storage] Projected secret should be consumable from pods in volume
3416 with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]'
3417 description: A Pod is created with a projected volume source 'secret' to store a
3418 secret with a specified key with permission mode set to 0400. The secret is also
3419 mapped to a specific name. Pod MUST be able to read the content of the key successfully
3420 and the mode MUST be -r-------- on the mapped volume. This test is marked LinuxOnly
3421 since Windows does not support setting specific file permissions.
3422 release: v1.9
3423 file: test/e2e/common/storage/projected_secret.go
3424- testname: Projected Volume, Secrets, mapped, multiple paths
3425 codename: '[sig-storage] Projected secret should be consumable in multiple volumes
3426 in a pod [NodeConformance] [Conformance]'
3427 description: A Pod is created with a projected volume source 'secret' to store a
3428 secret with a specified key. The secret is mapped to two different volume mounts.
3429 Pod MUST be able to read the content of the key successfully from the two volume
3430 mounts and the mode MUST be -r-------- on the mapped volumes.
3431 release: v1.9
3432 file: test/e2e/common/storage/projected_secret.go
3433- testname: Secrets Volume, create, update and delete
3434 codename: '[sig-storage] Secrets optional updates should be reflected in volume
3435 [NodeConformance] [Conformance]'
3436 description: Create a Pod with three containers with secrets volume sources namely
3437 a create, update and delete container. The create container when started MUST not
3438 have a secret; the update and delete containers MUST be created with a secret value.
3439 Create a secret in the create container, the Pod MUST be able to read the secret
3440 from the create container. Update the secret in the update container, Pod MUST
3441 be able to read the updated secret value. Delete the secret in the delete container.
3442 Pod MUST fail to read the secret from the delete container.
3443 release: v1.9
3444 file: test/e2e/common/storage/secrets_volume.go
3445- testname: Secrets Volume, volume mode default, secret with same name in different
3446 namespace
3447 codename: '[sig-storage] Secrets should be able to mount in a volume regardless
3448 of a different secret existing with same name in different namespace [NodeConformance]
3449 [Conformance]'
3450 description: Create a secret with the same name in two namespaces. Create a Pod with
3451 secret volume source configured into the container. Pod MUST be able to read the
3452 secrets from the mounted volume from the container runtime, and only the secrets
3453 associated with the namespace where the pod is created. The file mode of the secret
3454 MUST be -rw-r--r-- by default.
3455 release: v1.12
3456 file: test/e2e/common/storage/secrets_volume.go
3457- testname: Secrets Volume, default
3458 codename: '[sig-storage] Secrets should be consumable from pods in volume [NodeConformance]
3459 [Conformance]'
3460 description: Create a secret. Create a Pod with secret volume source configured
3461 into the container. Pod MUST be able to read the secret from the mounted volume
3462 from the container runtime and the file mode of the secret MUST be -rw-r--r--
3463 by default.
3464 release: v1.9
3465 file: test/e2e/common/storage/secrets_volume.go
3466- testname: Secrets Volume, volume mode 0440, fsGroup 1001 and uid 1000
3467 codename: '[sig-storage] Secrets should be consumable from pods in volume as non-root
3468 with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]'
3469 description: Create a secret. Create a Pod with secret volume source configured
3470 into the container with file mode set to 0440 as a non-root user with uid 1000
3471 and fsGroup id 1001. Pod MUST be able to read the secret from the mounted volume
3472 from the container runtime and the file mode of the secret MUST be -r--r----- by
3473 default. This test is marked LinuxOnly since Windows does not support setting
3474 specific file permissions, or running as UID / GID.
3475 release: v1.9
3476 file: test/e2e/common/storage/secrets_volume.go
3477- testname: Secrets Volume, volume mode 0400
3478 codename: '[sig-storage] Secrets should be consumable from pods in volume with defaultMode
3479 set [LinuxOnly] [NodeConformance] [Conformance]'
3480 description: Create a secret. Create a Pod with secret volume source configured
3481 into the container with file mode set to 0400. Pod MUST be able to read the secret
3482 from the mounted volume from the container runtime and the file mode of the secret
3483 MUST be -r-------- by default. This test is marked LinuxOnly since Windows does
3484 not support setting specific file permissions.
3485 release: v1.9
3486 file: test/e2e/common/storage/secrets_volume.go
3487- testname: Secrets Volume, mapping
3488 codename: '[sig-storage] Secrets should be consumable from pods in volume with mappings
3489 [NodeConformance] [Conformance]'
3490 description: Create a secret. Create a Pod with secret volume source configured
3491 into the container with a custom path. Pod MUST be able to read the secret from
3492 the mounted volume from the specified custom path. The file mode of the secret
3493 MUST be -rw-r--r-- by default.
3494 release: v1.9
3495 file: test/e2e/common/storage/secrets_volume.go
3496- testname: Secrets Volume, mapping, volume mode 0400
3497 codename: '[sig-storage] Secrets should be consumable from pods in volume with mappings
3498 and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]'
3499 description: Create a secret. Create a Pod with secret volume source configured
3500 into the container with a custom path and file mode set to 0400. Pod MUST be
3501 able to read the secret from the mounted volume from the specified custom path.
3502 The file mode of the secret MUST be -r--r--r--. This test is marked LinuxOnly
3503 since Windows does not support setting specific file permissions.
3504 release: v1.9
3505 file: test/e2e/common/storage/secrets_volume.go
3506- testname: Secrets Volume, mapping multiple volume paths
3507 codename: '[sig-storage] Secrets should be consumable in multiple volumes in a pod
3508 [NodeConformance] [Conformance]'
3509 description: Create a secret. Create a Pod with two secret volume sources configured
3510 into the container into two different custom paths. Pod MUST be able to read
3511 the secret from both of the mounted volumes at the two specified custom paths.
3512 release: v1.9
3513 file: test/e2e/common/storage/secrets_volume.go
3514- testname: Secrets Volume, immutability
3515 codename: '[sig-storage] Secrets should be immutable if `immutable` field is set
3516 [Conformance]'
3517 description: Create a secret. Update its data field; the update MUST succeed. Mark
3518 the secret as immutable; the update MUST succeed. Try to update its data; the
3519 update MUST fail. Try to mark the secret back as not immutable; the update MUST
3520 fail. Try to update the secret's metadata (labels); the update MUST succeed. Try
3521 to delete the secret; the deletion MUST succeed.
3522 release: v1.21
3523 file: test/e2e/common/storage/secrets_volume.go
3524- testname: StorageClass, lifecycle
3525 codename: '[sig-storage] StorageClasses CSI Conformance should run through the lifecycle
3526 of a StorageClass [Conformance]'
3527 description: Creating a StorageClass MUST succeed. Reading the StorageClass MUST
3528 succeed. Patching the StorageClass MUST succeed with its new label found. Deleting
3529 the StorageClass MUST succeed and it MUST be confirmed. Replacement StorageClass
3530 MUST be created. Updating the StorageClass MUST succeed with its new label found.
3531 Deleting the StorageClass via deleteCollection MUST succeed and it MUST be confirmed.
3532 release: v1.29
3533 file: test/e2e/storage/storageclass.go
3534- testname: 'SubPath: Reading content from a configmap volume.'
3535 codename: '[sig-storage] Subpath Atomic writer volumes should support subpaths with
3536 configmap pod [Conformance]'
3537 description: Containers in a pod can read content from a configmap mounted volume
3538 which was configured with a subpath.
3539 release: v1.12
3540 file: test/e2e/storage/subpath.go
3541- testname: 'SubPath: Reading content from a configmap volume.'
3542 codename: '[sig-storage] Subpath Atomic writer volumes should support subpaths with
3543 configmap pod with mountPath of existing file [Conformance]'
3544 description: Containers in a pod can read content from a configmap mounted volume
3545 which was configured with a subpath and also using a mountpath that is a specific
3546 file.
3547 release: v1.12
3548 file: test/e2e/storage/subpath.go
3549- testname: 'SubPath: Reading content from a downwardAPI volume.'
3550 codename: '[sig-storage] Subpath Atomic writer volumes should support subpaths with
3551 downward pod [Conformance]'
3552 description: Containers in a pod can read content from a downwardAPI mounted volume
3553 which was configured with a subpath.
3554 release: v1.12
3555 file: test/e2e/storage/subpath.go
3556- testname: 'SubPath: Reading content from a projected volume.'
3557 codename: '[sig-storage] Subpath Atomic writer volumes should support subpaths with
3558 projected pod [Conformance]'
3559 description: Containers in a pod can read content from a projected mounted volume
3560 which was configured with a subpath.
3561 release: v1.12
3562 file: test/e2e/storage/subpath.go
3563- testname: 'SubPath: Reading content from a secret volume.'
3564 codename: '[sig-storage] Subpath Atomic writer volumes should support subpaths with
3565 secret pod [Conformance]'
3566 description: Containers in a pod can read content from a secret mounted volume which
3567 was configured with a subpath.
3568 release: v1.12
3569 file: test/e2e/storage/subpath.go
3570- testname: VolumeAttachment, lifecycle
3571 codename: '[sig-storage] VolumeAttachment Conformance should run through the lifecycle
3572 of a VolumeAttachment [Conformance]'
3573 description: Creating an initial VolumeAttachment MUST succeed. Reading the VolumeAttachment
3574 MUST succeed with the required name retrieved. Patching a VolumeAttachment MUST
3575 succeed with its new label found. Listing VolumeAttachment with a labelSelector
3576 MUST succeed with a single item retrieved. Deleting a VolumeAttachment MUST succeed
3577 and it MUST be confirmed. Creating a second VolumeAttachment MUST succeed. Updating
3578 the second VolumeAttachment with a new label MUST succeed with its new label
3579 found. Creating a third VolumeAttachment MUST succeed. Updating the third VolumeAttachment
3580 with a new label MUST succeed with its new label found. Deleting both VolumeAttachments
3581 via deleteCollection MUST succeed and it MUST be confirmed.
3582 release: v1.30
3583 file: test/e2e/storage/volume_attachment.go
3584