# Changes

Please visit Linkerd's [Release page][gh-releases] for the latest release notes moving forward!

[gh-releases]: https://github.com/linkerd/linkerd2/releases

## edge-24.2.5

* Migrated edge release change notes to use GitHub's automated release notes feature.

## edge-24.2.4

* Updated the ExternalWorkload CRD to v1beta1, renaming the meshTls field to meshTLS ([#12098])
* Updated the proxy to address some logging and metrics inconsistencies ([#12099])

[#12098]: https://github.com/linkerd/linkerd2/pull/12098
[#12099]: https://github.com/linkerd/linkerd2/pull/12099

## edge-24.2.3

* Allowed the `MutatingWebhookConfig` timeout value to be configured ([#12028]) (thanks @mikebell90)
* Added a counter for items dropped from the destination controller workqueue ([#12079])
* Fixed a spurious `linkerd check` error when using container images with digests ([#12059])
* Fixed an issue where inbound policy could be incorrect after certain policy resources are deleted ([#12088])

[#12028]: https://github.com/linkerd/linkerd2/pull/12028
[#12079]: https://github.com/linkerd/linkerd2/pull/12079
[#12059]: https://github.com/linkerd/linkerd2/pull/12059
[#12088]: https://github.com/linkerd/linkerd2/pull/12088

## edge-24.2.2

This release addresses some issues in the destination service that could cause it to behave unexpectedly when processing updates.

* Fixed a race condition in the destination service that could cause panics under very specific conditions ([#12022]; fixes [#12010])
* Changed how updates to a `Server` selector are handled in the destination service. When a `Server` that marks a port as opaque no longer selects a resource, the resource's opaqueness will be reverted to default settings ([#12031]; fixes [#11995])
* Introduced Helm configuration values for liveness and readiness probe timeouts and delays ([#11458]; fixes [#11453]) (thanks @jan-kantert!)

[#12010]: https://github.com/linkerd/linkerd2/issues/12010
[#12022]: https://github.com/linkerd/linkerd2/pull/12022
[#11995]: https://github.com/linkerd/linkerd2/issues/11995
[#12031]: https://github.com/linkerd/linkerd2/pull/12031
[#11453]: https://github.com/linkerd/linkerd2/issues/11453
[#11458]: https://github.com/linkerd/linkerd2/pull/11458

## edge-24.2.1

This edge release contains performance and stability improvements to the Destination controller, and continues stabilizing support for ExternalWorkloads.

* Reduced the load on the Destination controller by only processing Server updates on workloads affected by the Server ([#12017])
* Changed how the Destination controller reacts to target clusters (in multicluster pod-to-pod mode) whose Server CRD is outdated: skip them and log an error instead of panicking ([#12008])
* Improved the leader election of the ExternalWorkloads Endpoints controller to avoid missing events ([#12021])
* Improved naming of EndpointSlices generated by ExternalWorkloads ([#12016])
* Restricted the number of IPs an ExternalWorkload can have ([#12026])

[#12017]: https://github.com/linkerd/linkerd2/pull/12017
[#12008]: https://github.com/linkerd/linkerd2/pull/12008
[#12021]: https://github.com/linkerd/linkerd2/pull/12021
[#12016]: https://github.com/linkerd/linkerd2/pull/12016
[#12026]: https://github.com/linkerd/linkerd2/pull/12026

## edge-24.1.3

This release continues support for ExternalWorkload resources throughout the control and data planes.

* Updated the proxy to use SPIRE to instrument identity outside of Kubernetes.
* Updated the Destination controller to return `INVALID_ARGUMENT` status codes properly when a `ServiceProfile` is requested for a service that does not exist. (#11980)
* An ExternalWorkload EndpointSlice controller has been added to the Destination controller.
* Added a `createNamespaceMetadataJob` Helm value to control whether the namespace-metadata job is run during install (#11782)

## edge-24.1.2

This edge release incrementally improves support for ExternalWorkload resources throughout the control plane.

## edge-24.1.1

This edge release introduces a number of different fixes and improvements. More notably, it introduces a new `cni-repair-controller` binary to the CNI plugin image. The controller will automatically restart pods that have not received their iptables configuration.

* Removed shortnames from Tap API resources to avoid colliding with existing Kubernetes resources ([#11816]; fixes [#11784])
* Introduced a new ExternalWorkload CRD to support the upcoming mesh expansion feature ([#11805])
* Changed `MeshTLSAuthentication` resource validation to allow SPIFFE URI identities ([#11882])
* Introduced a new `cni-repair-controller` to the `linkerd-cni` DaemonSet to automatically restart misconfigured pods that are missing iptables rules ([#11699]; fixes [#11073])
* Fixed a `"duplicate metrics"` warning in the multicluster service-mirror component ([#11875]; fixes [#11839])
* Added metric labels and weights to `linkerd diagnostics endpoints` json output ([#11889])
* Changed how `Server` updates are handled in the destination service. The change will ensure that during a cluster resync, consumers won't be overloaded by redundant updates ([#11907])
* Changed `linkerd install` error output to add a newline when a Kubernetes client cannot be successfully initialised ([#11917])

[#11816]: https://github.com/linkerd/linkerd2/pull/11816
[#11784]: https://github.com/linkerd/linkerd2/issues/11784
[#11805]: https://github.com/linkerd/linkerd2/pull/11805
[#11882]: https://github.com/linkerd/linkerd2/pull/11882
[#11699]: https://github.com/linkerd/linkerd2/pull/11699
[#11073]: https://github.com/linkerd/linkerd2/issues/11073
[#11875]: https://github.com/linkerd/linkerd2/pull/11875
[#11839]: https://github.com/linkerd/linkerd2/issues/11839
[#11889]: https://github.com/linkerd/linkerd2/pull/11889
[#11907]: https://github.com/linkerd/linkerd2/pull/11907
[#11917]: https://github.com/linkerd/linkerd2/pull/11917

## edge-23.12.4

This edge release includes fixes and improvements to the destination controller's endpoint resolution API.

* Fixed an issue in the control plane where discovery for pod IP addresses could hang indefinitely ([#11815])
* Updated the proxy to enforce time limits on control plane response streams so that proxies more naturally distribute load over control plane replicas ([#11837])
* Fixed the policy controller's service metadata responses so that proxy logs and metrics have informative values ([#11842])

[#11842]: https://github.com/linkerd/linkerd2/pull/11842
[#11837]: https://github.com/linkerd/linkerd2/pull/11837
[#11815]: https://github.com/linkerd/linkerd2/pull/11815

## edge-23.12.3

This edge release contains improvements to the logging and diagnostics of the destination controller.

* Added a control plane metric to count errors talking to the Kubernetes API ([#11774])
* Fixed an issue causing spurious destination controller error messages for profile lookups on unmeshed pods with a port in the default opaque list ([#11550])

[#11774]: https://github.com/linkerd/linkerd2/pull/11774
[#11550]: https://github.com/linkerd/linkerd2/pull/11550

## edge-23.12.2

This edge release includes a restructuring of the proxy's balancer along with accompanying new metrics. The new minimum supported Kubernetes version is 1.22.

* Restructured the proxy's balancer ([#11750]): balancer changes may now occur independently of request processing. Fail-fast circuit breaking is enforced on the balancer's queue so that requests can't get stuck in a queue indefinitely. This new balancer is instrumented with new metrics: request (in-queue) latency histograms, failfast states, discovery updates counts, and balancer endpoint pool sizes.
* Changed how the policy controller updates HTTPRoute status so that it doesn't affect statuses from other non-Linkerd controllers ([#11705]; fixes [#11659])

[#11750]: https://github.com/linkerd/linkerd2/pull/11750
[#11705]: https://github.com/linkerd/linkerd2/pull/11705
[#11659]: https://github.com/linkerd/linkerd2/pull/11659

## edge-23.12.1

This edge release introduces new configuration values in the identity controller for client-go's `QPS` and `Burst` settings. Default values for these settings have also been raised from `5` (QPS) and `10` (Burst) to `100` and `200` respectively.

* Added `namespaceSelector` fields for the tap-injector and jaeger-injector webhooks. The webhooks are now configured to skip `kube-system` by default ([#11649]; fixes [#11647]) (thanks @mikutas!)
* Added the ability to configure client-go's `QPS` and `Burst` settings in the identity controller ([#11644])
* Improved client-go logging visibility throughout the control plane's components ([#11632])
* Introduced `PodDisruptionBudgets` in the linkerd-viz Helm chart for tap and tap-injector ([#11628]; fixes [#11248]) (thanks @mcharriere!)

[#11649]: https://github.com/linkerd/linkerd2/pull/11649
[#11647]: https://github.com/linkerd/linkerd2/issues/11647
[#11644]: https://github.com/linkerd/linkerd2/pull/11644
[#11632]: https://github.com/linkerd/linkerd2/pull/11632
[#11628]: https://github.com/linkerd/linkerd2/pull/11628
[#11248]: https://github.com/linkerd/linkerd2/issues/11248

## edge-23.11.4

This edge release introduces support for native sidecar containers, which are entering beta support in Kubernetes 1.29. This improves the startup and shutdown ordering for the proxy relative to other containers, fixing the long-standing shutdown issue with injected `Job`s. Furthermore, traffic from other `initContainer`s can now be proxied by Linkerd. In addition, this edge release includes Helm chart improvements, and improvements to the multicluster extension.

* Added a new `config.alpha.linkerd.io/proxy-enable-native-sidecar` annotation and `Proxy.NativeSidecar` Helm option that causes the proxy container to run as an init-container (thanks @teejaded!)
  ([#11465]; fixes [#11461])
* Fixed broken affinity rules for the multicluster `service-mirror` when running in HA mode ([#11609]; fixes [#11603])
* Added a new check to `linkerd check` that ensures all extension namespaces are configured properly ([#11629]; fixes [#11509])
* Updated the Prometheus Docker image used by the `linkerd-viz` extension to v2.48.0, resolving a number of CVEs in older Prometheus versions ([#11633])
* Added `nodeAffinity` to `deployment` templates in the `linkerd-viz` and `linkerd-jaeger` Helm charts (thanks @naing2victor!) ([#11464]; fixes [#10680])

[#11465]: https://github.com/linkerd/linkerd2/pull/11465
[#11461]: https://github.com/linkerd/linkerd2/issues/11461
[#11609]: https://github.com/linkerd/linkerd2/pull/11609
[#11603]: https://github.com/linkerd/linkerd2/issues/11603
[#11629]: https://github.com/linkerd/linkerd2/pull/11629
[#11509]: https://github.com/linkerd/linkerd2/issues/11509
[#11633]: https://github.com/linkerd/linkerd2/pull/11633
[#11464]: https://github.com/linkerd/linkerd2/pull/11464
[#10680]: https://github.com/linkerd/linkerd2/issues/10680

## edge-23.11.3

This edge release fixes a bug where Linkerd could cause EOF errors during bursts of TCP connections.

* Fixed a bug where the `linkerd multicluster link` command's `--gateway-addresses` flag was not respected when a remote gateway exists ([#11564])
* proxy: Increased DEFAULT_OUTBOUND_TCP_QUEUE_CAPACITY to prevent EOF errors during bursts of TCP connections

[#11564]: https://github.com/linkerd/linkerd2/pull/11564

## edge-23.11.2

This edge release contains observability improvements and bug fixes to the Destination controller, and a refinement to the multicluster gateway resolution logic.

* Fixed an issue where the Destination controller could stop processing service profile updates, if a proxy subscribed to those updates stops reading them; this is a followup to the issue [#11491] fixed in [edge-23.10.3] ([#11546])
* In the Destination controller, added informer lag histogram metrics to track whenever the Kubernetes objects watched by the controller are falling behind the state in the kube-apiserver ([#11534])
* In the multicluster service mirror, extended the target gateway resolution logic to take into account all the possible IPs a hostname might resolve to, rather than just the first one (thanks @MrFreezeex!) ([#11499])
* Added probes to the debug container to appease environments requiring probes for all containers ([#11308])

[edge-23.10.3]: https://github.com/linkerd/linkerd2/releases/tag/edge-23.10.3
[#11546]: https://github.com/linkerd/linkerd2/pull/11546
[#11534]: https://github.com/linkerd/linkerd2/pull/11534
[#11499]: https://github.com/linkerd/linkerd2/pull/11499
[#11308]: https://github.com/linkerd/linkerd2/pull/11308

## edge-23.11.1

This edge release fixes two bugs in the Destination controller that could cause outbound connections to hang indefinitely.

* helm: Introduce configurable values for protocol detection ([#11536])
* destination: Fix GetProfiles error when address is opaque and unmeshed ([#11556])
* destination: Return NotFound for unknown pod names ([#11540])
* proxy: Log controller errors at WARN
* proxy: Fix grpc_status metric labels for inbound traffic

[#11536]: https://github.com/linkerd/linkerd2/pull/11536
[#11556]: https://github.com/linkerd/linkerd2/pull/11556
[#11540]: https://github.com/linkerd/linkerd2/pull/11540

## edge-23.10.4

This edge release includes a fix for the `ServiceProfile` CRD resource schema.
The schema incorrectly required `not` response matches to be arrays, while the in-cluster validator parsed `not` response matches as objects. In addition, an issue has been fixed in `linkerd profile`. When used with the `--open-api` flag, it would not strip trailing slashes when generating a resource from swagger specifications.

* Fixed an issue where trailing slashes wouldn't be stripped when generating `ServiceProfile` resources through `linkerd profile --open-api` ([#11519])
* Fixed an issue in the `ServiceProfile` CRD schema. The schema incorrectly required that a `not` response match should be an array, which the service profile validator rejected since it expected an object. The schema has been updated to properly indicate that `not` values should be an object ([#11510]; fixes [#11483])
* Improved logging in the destination controller by adding the client pod's name to the logging context. This will improve visibility into the messages sent and received by the control plane from a specific proxy ([#11532])
* Fixed an issue in the destination controller where the metadata API would not initialize a `Job` informer. The destination controller uses the metadata API to retrieve `Job` metadata, and relies mostly on informers. Without an initialized informer, an error message would be logged, and the controller relied on direct API calls ([#11541]; fixes [#11531])

[#11541]: https://github.com/linkerd/linkerd2/pull/11541
[#11532]: https://github.com/linkerd/linkerd2/pull/11532
[#11531]: https://github.com/linkerd/linkerd2/issues/11531
[#11519]: https://github.com/linkerd/linkerd2/pull/11519
[#11510]: https://github.com/linkerd/linkerd2/pull/11510
[#11483]: https://github.com/linkerd/linkerd2/issues/11483

## edge-23.10.3

This edge release fixes issues in the proxy and Destination controller which can result in Linkerd proxies sending traffic to stale endpoints. In addition, it contains other bugfixes and updates dependencies to include patches for the security advisories [CVE-2023-44487]/GHSA-qppj-fm5r-hxr3 and GHSA-c827-hfw6-qwvm.

* Fixed an issue where the Destination controller could stop processing changes in the endpoints of a destination, if a proxy subscribed to that destination stops reading service discovery updates. This issue results in proxies attempting to send traffic for that destination to stale endpoints ([#11491], fixes [#11480], [#11279], and [#10590])
* Fixed a regression introduced in stable-2.13.0 where proxies would not terminate unused service discovery watches, exerting backpressure on the Destination controller which could cause it to become stuck ([linkerd2-proxy#2484] and [linkerd2-proxy#2486])
* Added `INFO`-level logging to the proxy when endpoints are added or removed from a load balancer. These logs are enabled by default, and can be disabled by [setting the proxy log level][proxy-log-level] to `warn,linkerd=info,linkerd_proxy_balance=warn` or similar ([linkerd2-proxy#2486]); see the annotation sketch after this list
* Fixed a regression where the proxy rendered `grpc_status` metric labels as a string rather than as the numeric status code ([linkerd2-proxy#2480]; fixes [#11449])
* Extended `linkerd-jaeger`'s `imagePullSecrets` Helm value to also apply to the `namespace-metadata` ServiceAccount ([#11504])
* Updated the control plane's dependency on the `google.golang.org/grpc` Go package to include patches for [CVE-2023-44487]/GHSA-qppj-fm5r-hxr3 ([#11496])
* Updated dependencies on `rustix` to include patches for GHSA-c827-hfw6-qwvm ([linkerd2-proxy#2488] and [#11512]).
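
For reference, one way to apply the log-level setting mentioned above is the `config.linkerd.io/proxy-log-level` annotation on a workload's pod template. This is only a minimal sketch: the Deployment name, namespace, and surrounding fields are illustrative, not part of the release itself.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # illustrative workload name
  namespace: my-namespace # illustrative namespace
spec:
  template:
    metadata:
      annotations:
        # Quiet the balancer endpoint add/remove logs introduced in this release
        config.linkerd.io/proxy-log-level: "warn,linkerd=info,linkerd_proxy_balance=warn"
```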

[#10590]: https://github.com/linkerd/linkerd2/issues/10590
[#11279]: https://github.com/linkerd/linkerd2/issues/11279
[#11491]: https://github.com/linkerd/linkerd2/pull/11491
[#11449]: https://github.com/linkerd/linkerd2/issues/11449
[#11480]: https://github.com/linkerd/linkerd2/issues/11480
[#11496]: https://github.com/linkerd/linkerd2/pull/11496
[#11504]: https://github.com/linkerd/linkerd2/issues/11504
[#11512]: https://github.com/linkerd/linkerd2/issues/11512
[linkerd2-proxy#2480]: https://github.com/linkerd/linkerd2-proxy/pull/2480
[linkerd2-proxy#2484]: https://github.com/linkerd/linkerd2-proxy/pull/2484
[linkerd2-proxy#2486]: https://github.com/linkerd/linkerd2-proxy/pull/2486
[linkerd2-proxy#2488]: https://github.com/linkerd/linkerd2-proxy/pull/2488
[proxy-log-level]: https://linkerd.io/2.14/tasks/modifying-proxy-log-level/
[CVE-2023-44487]: https://github.com/advisories/GHSA-qppj-fm5r-hxr3

## edge-23.10.2

This edge release includes a fix addressing an issue during upgrades for instances not relying on automated webhook certificate management (like cert-manager provides).

* Added a `checksum/config` annotation to the destination and proxy injector deployment manifests, to force restarting those workloads whenever their webhook secrets change during upgrade (thanks @iAnomaly!) ([#11440])
* Fixed policy controller error when deleting a Gateway API HTTPRoute resource ([#11471])

[#11440]: https://github.com/linkerd/linkerd2/pull/11440
[#11471]: https://github.com/linkerd/linkerd2/pull/11471

## edge-23.10.1

This edge release adds additional configurability to Linkerd's viz and multicluster extensions.

* Added a `podAnnotations` Helm value to allow adding additional annotations to the Linkerd-Viz Prometheus Deployment ([#11365]) (thanks @cemenson)
* Added `imagePullSecrets` Helm values to the multicluster chart so that it can be installed in an air-gapped environment. ([#11285]) (thanks @lhaussknecht)

[#11365]: https://github.com/linkerd/linkerd2/issues/11365
[#11285]: https://github.com/linkerd/linkerd2/issues/11285

## edge-23.9.4

This edge release makes Linkerd even better.

* Added a controlPlaneVersion override to the `linkerd-control-plane` Helm chart to support including SHA256 image digests in Linkerd manifests (thanks @cromulentbanana!) ([#11406])
* Improved `linkerd viz check` to attempt to validate that the Prometheus scrape interval will work well with the CLI and Web query parameters ([#11376])
* Improved CLI error handling to print differentiated error information when versioncheck.linkerd.io cannot be resolved (thanks @dtaskai) ([#11377])
* Fixed an issue where the destination controller would not update pod metadata for profile resolutions for a pod accessed via the host network (e.g. HostPort endpoints) ([#11334]).
* Added a validating webhook config for httproutes.gateway.networking.k8s.io resources (thanks @mikutas!)
  ([#11150])
* Introduced a new `multicluster check --timeout` flag to limit the time allowed for Kubernetes API calls (thanks @moki1202) ([#11420])

[#11150]: https://github.com/linkerd/linkerd2/pull/11150
[#11334]: https://github.com/linkerd/linkerd2/pull/11334
[#11376]: https://github.com/linkerd/linkerd2/pull/11376
[#11377]: https://github.com/linkerd/linkerd2/pull/11377
[#11406]: https://github.com/linkerd/linkerd2/pull/11406
[#11420]: https://github.com/linkerd/linkerd2/pull/11420

## edge-23.9.3

This edge release updates the proxy's dependency on the `rustls` library to patch security vulnerability [RUSTSEC-2023-0052][RUSTSEC-2023-0052-0] (GHSA-8qv2-5vq6-g2g7), a potential CPU usage denial-of-service attack when accepting a TLS handshake from an untrusted peer with a maliciously-crafted certificate. Furthermore, this edge release contains a few improvements to the control plane and jaeger extension Helm charts.

* Addressed security vulnerability [RUSTSEC-2023-0052][RUSTSEC-2023-0052-0] in the proxy by updating its dependency on the `rustls` library
* Added a `prometheusUrl` field for the heartbeat job in the control plane Helm chart (thanks @david972!) ([#11343]; fixes [#11342])
* Introduced support for arbitrary labels in the `podMonitors` field in the control plane Helm chart (thanks @jseiser!) ([#11222]; fixes [#11175])
* Added support for config merge and Deployment environment to `opentelemetry-collector` in the jaeger extension (thanks @iAnomaly!) ([#11283])

[#11283]: https://github.com/linkerd/linkerd2/pull/11283
[#11222]: https://github.com/linkerd/linkerd2/pull/11222
[#11175]: https://github.com/linkerd/linkerd2/issues/11175
[#11343]: https://github.com/linkerd/linkerd2/pull/11343
[#11342]: https://github.com/linkerd/linkerd2/issues/11342
[RUSTSEC-2023-0052-0]: https://rustsec.org/advisories/RUSTSEC-2023-0052.html

## edge-23.9.2

This edge release updates the proxy's dependency on the `webpki` library to patch security vulnerability [RUSTSEC-2023-0052] (GHSA-8qv2-5vq6-g2g7), a potential CPU usage denial-of-service attack when accepting a TLS handshake from an untrusted peer with a maliciously-crafted certificate.

* Addressed security vulnerability [RUSTSEC-2023-0052] in the proxy ([#11361])
* Fixed `linkerd check --proxy` incorrectly checking the proxy version of pods in the `completed` state (thanks @mikutas!) ([#11295]; fixes [#11280])
* Removed unnecessary `linkerd.io/helm-release-version` annotation from the `linkerd-control-plane` Helm chart (thanks @mikutas!) ([#11329]; fixes [#10778])

[RUSTSEC-2023-0052]: https://rustsec.org/advisories/RUSTSEC-2023-0052.html
[#11295]: https://github.com/linkerd/linkerd2/pull/11295
[#11280]: https://github.com/linkerd/linkerd2/issues/11280
[#11361]: https://github.com/linkerd/linkerd2/pull/11361
[#11329]: https://github.com/linkerd/linkerd2/pull/11329
[#10778]: https://github.com/linkerd/linkerd2/issues/10778

## edge-23.9.1

This edge release introduces a fix for service discovery on endpoints that use hostPorts. Previously, the destination service would return the pod IP for the discovery request, which could break connectivity on pod restart. To fix this, direct pod communication for a pod bound on a hostPort will always return the hostIP. In addition, this release fixes a security vulnerability (CVE-2023-2603) detected in the CNI plugin and proxy-init images, and includes a number of other fixes and small improvements.

* Addressed security vulnerability CVE-2023-2603 in proxy-init and CNI plugin ([#11296])
* Introduced resource requests/limits for the policy controller resource in the control plane Helm chart ([#11301])
* Fixed an issue where an empty `remoteDiscoverySelector` field in a multicluster link would cause all services to be mirrored ([#11309])
* Removed the timeout from the `linkerd multicluster gateways` command; when no metrics exist, the command now returns instantly ([#11265])
* Improved help messaging for `linkerd multicluster link` ([#11265])
* Changed how hostPort lookups are handled in the destination service. Previously, when doing service discovery for an endpoint bound on a hostPort, the destination service would return the corresponding pod IP. On pod restart, this could lead to loss of connectivity on the client's side. The destination service now always returns host IPs for service discovery on an endpoint that uses hostPorts ([#11328])
* Updated HTTPRoute webhook rule to validate all apiVersions of the resource (thanks @mikutas!) ([#11149])
* Fixed erroneous `skipped` messages when injecting namespaces with `linkerd inject` (thanks @mikutas!) ([#10231])

[#11309]: https://github.com/linkerd/linkerd2/issues/11309
[#11296]: https://github.com/linkerd/linkerd2/discussions/11296
[#11328]: https://github.com/linkerd/linkerd2/pull/11328
[#11301]: https://github.com/linkerd/linkerd2/issues/11301
[#11265]: https://github.com/linkerd/linkerd2/pull/11265
[#11149]: https://github.com/linkerd/linkerd2/pull/11149
[#10231]: https://github.com/linkerd/linkerd2/issues/10231

## stable-2.14.0

This release introduces direct pod-to-pod multicluster service mirroring. When clusters are deployed on a flat network, Linkerd can export multicluster services in a way where cross-cluster traffic does not need to go through the gateway. This enhances multicluster authentication and can reduce the need for provisioning public load balancers.

In addition, this release adds support for the [Gateway API](https://gateway-api.sigs.k8s.io/) HTTPRoute resource (in the `gateway.networking.k8s.io` api group). This improves compatibility with other tools that use these resources such as [Flagger](https://flagger.app/) and [Argo Rollouts](https://argoproj.github.io/rollouts/). The release also includes a large number of features and improvements to HTTPRoute including the ability to set timeouts and the ability to define consumer-namespace HTTPRoutes.

Finally, this release includes a number of bugfixes, performance improvements, and other smaller additions.

**Upgrade notes**: Please see the [upgrade instructions](https://linkerd.io/2/tasks/upgrade/#upgrade-notice-stable-2140).

* Multicluster
  * Remove namespace field from cluster scoped resources to fix pruning
  * Added -o json flag for the `linkerd multicluster gateways` command (thanks @hiteshwani29)
  * Introduced `logFormat` value to the multicluster `Link` Helm Chart (thanks @bunnybilou!)
  * Added leader-election capabilities to the service-mirror controller
  * Added high-availability (HA) mode for the multicluster service-mirror
  * Added a new `remoteDiscoverySelector` field to the multicluster `Link` CRD, which enables a service mirroring mode where the control plane performs discovery for the mirrored service from the remote cluster, rather than creating Endpoints for the mirrored service in the source cluster
* HTTPRoute
  * Fixed `linkerd uninstall` issue for HTTPRoute
  * Added support for `gateway.networking.k8s.io` HTTPRoutes in the policy controller
  * Added support for RequestHeaderModifier and RequestRedirect HTTP filters in outbound policy; filters may be added at the route or backend level
  * Added support for the `ResponseHeaderModifier` HTTPRoute filter
  * Added support for HTTPRoutes defined in the consumer namespace
  * Added support for HTTPRoute `parent_refs` that do not specify a port
* CRDs
  * Patched the MeshTLSAuthentication CRD to force providing at least one identity/identityRef
* Control Plane
  * Send Opaque protocol hint for opaque ports in destination controller
  * Replaced deprecated `failure-domain.beta.kubernetes.io/zone` labels in Helm charts with `topology.kubernetes.io/zone` labels (thanks @piyushsingariya!)
  * Replaced `server_port_subscribers` Destination controller gauge metric with `server_port_subscribes` and `server_port_unsubscribes` counter metrics
* Proxy
  * Handle Opaque protocol hints on endpoints
  * Added `outbound_http_balancer_endpoints` metric
  * Fixed missing route_ metrics for requests with ServiceProfiles
  * Fixed proxy startup failure when using the `config.linkerd.io/admin-port` annotation (thanks @jclegras!)
  * Added distinguishable version information to proxy logs and metrics
* CLI
  * The `linkerd diagnostics policy` command now displays outbound policy when the target resource is a Service
  * A fix for HA validation checks when Linkerd is installed with Helm. Thanks @mikutas!!
* Viz
  * Add the `kubelet` NetworkAuthentication back since it is used by the `linkerd viz allow-scrapes` subcommand.
  * Fixed the `linkerd viz check` command so that it will wait until the viz extension becomes ready
  * Fixed an issue where specifying a `remote_write` config would cause the Prometheus config to be invalid (thanks @hiteshwani29)
  * Improved validation of the `--to` and `--from` flags for the `linkerd viz stat` command (thanks @pranoyk)
  * Added `-o jsonpath` flag to `linkerd viz tap` to allow filtering output fields (thanks @hiteshwani29!)
  * Fixed a Grafana error caused by an incorrect datasource (thanks @albundy83!)
  * Fixed missing "Services" menu item in the Spanish localization for the `linkerd-viz` web dashboard (thanks @mclavel!)
* Extensions
  * Added missing label `linkerd.io/extension` to certain resources to ensure they are pruned when appropriate (thanks @ClementRepo)
  * Added tolerations and nodeSelector support in extensions `namespace-metadata` Jobs (thanks @pssalman!)
* Init Containers
  * Added an option for disabling the network validator's security context for environments that provide their own
* CNI
  * Added --set flag to install-cni plugin (thanks @amit-62!)
  * Fixed missing resource-cni labels on linkerd-cni, which blocked the linkerd-cni pods from coming up when the injector was broken (thanks @migueleliasweb!)
* Build
  * Build improvements for multi-arch build artifacts. Thanks @MarkSRobinson!!

This release includes changes from a massive list of contributors!
A special thank-you to everyone who helped make this release possible:

* Amir Karimi @AMK9978
* Amit Kumar @amit-62
* Andre Marcelo-Tanner @kzap
* Andrew @andrew-gropyus
* Arnaud Beun @bunnybilou
* Clement @proxfly
* Dima @krabradosty
* Grégoire Bellon-Gervais @albundy83
* Harsh Soni @harsh020
* Jean-Charles Legras @jclegras
* Loong Dai @daixiang0
* Mark Robinson @MarkSRobinson
* Miguel Elias dos Santos @migueleliasweb
* Pranoy Kumar Kundu @pranoyk
* Ryan Hristovski @ryanhristovski
* Takumi Sue @mikutas
* Zakhar Bessarab @zekker6
* hiteshwani29 @hiteshwani29
* pheianox
* pssalman @pssalman

## edge-23.8.3

This is a release candidate for stable-2.14.0; we encourage you to help try it out! This edge release contains a number of improvements to the multi-cluster features introduced in the last edge release supporting flat networks. It also hardens the containers' security stance by removing write access to the root filesystem.

* Enhanced `linkerd multicluster link` to allow clusters to be linked without a gateway ([#11226])
* Added cluster store size gauge metric ([#11256])
* Disabled local traffic policy for remote discovery ([#11257])
* Fixed various innocuous multi-cluster warnings ([#11251], [#11246], [#11253])
* Set `readOnlyRootFilesystem: true` in all the containers, as they don't require write permissions ([#11221]; fixes [#11142]) (thanks @mikutas!)

[#11226]: https://github.com/linkerd/linkerd2/pull/11226
[#11256]: https://github.com/linkerd/linkerd2/pull/11256
[#11257]: https://github.com/linkerd/linkerd2/pull/11257
[#11251]: https://github.com/linkerd/linkerd2/pull/11251
[#11246]: https://github.com/linkerd/linkerd2/pull/11246
[#11253]: https://github.com/linkerd/linkerd2/pull/11253
[#11221]: https://github.com/linkerd/linkerd2/pull/11221
[#11142]: https://github.com/linkerd/linkerd2/issues/11142

## edge-23.8.2

This edge release adds improvements to Linkerd's multi-cluster features as part of the [flat network support] planned for Linkerd stable-2.14.0. In addition, it fixes an issue ([#10764]) where warnings about an invalid metric were logged frequently by the Destination controller.

* Added a new `remoteDiscoverySelector` field to the multicluster `Link` CRD, which enables a service mirroring mode where the control plane performs discovery for the mirrored service from the remote cluster, rather than creating Endpoints for the mirrored service in the source cluster ([#11190], [#11201], [#11220], and [#11224]); see the `Link` sketch after this list
* Fixed missing "Services" menu item in the Spanish localization for the `linkerd-viz` web dashboard ([#11229]) (thanks @mclavel!)
* Replaced `server_port_subscribers` Destination controller gauge metric with `server_port_subscribes` and `server_port_unsubscribes` counter metrics ([#11206]; fixes [#10764])
* Replaced deprecated `failure-domain.beta.kubernetes.io/zone` labels in Helm charts with `topology.kubernetes.io/zone` labels ([#11148]; fixes [#11114]) (thanks @piyushsingariya!)
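
To illustrate the `remoteDiscoverySelector` field referenced above, here is a heavily trimmed `Link` sketch. Normally the resource is generated by `linkerd multicluster link`; the cluster name, namespace, and label value below are illustrative assumptions, not required values.

```yaml
apiVersion: multicluster.linkerd.io/v1alpha1
kind: Link
metadata:
  name: target-cluster            # illustrative; created by `linkerd multicluster link`
  namespace: linkerd-multicluster
spec:
  targetClusterName: target-cluster
  # Services in the target cluster carrying this label are mirrored in
  # remote-discovery mode: their endpoints are resolved from the remote
  # cluster instead of being copied into the source cluster.
  remoteDiscoverySelector:
    matchLabels:
      mirror.linkerd.io/exported: remote-discovery   # illustrative label value
```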

[#10764]: https://github.com/linkerd/linkerd2/issues/10764
[#11114]: https://github.com/linkerd/linkerd2/issues/11114
[#11148]: https://github.com/linkerd/linkerd2/issues/11148
[#11190]: https://github.com/linkerd/linkerd2/issues/11190
[#11201]: https://github.com/linkerd/linkerd2/issues/11201
[#11206]: https://github.com/linkerd/linkerd2/issues/11206
[#11220]: https://github.com/linkerd/linkerd2/issues/11220
[#11224]: https://github.com/linkerd/linkerd2/issues/11224
[#11229]: https://github.com/linkerd/linkerd2/issues/11229
[flat network support]: https://linkerd.io/2023/07/20/enterprise-multi-cluster-at-scale-supporting-flat-networks-in-linkerd/

## edge-23.8.1

This edge release restores a proxy setting so that it sheds load less aggressively while under high load, which should result in lower error rates (see #11055). It also removes the usage of host networking in the linkerd-cni extension.

* Changed the default HTTP request queue capacities for the inbound and outbound proxies back to 10,000 requests (see #11055 and #11198)
* Removed the need for host networking in the linkerd-cni DaemonSet (#11141) (thanks @abhijeetgauravm!)

## edge-23.7.3

This edge release improves Linkerd's support for HttpRoute by allowing `parent_ref` ports to be optional, allowing HttpRoutes to be defined in a consumer's namespace, and adding support for the `ResponseHeaderModifier` filter. It also fixes a panic in the destination controller.

* Added an option for disabling the network validator's security context for environments that provide their own
* Added high-availability (HA) mode for the multicluster service-mirror
* Added support for HttpRoute `parent_refs` that do not specify a port
* Fixed a Grafana error caused by an incorrect datasource (thanks @albundy83!)
* Added support for HttpRoutes defined in the consumer namespace
* Improved the granularity of logging levels in the control plane
* Fixed a race condition in the destination controller that could cause it to panic
* Added support for the `ResponseHeaderModifier` HttpRoute filter
* Updated extension CLI commands to prefer the `--registry` flag over the `LINKERD_DOCKER_REGISTRY` environment variable, making the precedence more consistent (thanks @harsh020!)

## edge-23.7.2

This edge release introduces support for HTTP filters configured through both `policy.linkerd.io` and `gateway.networking.k8s.io` HTTPRoute resources. Currently, RequestHeaderModifier and RequestRedirect HTTP filters are supported. Additionally, this release fixes an issue with the linkerd-cni chart.

* Added support for RequestHeaderModifier and RequestRedirect HTTP filters in outbound policy; filters may be added at the route or backend level
* Fixed missing resource-cni labels on linkerd-cni, which blocked the linkerd-cni pods from coming up when the injector was broken (thanks @migueleliasweb!)

## edge-23.7.1

This edge release adds support for the upstream `gateway.networking.k8s.io` HTTPRoute resource (in addition to the `policy.linkerd.io` CRD installed by Linkerd). Furthermore, it fixes a bug where the ingress-mode proxy would fail to fall back to ServiceProfiles for destinations without HTTPRoutes.

* Added support for `gateway.networking.k8s.io` HTTPRoutes in the policy controller
* Added distinguishable version information to proxy logs and metrics
* Fixed incorrect handling of `NotFound` client policies in ingress-mode proxies

## edge-23.6.3

This edge release adds leader-election capabilities to the service-mirror controller under the hood, as a precursor to HA mode in an upcoming release. It also includes a `linkerd viz tap` improvement and a proxy startup bugfix, both contributed by the community!

* Added leader-election capabilities to the service-mirror controller
* Added `-o jsonpath` flag to `linkerd viz tap` to allow filtering output fields (thanks @hiteshwani29!)
* Fixed proxy startup failure when using the `config.linkerd.io/admin-port` annotation (thanks @jclegras!)

## edge-23.6.2

This edge release introduces timeout capabilities for HTTPRoutes in a manner compatible with the proposed changes to HTTPRoute in [kubernetes-sigs/gateway-api#1997](https://github.com/kubernetes-sigs/gateway-api/pull/1997). This release also includes several small improvements and fixes:

* A fix for HA validation checks when Linkerd is installed with Helm. Thanks @mikutas!!
* Build improvements for multi-arch build artifacts. Thanks @MarkSRobinson!!

## edge-23.6.1

This edge release changes the behavior of the CNI plugin to run exclusively in "chained mode". Instead of creating its own configuration file, the CNI plugin will now wait until a `conf` file exists before appending its configuration. Additionally, this change includes a bug fix for topology aware service routing.

* Changed the CNI plugin installer to always run in 'chained' mode; the plugin will now wait until another CNI plugin is installed before appending its configuration
* Fixed a bug where topology-aware routing would not be disabled while the service was under load (thanks @MarkSRobinson!)
* Introduced `logFormat` value to the multicluster `Link` Helm Chart (thanks @bunnybilou!)

## edge-23.5.3

This edge release includes fixes for several bugs related to HTTPRoute handling.

* Fixed an issue where the `namespace` field on HTTPRoute `backendRef`s was ignored, and the backend Service would always be assumed to be in the same namespace as the parent Service
* Fixed an issue where default authorizations generated for readiness and liveness probes would fail if the probe path included URI query parameters
* Fixed the proxy not using gRPC response classification for gRPC requests to destinations without ServiceProfiles

## edge-23.5.2

This edge release adds some minor improvements in the MeshTLSAuthentication CRD and the extensions charts, and fixes an issue with `linkerd multicluster check`.

* Added tolerations and nodeSelector support in extensions `namespace-metadata` Jobs (thanks @pssalman!)
* Patched the MeshTLSAuthentication CRD to force providing at least one identity/identityRef
* Fixed the `linkerd multicluster check` command failing in the presence of lots of mirrored services

## edge-23.5.1

This edge release introduces the ability to configure the proxy's discovery cache timeouts via annotations. While most users will not need to do this, it can be useful to improve the mesh's resilience to control plane failures. This release also includes a number of other important improvements and bug fixes.
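
As a rough illustration of the annotations described above, the following sketch applies them to a pod template. The annotation names come from this release; the workload shown, the image, and the timeout values are only examples, not recommended defaults.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                      # illustrative workload
  annotations:
    # Keep unused outbound/inbound discovery results cached for longer, so
    # brief control plane outages are less disruptive (values are examples).
    config.linkerd.io/proxy-outbound-discovery-cache-unused-timeout: "60s"
    config.linkerd.io/proxy-inbound-discovery-cache-unused-timeout: "60s"
spec:
  containers:
  - name: app
    image: example/app:latest       # illustrative image
```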

* Added -o json flag for the `linkerd multicluster gateways` command (thanks @hiteshwani29)
* Added missing label `linkerd.io/extension` to certain resources to ensure they are pruned when appropriate (thanks @ClementRepo)
* Fixed a memory leak in the service mirror controller
* Improved validation of the `--to` and `--from` flags for the `linkerd viz stat` command (thanks @pranoyk)
* Fixed an issue with W3C trace context propagation which caused proxy spans to be siblings rather than children of their original parent (thanks @whiskeysierra)
* Updated the Linkerd CNI plugin base docker image from Debian to Alpine
* Fixed an issue where specifying a `remote_write` config would cause the Prometheus config to be invalid (thanks @hiteshwani29)
* Added the ability to configure the proxy's discovery cache timeouts with the `config.linkerd.io/proxy-outbound-discovery-cache-unused-timeout` and `config.linkerd.io/proxy-inbound-discovery-cache-unused-timeout` annotations
* Fixed the `linkerd viz check` command so that it will wait until the viz extension becomes ready
* Fixed an issue where meshed pods could not communicate with themselves through a ClusterIP Service

## edge-23.4.3

This edge release improves compatibility with ArgoCD by changing the Linkerd control plane to create Lease resources at runtime rather than including them in the Helm chart. It also addresses a CVE by upgrading an underlying dependency.

* Upgraded `h2` dependency to address CVE-2023-26964
* Fixed an issue where the `server_port_subscribers` metric in the Destination controller was sometimes absent
* Removed the policy-controller-write Lease from the control plane Helm chart in favor of creating it at runtime
* Updated the proxy-injector to pass opaque port lists to the proxy as ranges rather than individually, greatly reducing the size of proxy manifests when large opaque port ranges are set
* Fixed an issue where the proxy was performing protocol detection on ports marked as opaque
* Improved backwards compatibility between 2.13 proxies and 2.12 control planes

## edge-23.4.2

This edge release contains a number of bug fixes.

* CLI
  * Fixed `linkerd uninstall` issue for HttpRoute
  * The `linkerd diagnostics policy` command now displays outbound policy when the target resource is a Service
* CNI
  * Fixed an incompatibility issue with the AWS CNI addon in EKS that was preventing pods from acquiring networking after scaling up nodes (thanks @frimik!)
  * Added --set flag to install-cni plugin (thanks @amit-62!)
* Control Plane
  * Fixed an issue where the policy controller always used the default `cluster.local` domain
  * Send Opaque protocol hint for opaque ports in destination controller
* Helm
  * Fixed an issue in the viz Helm chart where the namespace metadata template would throw `unexpected argument found` errors
  * Fixed Jaeger chart installation failure
* Multicluster
  * Remove namespace field from cluster scoped resources to fix pruning
* Proxy
  * Updated `h2` dependency to include a patch for a theoretical denial-of-service vulnerability discovered in CVE-2023-26964
  * Handle Opaque protocol hints on endpoints
  * Changed the proxy's default log level to silence warnings from `trust_dns_proto` that are generally spurious.
  * Added `outbound_http_balancer_endpoints` metric
  * Fixed missing route_ metrics for requests with ServiceProfiles
* Viz
  * Bump prometheus image to v2.43.0
  * Add the `kubelet` NetworkAuthentication back since it is used by the `linkerd viz allow-scrapes` subcommand.

## stable-2.13.1

This stable release fixes an issue in the policy controller where a non-default cluster domain would return incorrect authorities in the outbound policy API. Additionally, this release updates a proxy dependency to fix CVE-2023-26964.

* Proxy
  * Updated `h2` dependency to include a patch for a theoretical denial-of-service vulnerability discovered in CVE-2023-26964
* Control Plane
  * Fixed an issue where the policy controller always used the default `cluster.local` domain
* Helm
  * Fixed an issue in the viz Helm chart where the namespace metadata template would throw `unexpected argument found` errors

## stable-2.13.0

This release introduces client-side policy to Linkerd, including dynamic routing and circuit breaking. [Gateway API](https://gateway-api.sigs.k8s.io/) HTTPRoutes can now be used to configure policy for outbound (client) proxies as well as inbound (server) proxies, by creating HTTPRoutes with Service resources as their `parentRef`. See the Linkerd documentation for tutorials on [dynamic request routing] and [circuit breaking].

New functionality for debugging HTTPRoute-based policy is also included in this release, including [new proxy metrics] and the ability to display outbound policies in the `linkerd diagnostics policy` CLI command.

In addition, this release adds `network-validator`, a new init container to be used when CNI is enabled. `network-validator` ensures that local iptables rules are working as expected. It will validate this before linkerd-proxy starts. `network-validator` replaces the `noop` container, runs as `nobody`, and drops all capabilities before starting.

Finally, this release includes a number of bugfixes, performance improvements, and other smaller additions.

**Upgrade notes**: Please see the [upgrade instructions][upgrade-2130].

* CRDs
  * HTTPRoutes may now have Service parents, to configure outbound policy
  * Updated HTTPRoute version from `v1alpha1` to `v1beta2`
* CLI
  * Added a new `linkerd prune` command to the CLI (including most extensions) to remove resources which are no longer part of Linkerd's manifests
  * Added additional shortnames for Linkerd policy resources (thanks @javaducky!)
  * The `linkerd diagnostics policy` command now displays outbound policy when the target resource is a Service
* Control Plane
  * The policy controller now discovers outbound policy configurations from HTTPRoutes that target Services.
  * Added OutboundPolicies API, for use by `linkerd-proxy` to route outbound traffic
  * Added Prometheus `/metrics` endpoint to the admin server, with process metrics
  * Fixed QueryParamMatch parsing for HTTPRoutes
  * Added the policy status controller which writes the `status` field to HTTPRoutes when a parent reference Server accepts or rejects it
  * Added KubeAPI server ports to `ignoreOutboundPorts` of `proxy-injector`
  * No longer apply `waitBeforeExitSeconds` to control plane, viz and jaeger extension pods
  * Added support for the `internalTrafficPolicy` of a service (thanks @yc185050!)
  * Added block chomping to strip trailing new lines in ConfigMap (thanks @avdicl!)
  * Added protection against nil dereference in resources helm template
  * Added support for Pod Security Admission (Pod Security Policy resources are still supported but disabled by default)
  * Lowered non-actionable error messages in the Destination log to debug-level entries to avoid triggering false alarms (thanks @siddharthshubhampal!)
  * Fixed an issue with EndpointSlice endpoint reconciliation on slice deletion; when using more than one slice, a `NoEndpoints` event would be sent to the proxy regardless of the number of endpoints that were still available (thanks @utay!)
  * Improved diagnostic log messages
  * Fixed sending of spurious profile updates
  * Removed unnecessary Namespaces access from the destination controller RBAC
  * Added the server_port_subscribers metric to track the number of subscribers to Server changes associated with a pod's port
  * Added the service_subscribers metric to track the number of subscribers to Service changes
  * Fixed a small memory leak in the opaque ports watcher
* Proxy
  * Use the new OutboundPolicies API, supporting Gateway API-style routes in the outbound proxy
  * Added support for dynamic request routing based on HTTPRoutes
  * Added HTTP circuit breaking
  * Added `outbound_route_backend_http_requests_total`, `outbound_route_backend_grpc_requests_total`, and `outbound_http_balancer_endpoints` metrics
  * Changed the proxy's behavior when traffic splitting so that only services that are not in failfast are used. This will enable the proxy to manage failover without external coordination
  * Updated tokio (async runtime) in the proxy which should reduce CPU usage, especially for the proxy's pod-local (i.e., same network namespace) communication
* linkerd-proxy-init
  * Changed `proxy-init` iptables rules to be idempotent upon init pod restart (thanks @jim-minter!)
  * Improved logging in `proxy-init` and `linkerd-cni`
  * Added a `proxyInit.privileged` setting to control whether the `proxy-init` initContainer runs as a privileged process
* CNI
  * Added static and dynamic port overrides for CNI eBPF to work with socket-level load balancing
  * Added `network-validator` init container to ensure that iptables rules are working as expected
  * Added a `resources` field in the linkerd-cni chart (thanks @jcogilvie!)
* Viz
  * Added `tap.ignoredHeaders` Helm value to the linkerd-viz chart. This value allows users to specify a comma-separated list of header names which will be ignored by Linkerd Tap (thanks @ryanhristovski!)
  * Removed duplicate SecurityContext in Prometheus manifest
  * Added new flag `--viz-namespace` which avoids requiring permissions for listing all namespaces in `linkerd viz` subcommands (thanks @danibaeyens!)
  * Removed the TrafficSplit page from the Linkerd viz dashboard (thanks @h-dav!)
  * Introduced new values in the `viz` chart to allow for arbitrary annotations on the `Service` objects (thanks @sgrzemski!)
  * Added an optional AuthorizationPolicy to authorize Grafana to Prometheus in the Viz extension
* Multicluster
  * Removed duplicate AuthorizationPolicy for probes from the multicluster gateway Helm chart
  * Updated wording for linkerd-multicluster cluster when it fails to probe a remote gateway mirror
  * Added multicluster gateway `nodeSelector` and `tolerations` helm parameters
  * Added new configuration options for the multicluster gateway:
    * `gateway.deploymentAnnotations`
    * `gateway.terminationGracePeriodSeconds` (thanks @bunnybilou!)
    * `gateway.loadBalancerSourceRanges` (thanks @Tyrion85!)
* Extensions
  * Removed dependency on the `curlimages/curl` 3rd-party image used to initialize extensions namespaces metadata (so they are visible by `linkerd check`), replaced by the new `extension-init` image
  * Converted `ServerAuthorization` resources to `AuthorizationPolicy` resources in Linkerd extensions
  * Removed policy resources bound to admin servers in extensions (previously these resources were used to authorize probes but now are authorized by default)
  * Fixed the link to the Jaeger dashboard in the viz dashboard (thanks @eugenegoncharuk!)
  * Updated linkerd-jaeger's collector to expose port 4318 in order to support HTTP alongside gRPC (thanks @uralsemih!)
  * Among other dependency updates, the no-longer maintained ghodss/yaml library was replaced with sigs.k8s.io/yaml (thanks @Juneezee!)

This release includes changes from a massive list of contributors! A special thank-you to everyone who helped make this release possible:

* Andrew Pinkham [@jambonrose](https://github.com/jambonrose)
* Arnaud Beun [@bunnybilou](https://github.com/bunnybilou)
* Carlos Tadeu Panato Junior [@cpanato](https://github.com/cpanato)
* Christian Segundo [@someone-stole-my-name](https://github.com/someone-stole-my-name)
* Dani Baeyens [@danibaeyens](https://github.com/danibaeyens)
* Duc Tran [@ductnn](https://github.com/ductnn)
* Eng Zer Jun [@Juneezee](https://github.com/Juneezee)
* Ivan Ivic [@Tyrion85](https://github.com/Tyrion85)
* Joe Bowbeer [@joebowbeer](https://github.com/joebowbeer)
* Jonathan Ogilvie [@jcogilvie](https://github.com/jcogilvie)
* Jun [@junnplus](https://github.com/junnplus)
* Loong Dai [@daixiang0](https://github.com/daixiang0)
* María Teresa Rojas [@mtrojas](https://github.com/mtrojas)
* Mo Sattler [@MoSattler](https://github.com/MoSattler)
* Oleg Vorobev [@olegy2008](https://github.com/olegy2008)
* Paul Balogh [@javaducky](https://github.com/javaducky)
* Peter Smit [@psmit](https://github.com/psmit)
* Ryan Hristovski [@ryanhristovski](https://github.com/ryanhristovski)
* Semih Ural [@uralsemih](https://github.com/uralsemih)
* Shubhodeep Mukherjee [@shubhodeep9](https://github.com/shubhodeep9)
* Siddharth S Pal [@siddharthshubhampal](https://github.com/siddharthshubhampal)
* Subhash Choudhary [@subhashchy](https://github.com/subhashchy)
* Szymon Grzemski [@sgrzemski](https://github.com/sgrzemski)
* Takumi Sue [@mikutas](https://github.com/mikutas)
* Yannick Utard [@utay](https://github.com/utay)
* Yu Cao [@yc185050](https://github.com/yc185050)
* anoxape [@anoxape](https://github.com/anoxape)
* bastienbosser [@bastienbosser](https://github.com/bastienbosser)
* bitfactory-sem-denbroeder [@bitfactory-sem-denbroeder](https://github.com/bitfactory-sem-denbroeder)
* cui fliter [@cuishuang](https://github.com/cuishuang)
* eugenegoncharuk [@eugenegoncharuk](https://github.com/eugenegoncharuk)
* h-dav [@h-dav](https://github.com/h-dav)
* martinkubrak [@martinkubra](https://github.com/martinkubra)
* verbotenj [@verbotenj](https://github.com/verbotenj)
* ziollek [@ziollek](https://github.com/ziollek)

[dynamic request routing]: https://linkerd.io/2.13/tasks/configuring-dynamic-request-routing
[circuit breaking]: https://linkerd.io/2.13/tasks/circuit-breakers
[new proxy metrics]: https://linkerd.io/2.13/reference/proxy-metrics/#outbound-xroute-metrics
[upgrade-2130]: https://linkerd.io/2/tasks/upgrade/#upgrade-notice-stable-2130

## edge-23.4.1

This is a release candidate for stable-2.13.0 — we encourage you to help try it out!

This edge release introduces request-level HTTP circuit-breaking using a consecutive failures failure accrual policy. Circuit breaking can be configured by adding failure accrual annotations to a Service.

In addition, this release adds new `outbound_route_backend_http_requests_total` and `outbound_route_backend_grpc_requests_total` proxy metrics, which can be used to track how routing rules and backend distributions apply to requests. These metrics contain labels describing the route's parent (i.e. a Service), the route resource being used, and the backend resource being used by each request.

* Proxy
  * Added discovery of failure accrual policies from the OutboundPolicy API
  * Implemented consecutive failures failure accrual policy
  * Added INFO-level logging on failure accrual changes
  * Added `outbound_route_backend_http_requests_total` and `outbound_route_backend_grpc_requests_total` metrics
* Policy Controller
  * Added failure accrual configuration to the OutboundPolicy API
  * Added Prometheus `/metrics` endpoint to the admin server, with process metrics
  * Changed the policy controller to only accept HTTPRoutes when the parentRef is a ClusterIP Service
  * Added ports to service references in the OutboundPolicy API
* Viz
  * Added `tap.ignoredHeaders` Helm value to the linkerd-viz chart. This value allows users to specify a comma-separated list of header names which will be ignored by Linkerd Tap (thanks @ryanhristovski!)
  * Removed duplicate SecurityContext in Prometheus manifest
* Multicluster
  * Removed duplicate AuthorizationPolicy for probes from the multicluster gateway Helm chart

## edge-23.3.4

This edge release further enhances the OutboundPolicies API used by the proxy to route outbound traffic, and continues extending the HTTPRoute resource's Status field. It also starts integrating circuit-breaking functionality into the proxy, which will be configurable in a subsequent iteration.

* Continued iterating on the HTTPRoute's Status field, by extending support for routes parented to Services, and adding a ResolvedRefs condition reflecting the status of BackendRefs
* Updated the OutboundPolicies API such that only HTTPRoutes with an Accepted status of `true` are considered when routing outbound requests
* Improved handling of invalid backends, allowing the configuration of error responses
* Added new flag `--viz-namespace` which avoids requiring permissions for listing all namespaces in `linkerd viz` subcommands (thanks @danibaeyens!)
* Among other dependency updates, the no-longer maintained ghodss/yaml library was replaced with sigs.k8s.io/yaml (thanks @Juneezee!)

## edge-23.3.3

This edge release removes TrafficSplits from the Linkerd dashboard as well as fixing a number of issues in the policy controller.

* Removed the TrafficSplit page from the Linkerd viz dashboard
* Fixed an issue where the policy controller was not returning the correct status for non-Service authorities
* Fixed an issue where the policy controller could use large amounts of CPU when lease API calls failed

## edge-23.3.2

This edge release continues to improve dynamic Policy statuses and introduces support for header-based routing.

* Destination Controller
  * Added OutboundPolicies API, for use by `linkerd-proxy` to route outbound traffic
  * Improved diagnostic log messages
  * Fixed sending of spurious profile updates
* Proxy
  * Use the new OutboundPolicies API, supporting Gateway API-style routes in the outbound proxy
* Policy Controller
  * Support highly available Policy Controller by utilizing `policy-controller-write` Lease when patching HTTPRoutes
  * Consider the `status` field and filter out HTTPRoutes which have not been accepted
* Added KubeAPI server ports to `ignoreOutboundPorts` of `proxy-injector`
* Updated HTTPRoute version from `v1alpha1` to `v1beta2`
* Updated `network-validator` helm charts to use `proxy-init` resources
* Fixed Grafana regular expression, enabling monitoring of filesystem usage (thanks @h-dav!)

## edge-23.3.1

This edge release continues to build support under the hood for the upcoming features in 2.13. Also included are several dependency updates and less verbose logging.

* Removed dependency on the `curlimages/curl` 3rd-party image used to initialize extensions namespaces metadata (so they are visible by `linkerd check`), replaced by the new `extension-init` image
* Lowered non-actionable error messages in the Destination log to debug-level entries to avoid triggering false alarms (thanks @siddharthshubhampal!)

## edge-23.2.3

This edge release includes a number of fixes and introduces a new CLI command, `linkerd prune`. The new `prune` command should be used to remove resources which are no longer part of the Linkerd manifest when doing an upgrade. Previously, the recommendation was to use `linkerd upgrade` in conjunction with `kubectl apply --prune`; however, that will not remove resources which are not part of the input manifest, and it will not detect cluster-scoped resources. `linkerd prune` (included in all core extensions) should be preferred over it.

Additionally, this change contains a few fixes from our external contributors, and a change to the `viz` Helm chart which allows for arbitrary annotations on `Service` objects. Last but not least, the release contains a few proxy internal changes to prepare for the new client policy API.

* Added a new `linkerd prune` command to the CLI (including extensions) to remove resources which are no longer part of Linkerd's manifests
* Introduced new values in the `viz` chart to allow for arbitrary annotations on the `Service` objects (thanks @sgrzemski!)
* Fixed up a comment in k8s API wrapper (thanks @ductnn!)
* Fixed an issue with EndpointSlice endpoint reconciliation on slice deletion; when using more than one slice, a `NoEndpoints` event would be sent to the proxy regardless of the number of endpoints that were still available (thanks @utay!)

## edge-23.2.2

This edge release adds the policy status controller which writes the `status` field to HTTPRoutes when a parent reference Server accepts or rejects the HTTPRoute. This field is currently not consumed by the policy controller, but acts as the first step for considering HTTPRoute `status` when serving policy.

Additionally, the destination controller now uses the Kubernetes metadata API for resources which it only needs to track the metadata for — Nodes and ReplicaSets. For all other resources it tracks, it uses additional information, so it continues to use the full API as before.
* Fixed the error message to include the colliding Server in the policy controller's admission webhook validation * Updated wording for linkerd-multicluster when it fails to probe a remote gateway mirror * Removed unnecessary Namespaces access from the destination controller RBAC * Added the Kubernetes metadata API in the destination controller for watching Nodes and ReplicaSets * Fixed QueryParamMatch parsing for HTTPRoutes * Added the policy status controller which writes the `status` field to HTTPRoutes when a parent reference Server accepts or rejects it ## edge-23.2.1 This edge release sees the `linkerd-cni` plugin moved to `linkerd2-proxy-init` and released from that repository. An iptables improvement to `linkerd-cni` and `proxy-init` is the main focus. Other minor fixes are also included. * Changed `proxy-init` iptables rules to be idempotent upon init pod restart (thanks @jim-minter!) * Improved logging in `proxy-init` and `linkerd-cni` * Added the `server_port_subscribers` metric to track the number of subscribers to Server changes associated with a pod's port * Added the `service_subscribers` metric to track the number of subscribers to Service changes * Fixed a small memory leak in the opaque ports watcher * Stopped applying `waitBeforeExitSeconds` to control plane, viz, and jaeger extension pods * Added support for the `internalTrafficPolicy` of a service (thanks @yc185050!) * Added `limits` and `requests` to the network-validator for ResourceQuota interop * Added block chomping to strip trailing newlines in the ConfigMap (thanks @avdicl!) * Added multicluster gateway `nodeSelector` and `tolerations` helm parameters * Added protection against nil dereference in the resources helm template ## edge-23.1.2 This edge release fixes a memory leak in the Linkerd control plane that could occur when many pods were created. It also adds a number of new configuration options to the multicluster extension's gateway. * Added additional shortnames for Linkerd policy resources (thanks @javaducky!) * Added new configuration options for the multicluster gateway: * `gateway.deploymentAnnotations` * `gateway.terminationGracePeriodSeconds` (thanks @bunnybilou!) * `gateway.loadBalancerSourceRanges` (thanks @Tyrion85!) * Added an optional AuthorizationPolicy to authorize Grafana to Prometheus in the Viz extension * Fixed the link to the Jaeger dashboard in the viz dashboard (thanks @eugenegoncharuk!) * Fixed an issue where control plane components could fail to start on large clusters because of failing readiness probes while caches were being initialized * Fixed a memory leak in the Destination controller * Fixed an issue where PodSecurityPolicies could reject Linkerd control plane components due to the `seccompProfile` setting ## edge-23.1.1 This edge release fixes a caching issue in the destination controller, converts deprecated policy resources, and introduces several changes to how the proxy works. A bug in the destination controller that could potentially lead to stale pods being considered in the load balancer has been fixed. Several Linkerd extensions were still using the now-deprecated ServerAuthorization resource. These instances have now been converted to using AuthorizationPolicy. Additionally, several policy resources that authorized probes have been removed, since probes are now authorized by default. As part of ongoing policy work, there are several changes to how the proxy works. Routes are now lazily initialized so that service profile routes will not show up in metrics until the route is used.
Furthermore, the proxy’s traffic splitting behavior has changed so that only available resources are used, resulting in fewer failfast errors. Finally, this edge release contains a number of fixes and improvements from our contributors. * Converted `ServerAuthorization` resources to `AuthorizationPolicy` resources in Linkerd extensions * Removed policy resources bound to admin servers in extensions (previously these resources were used to authorize probes, but probes are now authorized by default) * Added a `resources` field in the linkerd-cni chart (thanks @jcogilvie!) * Fixed an issue in the CLI where `--identity-external-ca` would set an incorrect field (thanks @anoxape!) * Fixed an issue in the destination controller's cache that could result in stale endpoints when using EndpointSlice objects * Added namespace to namespace-metadata resources in Helm (thanks @joebowbeer!) * Added support for Pod Security Admission (Pod Security Policy resources are still supported but disabled by default) * Changed routes to be initialized lazily. Service Profile routes will no longer show up in metrics until the route is used (default routes are always available when no Service Profile is defined for a service) * Changed the proxy's behavior when traffic splitting so that only services that are not in failfast are used. This will enable the proxy to manage failover without external coordination * Updated tokio (async runtime) in the proxy, which should reduce CPU usage, especially for the proxy's pod-local (i.e., in the same network namespace) communication * Fixed an issue where `linkerd viz tap` would display the wrong latency/duration values (thanks @olegy2008!) ## edge-22.12.1 This edge release introduces static and dynamic port overrides for CNI eBPF socket-level load balancing. In certain installations when CNI plugins run in eBPF mode, socket-level load balancing rewrites packet destinations to port 6443; like port 443 before it, this port is now also skipped on control plane components so that they can communicate with the Kubernetes API before their proxies are running. Additionally, a potential panic and a false warning have been fixed in the destination controller. * Updated linkerd-jaeger's collector to expose port 4318 in order to support HTTP alongside gRPC (thanks @uralsemih!) * Added a `proxyInit.privileged` setting to control whether the `proxy-init` initContainer runs as a privileged process * Fixed a potential panic in the destination controller caused by concurrent writes when dealing with Endpoint updates * Fixed a false warning when looking up HostPort mappings on Pods * Added static and dynamic port overrides for CNI eBPF to work with socket-level load balancing ## edge-22.11.3 This edge release fixes connection errors to pods that use `hostPort` configurations. The CNI `network-validator` init container features improved error logging, and the default `linkerd-cni` DaemonSet configuration is updated to tolerate all node taints so that the CNI runs on all nodes in a cluster.
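To see the new blanket toleration on a running cluster, the DaemonSet spec can be inspected directly; the namespace and DaemonSet name below assume a default `linkerd install-cni` setup and may differ in customized installs:

```bash
# Illustrative check only: an `operator: Exists` toleration with no key
# tolerates every node taint. The namespace and DaemonSet name are assumptions
# based on a default linkerd-cni install.
kubectl -n linkerd-cni get daemonset linkerd-cni \
  -o jsonpath='{.spec.template.spec.tolerations}'
```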
* Fixed the `destination` service to properly discover targets using a `hostPort` different from their `containerPort`, which was causing 502 errors * Upgraded the `network-validator` with better logging, allowing users to determine whether failures occur as a result of their environment or the tool itself * Added a default `Exists` toleration to the `linkerd-cni` DaemonSet, allowing it to be deployed on all nodes by default, regardless of taints ## edge-22.11.2 This edge release introduces the use of the Kubernetes metadata API in the proxy-injector and tap-injector components. This can reduce the IO and memory footprint for those components as they now only need to track the metadata for certain resources, rather than the entire resource itself. Similar changes will be made for the destination component in an upcoming release. * Bumped HTTP dependencies to fix a potential deadlock in HTTP/2 clients * Changed the proxy-injector and tap-injector components to use the metadata API, which should result in less memory consumption ## edge-22.11.1 This edge release ships a few fixes for Linkerd's dashboard and the multicluster extension. Additionally, a regression has been fixed in the CLI that blocked upgrades from versions older than 2.12.0, due to missing CRDs (even if the CRDs were present in-cluster). Finally, the release includes changes to the helm charts to allow for arbitrary (user-provided) labels on Linkerd workloads. * Fixed an issue in the CLI where upgrades from any version prior to stable-2.12.0 would fail when using the `--from-manifest` flag * Removed un-injectable namespaces, such as kube-system, from the unmeshed resource notification in the dashboard (thanks @MoSattler!) * Fixed an issue where the dashboard would respond to requests with 404 due to wrong root paths in the HTML script (thanks @junnplus!) * Removed the proxyProtocol field in the multicluster gateway policy; this has the effect of changing the protocol from 'HTTP/1.1' to 'unknown' (thanks @psmit!) * Fixed the multicluster gateway UID when installing through the CLI; prior to this change the 'runAsUser' field would be empty * Changed the helm chart for the control plane and all extensions to support arbitrary labels on resources (thanks @bastienbosser!) ## edge-22.10.3 This edge release adds `network-validator`, a new init container to be used when CNI is enabled. `network-validator` ensures that local iptables rules are working as expected. It will validate this before linkerd-proxy starts. `network-validator` replaces the `noop` container, runs as `nobody`, and drops all capabilities before starting. * Validated CNI `iptables` configuration during pod startup * Fixed the "cluster networks contains all services" check failing for services with no ClusterIP * Removed the kubectl version check from `linkerd check` (thanks @ziollek!) * Set `readOnlyRootFilesystem: true` in the viz chart (thanks @mikutas!) * Fixed `linkerd multicluster install` by re-adding the `pause` container image in the chart * Fixed a hardcoded image value in linkerd-viz's namespace-metadata.yml template (thanks @bastienbosser!) ## edge-22.10.2 This edge release fixes an issue with CNI chaining that was preventing the Linkerd CNI plugin from working with other CNI plugins such as Cilium. It also includes several other fixes.
* Updated Grafana dashboards to use a variable duration parameter so that they can be used when Prometheus has a longer scrape interval (thanks @TarekAS) * Fixed handling of .conf files in the CNI plugin so that the Linkerd CNI plugin can be used alongside other CNI plugins such as Cilium * Added a `linkerd diagnostics policy` command to inspect Linkerd policy state * Added a check that ClusterIP services are in the cluster networks * Added a noop init container to injected pods when the CNI plugin is enabled to prevent certain scenarios where a pod can get stuck without an IP address * Fixed a bug where the `config.linkerd.io/proxy-version` annotation could be empty ## edge-22.10.1 This edge release fixes some sections of the Viz dashboard appearing blank, and adds an optional PodMonitor resource to the Helm chart to enable easier integration with the Prometheus Operator. It also includes many fixes submitted by our contributors. * Fixed the dashboard sections Tap, Top, and Routes appearing blank (thanks @MoSattler!) * Added an optional PodMonitor resource to the main Helm chart (thanks @jaygridley!) * Fixed the CLI ignoring the `--api-addr` flag (thanks @mikutas!) * Expanded the `linkerd authz` command to display AuthorizationPolicy resources that target namespaces (thanks @aatarasoff!) * Fixed the `NotIn` label selector operator in the policy resources, which was erroneously treated as `In` * Fixed warning logic around the "linkerd-viz ClusterRoles exist" and "linkerd-viz ClusterRoleBindings exist" checks in `linkerd viz check` * Fixed proxies emitting some duplicate inbound metrics ## stable-2.12.1 This release includes several control plane and proxy fixes for `stable-2.12.0`. In particular, it fixes issues related to control plane HTTP servers' header read timeouts that resulted in decreased controller success rates, lowers the inbound connection pool idle timeout in the proxy, and fixes an issue where the jaeger injector would put pods into an error state when upgrading from stable-2.11.x. Additionally, this release adds the `linkerd.io/trust-root-sha256` annotation to all injected workloads, allowing predictable comparison of all workloads' trust anchors via the Kubernetes API. For Windows users, note that the Linkerd CLI's `nupkg` file for Chocolatey is once again included in the release assets (it was previously removed in stable-2.10.0). * Proxy * Lowered the inbound connection pool idle timeout to 3s * Control Plane * Updated AdmissionRegistration API version usage to v1 * Added the `linkerd.io/trust-root-sha256` annotation on all injected workloads to indicate the certificate bundle * Updated fields in `AuthorizationPolicy` and `MeshTLSAuthentication` to conform to the specification (thanks @aatarasoff!) * Updated the identity controller to not require a `ClusterRoleBinding` to read all deployment resources * Increased servers' header read timeouts so they no longer match default probe and Prometheus scrape intervals * Helm * Restored the `namespace` field in Linkerd helm charts * Updated the `PodDisruptionBudget` `apiVersion` from `policy/v1beta1` to `policy/v1` (thanks @Vrx555!) * Extensions * Fixed the jaeger injector interfering with upgrades to 2.12.x ## edge-22.9.2 This release fixes an issue where the jaeger injector would put pods into an error state when upgrading from stable-2.11.x.
* Updated AdmissionRegistration API version usage to v1 * Fixed the jaeger injector interfering with upgrades to 2.12.x ## edge-22.9.1 This release adds the `linkerd.io/trust-root-sha256` annotation to all injected workloads, allowing predictable comparison of all workloads' trust anchors via the Kubernetes API. Additionally, this release lowers the inbound connection pool idle timeout to 3s. This should help avoid socket errors, especially for Kubernetes probes. * Added the `linkerd.io/trust-root-sha256` annotation on all injected workloads to indicate the certificate bundle * Lowered the inbound connection pool idle timeout to 3s * Restored the `namespace` field in Linkerd helm charts * Updated fields in `AuthorizationPolicy` and `MeshTLSAuthentication` to conform to the specification (thanks @aatarasoff!) * Updated the identity controller to not require a `ClusterRoleBinding` to read all deployment resources ## edge-22.8.3 Increased control plane HTTP servers' read timeouts so that they no longer match the default probe intervals. This was leading to closed connections and a decreased controller success rate. ## stable-2.12.0 This release introduces route-based policy to Linkerd, allowing users to define and enforce authorization policies based on HTTP routes in a fully zero-trust way. These policies are built on Linkerd's strong workload identities, secured by mutual TLS, and configured using types from the Kubernetes [Gateway API](https://gateway-api.sigs.k8s.io/). The 2.12 release also introduces optional request logging (commonly called "access logging" in web servers), optional support for `iptables-nft`, and a host of other improvements and performance enhancements. Additionally, the `linkerd-smi` extension is now required to use TrafficSplit, and the installation process has been updated to separate management of the Linkerd CRDs from the main installation process. With the CLI, you'll need to run `linkerd install --crds` before running `linkerd install`; with Helm, you'll install the new `linkerd-crds` chart, then the `linkerd-control-plane` chart. These charts are now versioned using [SemVer](https://semver.org) independently of Linkerd releases. For more information, see the [upgrade notes][upgrade-2120]. **Upgrade notes**: Please see the [upgrade instructions][upgrade-2120]. * Proxy * Added a `config.linkerd.io/shutdown-grace-period` annotation to limit the duration that the proxy may wait for graceful shutdown * Added a `config.linkerd.io/access-log` annotation to enable logging of workload requests * Added a new `iptables-nft` mode for the `proxy-init` initContainer * Added support for non-HTTP traffic forwarding within the mesh in `ingress` mode * Added the `/env.json` log diagnostic endpoint * Added a new `process_uptime_seconds_total` metric to track proxy uptime in seconds * Added support for dynamically discovering policies for ports that are not documented in a pod's `containerPorts` * Added support for route-based inbound HTTP metrics (`route_group`/`route_kind`/`route_name`) * Added a new annotation to configure skipping subnets in the init container (`config.linkerd.io/skip-subnets`), needed, e.g., in Docker-in-Docker workloads (thanks @michaellzc!)
* Control Plane * Added support for per-route policy by supporting AuthorizationPolicy resources which can target HttpRoute or Server resources * Added support for bound service account token volumes for the control plane and injected workloads * Removed kube-system exclusions from watchers to fix service discovery for workloads in the kube-system namespace (thanks @JacobHenner!) * Updated healthcheck to ignore `Terminated` state for pods (thanks @AgrimPrasad!) * Updated the default policy controller log level to `info`; the controller will now emit INFO level logs for some of its dependencies * Added probe authorization by default, so clusters that use a default `deny` policy no longer need to explicitly authorize probes * Fixed an issue where the proxy-injector would break when using `nodeAffinity` values for the control plane * Fixed an issue where certain control plane components were not restarting as necessary after a trust root rotation * Removed SMI functionality in the default Linkerd installation; this is now part of the `linkerd-smi` extension * CLI * Fixed the `linkerd check` command crashing when unexpected pods are found in a Linkerd namespace * Updated the `linkerd authz` command to support AuthorizationPolicy and HttpRoute resources * Updated `linkerd check` to allow RSA-signed trust anchors (thanks @danibaeyens!) * `linkerd install --crds` must be run before `linkerd install` * `linkerd upgrade --crds` must be run before `linkerd upgrade` * Fixed invalid YAML syntax in the viz extension's tap-injector template (thanks @wc-s!) * Fixed an issue where the `--default-inbound-policy` setting was not being respected * Added support for AuthorizationPolicy and HttpRoute to the `viz authz` command * Added support for AuthorizationPolicy and HttpRoute to the `viz stat` command * Added support for policy metadata in `linkerd viz tap` * Helm * Split the `linkerd2` chart into `linkerd-crds` and `linkerd-control-plane` * Charts are now versioned using [SemVer](https://semver.org) independently of Linkerd releases * Added a missing port in the Linkerd viz chart documentation (thanks @haswalt!) * Changed the `proxy.await` Helm value so that users can now disable `linkerd-await` on control plane components * Added the `policyController.probeNetworks` Helm value for configuring the networks that probes are expected to be performed from * Extensions * Added annotations to allow Linkerd extension deployments to be evicted by the autoscaler when necessary * Added the ability to run the Linkerd CNI plugin in non-chained (stand-alone) mode * Added a ServiceAccount token Secret to the multicluster extension to support Kubernetes versions >= v1.24 This release includes changes from a massive list of contributors, including engineers from Adidas, Intel, Red Hat, Shopify, Sourcegraph, Timescale, and others.
A special thank-you to everyone who helped make this release possible: Agrim Prasad [@AgrimPrasad](https://github.com/AgrimPrasad) Ahmed Al-Hulaibi [@ahmedalhulaibi](https://github.com/ahmedalhulaibi) Aleksandr Tarasov [@aatarasoff](https://github.com/aatarasoff) Alexander Berger [@alex-berger](https://github.com/alex-berger) Ao Chen [@chenaoxd](https://github.com/chenaoxd) Badis Merabet [@badis](https://github.com/badis) Bjørn [@Crevil](https://github.com/Crevil) Brian Dunnigan [@bdun1013](https://github.com/bdun1013) Christian Schlotter [@chrischdi](https://github.com/chrischdi) Dani Baeyens [@danibaeyens](https://github.com/danibaeyens) David Symons [@multimac](https://github.com/multimac) Dmitrii Ermakov [@ErmakovDmitriy](https://github.com/ErmakovDmitriy) Elvin Efendi [@ElvinEfendi](https://github.com/ElvinEfendi) Evan Hines [@evan-hines-firebolt](https://github.com/evan-hines-firebolt) Eng Zer Jun [@Juneezee](https://github.com/Juneezee) Gustavo Fernandes de Carvalho [@gusfcarvalho](https://github.com/gusfcarvalho) Harry Walter [@haswalt](https://github.com/haswalt) Israel Miller [@imiller31](https://github.com/imiller31) Jack Gill [@jackgill](https://github.com/jackgill) Jacob Henner [@JacobHenner](https://github.com/JacobHenner) Jacob Lorenzen [@Jaxwood](https://github.com/Jaxwood) Joakim Roubert [@joakimr-axis](https://github.com/joakimr-axis) Josh Ault [@jault-figure](https://github.com/jault-figure) João Soares [@jasoares](https://github.com/jasoares) jtcarnes [@jtcarnes](https://github.com/jtcarnes) Kim Christensen [@kichristensen](https://github.com/kichristensen) Krzysztof Dryś [@krzysztofdrys](https://github.com/krzysztofdrys) Lior Yantovski [@lioryantov](https://github.com/lioryantov) Martin Anker Have [@mahlunar](https://github.com/mahlunar) Michael Lin [@michaellzc](https://github.com/michaellzc) Michał Romanowski [@michalrom089](https://github.com/michalrom089) Naveen Nalam [@nnalam](https://github.com/nnalam) Nick Calibey [@ncalibey](https://github.com/ncalibey) Nikola Brdaroski [@nikolabrdaroski](https://github.com/nikolabrdaroski) Or Shachar [@or-shachar](https://github.com/or-shachar) Pål-Magnus Slåtto [@dev-slatto](https://github.com/dev-slatto) Raman Gupta [@rocketraman](https://github.com/rocketraman) Ricardo Gândara Pinto [@rmgpinto](https://github.com/rmgpinto) Roberth Strand [@roberthstrand](https://github.com/roberthstrand) Sankalp Rangare [@sankalp-r](https://github.com/sankalp-r) Sascha Grunert [@saschagrunert](https://github.com/saschagrunert) Steve Gray [@steve-gray](https://github.com/steve-gray) Steve Zhang [@zhlsunshine](https://github.com/zhlsunshine) Takumi Sue [@mikutas](https://github.com/mikutas) Tanmay Bhat [@tanmay-bhat](https://github.com/tanmay-bhat) Táskai Dominik [@dtaskai](https://github.com/dtaskai) Ujjwal Goyal [@importhuman](https://github.com/importhuman) Weichung Shaw [@wc-s](https://github.com/wc-s) Wim de Groot [@wim-de-groot](https://github.com/wim-de-groot) Yannick Utard [@utay](https://github.com/utay) Yurii Dzobak [@yuriydzobak](https://github.com/yuriydzobak) 罗泽轩 [@spacewander](https://github.com/spacewander) [upgrade-2120]: https://linkerd.io/2/tasks/upgrade/#upgrade-notice-stable-2120 ## stable-2.12.0-rc2 This release is the second release candidate for stable-2.12.0. 
At this point the Helm charts can be retrieved from the stable repo: ```sh helm repo add linkerd https://helm.linkerd.io/stable helm repo up helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds helm install linkerd-control-plane \ -n linkerd \ --set-file identityTrustAnchorsPEM=ca.crt \ --set-file identity.issuer.tls.crtPEM=issuer.crt \ --set-file identity.issuer.tls.keyPEM=issuer.key \ linkerd/linkerd-control-plane ``` The following lists all the changes since edge-22.8.2: * Fixed inheritance of the `linkerd.io/inject` annotation from Namespace to Workloads when its value is `ingress` * Added the `config.linkerd.io/default-inbound-policy: all-authenticated` annotation to linkerd-multicluster’s Gateway deployment so that all clients are required to be authenticated * Added a `ReadHeaderTimeout` of 10s to all the Go `http.Server` instances, to avoid being vulnerable to "slowloris" attacks * Added a check in `linkerd viz check --proxy` to warn when a namespace has the `config.linkerd.io/default-inbound-policy: deny` annotation, which would not authorize scrapes coming from the linkerd-viz Prometheus instance * Added validation for accepted values for the `--default-inbound-policy` flag * Fixed an invalid URL in the `linkerd install --help` output * Added a `--destination-pod` flag to the `linkerd diagnostics endpoints` subcommand * Added `proxyInit.runAsUser` to `values.yaml`, defaulting to non-zero, to complement the new default `proxyInit.runAsRoot: false` that was recently changed ## edge-22.8.2 This release is considered a release candidate for stable-2.12.0 and we encourage you to try it out! It includes an update to the multicluster extension which adds support for Kubernetes v1.24 and also updates many CLI commands to support the new policy resources: AuthorizationPolicy and HTTPRoute. * Updated linkerd check to allow RSA-signed trust anchors (thanks @danibaeyens!) * Fixed some invalid YAML in the viz extension's tap-injector template (thanks @wc-s!) * Added support for AuthorizationPolicy and HttpRoute to viz authz command * Added support for AuthorizationPolicy and HttpRoute to viz stat * Added support for policy metadata in linkerd tap * Fixed an issue where certain control plane components were not restarting as necessary after a trust root rotation * Added a ServiceAccount token Secret to the multicluster extension to support Kubernetes versions >= v1.24 * Fixed an issue where the --default-inbound-policy setting was not being respected ## edge-22.8.1 This release introduces default probe authorization. This means that on clusters that use a default `deny` policy, probes do not have to be explicitly authorized using policy resources. Additionally, the `policyController.probeNetworks` Helm value has been added, which allows users to configure the networks that probes are expected to be performed from. Additionally, the `linkerd authz` command has been updated to support the policy resources AuthorizationPolicy and HttpRoute. Finally, some smaller changes include the ability to disable `linkerd-await` on control plane components (using the existing `proxy.await` configuration) and changing the default iptables mode back to `legacy` to support more cluster environments by default.
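As a hedged illustration of the new Helm value, the probe networks could be narrowed with an override like the following; the release and chart names follow the earlier Helm instructions in this changelog, and the CIDRs are placeholders:

```bash
# Illustrative only: configure the networks that kubelet probes are expected
# to originate from. The CIDRs below are examples, not recommendations.
helm upgrade linkerd-control-plane linkerd/linkerd-control-plane \
  -n linkerd \
  --reuse-values \
  --set "policyController.probeNetworks={10.0.0.0/8,192.168.0.0/16}"
```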
* Updated the `linkerd authz` command to support AuthorizationPolicy and HttpRoute resources * Changed the `proxy.await` Helm value so that users can now disable `linkerd-await` on control plane components * Added probe authorization by default, so clusters that use a default `deny` policy no longer need to explicitly authorize probes * Added the ability to run the Linkerd CNI plugin in non-chained (stand-alone) mode * Added the `policyController.probeNetworks` Helm value for configuring the networks that probes are expected to be performed from * Changed the default iptables mode to `legacy` ## edge-22.7.3 This release adds a new `nft` iptables mode, used by default in proxy-init. When used, firewall configuration will be set up through the `iptables-nft` binary; this should allow hosts that do not support `iptables-legacy` (such as RHEL-based environments) to make use of the init container. The older `iptables-legacy` mode is still supported, but it must be explicitly turned on. Moreover, this release also replaces the `HTTPRoute` CRD with Linkerd's own version, and includes a number of fixes and improvements. * Added a new `iptables-nft` mode for proxy-init. When running in this mode, the firewall will be configured with the `nft` kernel API; this should allow users to run the init container on RHEL-family hosts * Fixed an issue where the proxy-injector would break when using `nodeAffinity` values for the control plane * Updated healthcheck to ignore `Terminated` state for pods (thanks @AgrimPrasad!) * Replaced the `HTTPRoute` CRD from the `gateway.networking.k8s.io` API group with a similar version from the `policy.linkerd.io` API group. While the CRD is similar, it does not support the `Gateway` type, does not contain the `backendRefs` fields, and does not support the `RequestMirror` and `ExtensionRef` filter types. * Updated the default policy controller log level to `info`; the controller will now emit INFO level logs for some of its dependencies * Added validation to ensure `HTTPRoute` paths are absolute; relative paths are not supported by the proxy and the policy controller admission server will reject any routes that use paths which do not start with `/` ## edge-22.7.2 This release adds support for per-route authorization policy using the AuthorizationPolicy and HttpRoute resources. It also adds a configurable shutdown grace period to the proxy which can be used to ensure that proxy graceful shutdown completes within a certain time, even if there are outstanding open connections. * Removed kube-system exclusions from watchers to fix service discovery for workloads in the kube-system namespace (thanks @JacobHenner!) * Added annotations to allow Linkerd extension deployments to be evicted by the autoscaler when necessary * Added a missing port in the Linkerd viz chart documentation (thanks @haswalt!) * Added support for per-route policy by supporting AuthorizationPolicy resources which target HttpRoute resources * Fixed the `linkerd check` command crashing when unexpected pods are found in a Linkerd namespace * Added a `config.linkerd.io/shutdown-grace-period` annotation to configure the proxy's maximum grace period for graceful shutdown ## edge-22.7.1 This release includes a security improvement. When a user manually specified the `policyValidator.keyPEM` setting, the value was incorrectly included in the `linkerd-config` configmap. This means that this private key was erroneously exposed to service accounts with read access to this configmap.
Practically, this means that the Linkerd `proxy-injector`, `identity`, and `heartbeat` pods could read this value. This should **not** have exposed this private key to other unauthorized users unless additional role bindings were added outside of Linkerd. Nevertheless, we recommend that users who manually set control plane certificates update the credentials for the policy validator after upgrading Linkerd. Additionally, the linkerd-multicluster extension has several fixes related to fail fast errors during link watch restarts, improper label matching for mirrored services, and properly cleaning up mirrored endpoints in certain situations. Lastly, the proxy can now retry gRPC requests that have responses with a TRAILERS frame. A fix to reduce redundant load balancer updates should also result in less connection churn. * Changed unit tests to use the newly introduced `prommatch` package for asserting expected metrics (thanks @krzysztofdrys!) * Fixed the Docker container runtime check to only run during `linkerd install` rather than `linkerd check --pre` * Changed linkerd-multicluster's remote cluster watcher to assume the gateway is alive when starting, preventing fail fast errors during restarts (thanks @chenaoxd!) * Added `matchLabels` and `matchExpressions` to linkerd-multicluster's Link CRD * Fixed linkerd-multicluster's label selector to properly select resources that match the expected label value, rather than just the presence of the label * Fixed linkerd-multicluster's cluster watcher to properly clean up endpoints belonging to remote headless services that are no longer mirrored * Added the HttpRoute CRD which will be used by future policy features * Fixed CNI plugin event processing where file updates could sometimes be skipped, leading to the update not being acknowledged * Fixed redundant load balancer updates in the proxy that could cause unnecessary connection churn * Fixed gRPC request retries for responses that contain a TRAILERS frame * Fixed the dashboard's `linkerd check` failing due to missing RBAC for listing pods in the cluster * Fixed the API check that ensures access to the Server CRD (thanks @aatarasoff!) * Changed `linkerd authz` to match the labels of pre-fetched Pods rather than making multiple API calls, resulting in a significant speed-up (thanks @aatarasoff!) * Unset `policyValidator.keyPEM` in the `linkerd-config` ConfigMap ## edge-22.6.2 This edge release bumps the minimum supported Kubernetes version from `v1.20` to `v1.21`, introduces some new changes, and includes a few bug fixes. Most notably, a bug has been fixed in the proxy's outbound load balancer that could cause panics, especially when the balancer would process many service discovery updates in a short period of time. This release also fixes a panic in the proxy-injector, and introduces a change that will include HTTP probe ports in the proxy's inbound ports configuration, to be used for policy discovery. * Fixed a bug in the proxy's outbound load balancer that could cause panics when many discovery updates were processed in short time periods * Added `runtimeClassName` options to Linkerd's Helm chart (thanks @jtcarnes!) * Introduced a change in the proxy-injector that will configure the proxy's inbound ports configuration with the pod's probe ports (HTTPGet) * Added godoc links in the project README file (thanks @spacewander!)
* Increased the minimum supported Kubernetes version from `v1.20` to `v1.21` * Fixed an issue where the proxy-injector would not emit events for resources that receive annotation patches but are skipped for injection * Refactored `PublicIPToString` to handle both IPv4 and IPv6 addresses consistently (thanks @zhlsunshine!) * Replaced the usage of branches with tags, and pinned the `cosign-installer` action to `v1` (thanks @saschagrunert!) * Fixed an issue where the proxy-injector would panic if resources have an unsupported owner kind ## edge-22.6.1 This edge release fixes an issue where Linkerd injected pods could not be evicted by Cluster Autoscaler. It also adds the `--crds` flag to `linkerd check` which validates that the Linkerd CRDs have been installed with the proper versions. The previously noisy "cluster networks can be verified" check has been replaced with one that now verifies each running Pod IP is contained within the current `clusterNetworks` configuration value. Additionally, linkerd-viz is no longer required for linkerd-multicluster's `gateways` command, allowing the `Gateways` API to be marked as deprecated for 2.12. Finally, several security issues have been patched in the Docker images now that the builds are pinned only to minor (rather than patch) versions. * Replaced manual IP address parsing with functions available in the Go standard library (thanks @zhlsunshine!) * Removed the linkerd-multicluster `gateways` command's dependency on the linkerd-viz extension * Fixed an issue where Linkerd-injected pods were prevented from being evicted by Cluster Autoscaler * Added the `dst_target_cluster` metric to linkerd-multicluster's service-mirror controller probe traffic * Added the `--crds` flag to `linkerd check` which validates that the Linkerd CRDs have been installed * Removed the Docker image's hardcoded patch versions so that builds pick up patch releases without manual intervention * Replaced the "cluster networks can be verified" check with a "cluster networks contains all pods" check which ensures that all currently running Pod IPs are contained by the current `clusterNetworks` configuration * Added IPv6-compatible IP address generation in certain control plane components that were only generating IPv4 (thanks @zhlsunshine!) * Deprecated linkerd-viz's `Gateways` API which is no longer used by linkerd-multicluster * Added the `promm` package for making programmatic Prometheus assertions in tests (thanks @krzysztofdrys!) * Added the `runAsUser` configuration to extensions to fix a PodSecurityPolicy violation when CNI is enabled ## edge-22.5.3 This edge release fixes a few proxy issues, improves the upgrade process, and introduces retry configuration for Service Profiles generated from protobuf definitions. Also included are updates to the bash scripts to ensure that they follow best practices.
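The Service Profile change is easiest to see via `linkerd profile --proto`: methods whose protobuf definition marks them as idempotent are now emitted as retryable routes. The file, service, and namespace names below are hypothetical:

```bash
# Illustrative only: generate a ServiceProfile from a (hypothetical) protobuf
# definition and apply it; methods marked idempotent in the .proto are now
# emitted with retries enabled.
linkerd profile --proto Voting.proto voting-svc -n emojivoto | kubectl apply -f -
```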
* Polished the shell scripts (thanks @joakimr-axis) * Introduced retries to Service Profiles based on the idempotency option of the method in the proto definition, marking such routes as `isRetryable` (thanks @mahlunar) * Fixed proxy responses to CONNECT requests by removing the content-length and/or transfer-encoding headers from the response * Fixed DNS lookups in the proxy to consistently use A records when SRV records cannot be resolved * Added dynamic policy discovery to the proxy by evaluating traffic on ports not included in the `LINKERD2_PROXY_INBOUND_PORTS` environment variable * Added logic to require that the Linkerd CRDs are installed when running the `linkerd upgrade` command ## edge-22.5.2 This edge release ships a few changes to the chart values, a fix for multicluster headless services, and notable proxy features. HA functionality, such as PDBs, deployment strategies, and pod anti-affinity, has been split from the HA values and is now configurable for the control plane. On the proxy side, non-HTTP traffic will now be forwarded on the outbound side within the cluster when the proxy runs in ingress mode. * Updated `ingress-mode` proxies to forward non-HTTP traffic within the cluster (protocol detection will always be attempted for outbound connections) * Added a new proxy metric `process_uptime_seconds_total` to keep track of the number of seconds since the proxy started * Fixed an issue with multicluster headless service mirroring, where exported endpoints would be mirrored with a delay, or changes to the export label would be ignored * Split HA functionality, such as PodDisruptionBudgets, into multiple configurable values (thanks @evan-hines-firebolt for the initial work) ## edge-22.5.1 This edge release adds more flexibility to the MeshTLSAuthentication and AuthorizationPolicy policy resources by allowing them to target entire namespaces. It also fixes a race condition when multiple CNI plugins are installed together, as well as a number of other bugs. * Added support for MeshTLSAuthentication resources to target an entire namespace, authenticating all ServiceAccounts in that namespace * Fixed a panic in `linkerd install` when the `--ignore-cluster` flag is passed * Fixed an issue where pods would fail to start when `enablePSP` and `proxyInit.runAsRoot` are set * Added support for AuthorizationPolicy resources to target namespaces, applying to all Servers in that namespace * Fixed a race condition where the Linkerd CNI configuration could be overwritten when multiple CNI plugins are installed * Added a test for opaque ports using Service and Pod IPs (thanks @krzysztofdrys!) * Fixed an error in the linkerd-viz Helm chart in HA mode ## edge-22.4.1 In order to support having custom resources in the default Linkerd installation, the CLI install flow is now always a two-step process where `linkerd install --crds` must be run first to install CRDs only and then `linkerd install` is run to install everything else. This more closely aligns the CLI install flow with the Helm install flow where the CRDs are a separate chart. This also applies to `linkerd upgrade`. Also, the `config` and `control-plane` sub-commands have been removed from both `linkerd install` and `linkerd upgrade`. On the proxy side, this release fixes an issue where proxies would not honor the cluster's opaqueness settings for non-pod/service addresses. This could cause protocol detection to be performed, for instance, when using off-cluster databases.
This release also disables the use of regexes in Linkerd log filters (i.e., as set by `LINKERD2_PROXY_LOG`). Malformed log directives could, in theory, cause a proxy to stop responding. The `helm.sh/chart` label in some of the CRDs had its formatting fixed, which avoids issues when installing/upgrading through external tools that make use of it, such as recent versions of Flux. * Added a `--crds` flag to install/upgrade and removed the config/control-plane stages * Allowed the `AuthorizationPolicy` CRD to have an empty `requiredAuthenticationRefs` entry that allows all traffic * Introduced `nodeAffinity` config in all the charts for enhanced control over pod scheduling (thanks @michalrom089!) * Introduced `resources`, `nodeSelector` and `tolerations` configs in the `linkerd-multicluster-link` chart for enhanced control over the service mirror deployment (thanks @utay!) * Fixed formatting of the `helm.sh/chart` label in CRDs * Updated container base images from buster to bullseye * Added support for spaces in the `config.linkerd.io/opaque-ports` annotation ## edge-22.3.5 This edge release introduces new policy CRDs that allow for more generalized authorization policies. The `AuthorizationPolicy` CRD authorizes clients that satisfy all the required authentications to communicate with the Linkerd `Server` that it targets. Required authentications are specified through the new `MeshTLSAuthentication` and `NetworkAuthentication` CRDs. A `MeshTLSAuthentication` defines a list of authenticated client IDs, specified directly by proxy identity strings or by referencing resources such as `ServiceAccount`s. A `NetworkAuthentication` defines a list of client networks that will be authenticated. Additionally, to support the new CRDs, policy-related labels have been changed to better categorize policy metrics. A `srv_kind` label has been introduced which splits the current `srv_name` value (formatted as `kind:name`) into separate labels. The `saz_name` label has been removed and is replaced by the new `authz_kind` and `authz_name` labels. * Introduced the `srv_kind` label, splitting the value of the current `srv_name` label * Removed the `saz_name` label and replaced it with the new `authz_kind` and `authz_name` labels * Fixed an issue in the destination controller where an update would not be sent after an endpoint was discovered for a currently empty service * Introduced the following custom resource types to support generalized authorization policies: `AuthorizationPolicy`, `MeshTLSAuthentication`, `NetworkAuthentication` * Deprecated the `--proxy-version` flag (thanks @importhuman!) * Updated linkerd-viz to use the new policy CRDs ## edge-22.3.4 * Disabled pprof endpoints on Linkerd control plane components by default * Fixed an issue where mirror service endpoints of headless services were always ready regardless of gateway liveness * Added server-side validation for ServerAuthorization resources * Fixed an "origin not allowed" issue when using the latest Grafana with the Linkerd Viz extension ## edge-22.3.3 This edge release ensures that in multicluster installations, mirror service endpoints have their readiness tied to gateway liveness. When the gateway for a target cluster is not alive, the endpoints that point to it on a source cluster will properly indicate that they are not ready.
* Fixed tap controller logging errors that were susceptible to log forgery by ensuring special characters are escaped * Fixed an issue where mirror service endpoints were always ready regardless of gateway liveness * Removed the unused `namespace` entry in the `linkerd-control-plane` chart ## edge-22.3.2 This edge release includes a few fixes and quality-of-life improvements. An issue has been fixed in the proxy allowing HTTP Upgrade requests to work through multi-cluster gateways, and the init container's resource limits and requests have been revised. Additionally, more Go linters have been enabled and improvements have been made to the devcontainer. * Changed `linkerd-init` resource (CPU/memory) limits and requests to ensure by default the init container does not break a pod's `Guaranteed` QoS class * Added a new check condition to skip pods whose status is `NodeShutdown` during validation as they will not have a proxy container * Fixed an issue that would prevent proxies from sending HTTP Upgrade requests (used in websockets) through multi-cluster gateways ## edge-22.3.1 This edge release includes updates to dependencies, CI, and Rust 1.59.0. It also includes changes to the `linkerd-jaeger` chart to ensure that namespace labels are preserved and adds support for `imagePullSecrets`, along with improvements to the multicluster and policy functionality. * Added a note to the `multicluster link` command to clarify that the link is one-directional * Introduced `imagePullSecrets` to the Jaeger Helm chart * Updated Rust to v1.59.0 * Fixed a bug where labels could be overwritten in the `linkerd-jaeger` chart * Fixed broken mirrored headless services after `repairEndpoints` runs * Updated the `Server` CRD to handle an empty `PodSelector` ## edge-22.2.4 This edge release continues to address several security-related lints and ensures they are checked by CI. * Added a `linkerd check` warning for clusters that cannot verify their `clusterNetworks` due to Nodes missing the `podCIDR` field * Changed the `Server` CRD to allow having an empty `PodSelector` * Modified `linkerd inject` to only support `https` URLs to mitigate security risks * Fixed a potential goroutine leak in the port forwarding used by several CLI commands and control plane components * Fixed timeouts in the policy validator which could lead to failures if `failurePolicy` was set to `Fail` ## edge-22.2.3 This edge release fixes some `Instant`-related proxy panics that occur on Amazon Linux. It also includes many behind-the-scenes improvements to the project's CI and linting. * Removed the `--controller-image-version` install flag to simplify the way that image versions are handled. The controller image version can be set using the `--set linkerdVersion` flag or Helm value * Lowercased logs and removed redundant lines from the Linkerd2 proxy init container * Prevented the proxy from logging spurious errors when its pod does not define any container ports * Added workarounds to reduce the likelihood of `Instant`-related proxy panics that occur on Amazon Linux ## edge-22.2.2 This edge release updates the jaeger extension to be available on ARM architectures and applies some security-oriented amendments.
* Upgraded jaeger and the opentelemetry-collector to their latest versions, which now support ARM architectures * Fixed `linkerd multicluster check`, which was reporting false warnings * Started enforcing TLS v1.2 as a minimum in the webhook servers * Had the identity controller emit SHA256 certificate fingerprints in its logs/events, instead of MD5 ## edge-22.2.1 This edge release removes the `disableIdentity` configuration now that the proxy no longer supports running without identity. * Added a `privileged` configuration to linkerd-cni, which is required by some environments * Fixed an issue where the TLS credentials used by the policy validator were not updated when the credentials were rotated * Removed the `disableIdentity` configuration now that the proxy no longer supports running without identity * Fixed an issue where `linkerd jaeger check` would needlessly fail for BYO Jaeger or collector installations * Fixed a Helm HA installation race condition introduced by the stoppage of namespace creation ## edge-22.1.5 This edge release adds support for per-request access logging of inbound HTTP requests in Linkerd. A new annotation, `config.linkerd.io/access-log`, has been added; it configures the proxies to emit access logs to stderr. `apache` and `json` are the supported configuration options, emitting access logs in Apache Common Log Format and JSON, respectively. Special thanks to @tustvold for all the initial work around this! * Updated the injector to support the new `config.linkerd.io/access-log` annotation * Added a new `LINKERD2_PROXY_ACCESS_LOG` proxy environment variable to configure the access log format (thanks @tustvold) * Updated the service mirror controller to emit relevant events when mirroring is skipped for a service * Updated various dependencies across the project (thanks @dependabot) ## edge-22.1.4 This edge release features a new configuration annotation, support for externally hosted Grafana instances, and other improvements in the CLI, dashboard and Helm charts. To learn more about using an external Grafana instance with Linkerd, you can refer to our [docs](https://github.com/linkerd/website/blob/0c3c5cd5ae329cd7dbcca18534f3bc8ec7d57859/linkerd.io/content/2.12/tasks/grafana.md). * Added a new annotation to configure skipping subnets in the init container (`config.linkerd.io/skip-subnets`). This configuration option is ideal for Docker-in-Docker (dind) workloads (thanks @michaellzc!) * Added support in the dashboard for externally hosted Grafana instances (thanks @jackgill!) * Introduced a resources block in the `linkerd-jaeger` Helm chart (thanks @yuriydzobak!) * Introduced a parametrized datasource (`DS_PROMETHEUS`) in all Grafana dashboards. This allows pointing to the right Prometheus datasource when importing a dashboard * Introduced a consistent `--ignore-cluster` flag in the CLI for the base installation and extensions; manifests will now be rendered even if there is an existing installation in the current Kubernetes context (thanks @krzysztofdrys!) * Updated the service mirror controller to skip mirroring services whose namespaces do not yet exist in the source cluster; previously, the service mirror would create the namespace itself ## edge-22.1.3 This release removes the Grafana component from the linkerd-viz extension. Users can now import Linkerd dashboards into Grafana from the [Linkerd org](https://grafana.com/orgs/linkerd).
Users can also follow the instructions in the [docs](https://github.com/linkerd/website/blob/f687a04ee43c90bd804b04af287bc80c9366db98/linkerd.io/content/2.12/tasks/grafana.md) to install a separate Grafana that can be integrated with the Linkerd Dashboard. * Stopped shipping the Grafana-based image in the linkerd-viz extension * Removed the `repair` sub-command from the CLI * Updated various dependencies across the project (thanks @dependabot) ## edge-22.1.2 This release sets the version of the extension Helm charts to 30.0.0-edge to ensure that previous versions of these charts can be upgraded properly. * Reset the extension Helm chart versions to 30.0.0-edge * Pinned the multicluster extension's pause container version to 3.2 so that it will work on Arm architectures * Created a unique PSP `RoleBinding` for each multicluster link to prevent conflicts when PSP is enabled ## edge-22.1.1 This release adds support for using the cert-manager CA Injector to configure Linkerd's webhooks. * Fixed a rare issue when a Service's opaque ports annotation does not match that of the pods in the service * Disallowed privilege escalation in control plane containers (thanks @kichristensen!) * Updated the multicluster extension's service mirror controller to make mirror services empty when the exported service is empty * Added support for injecting Webhook CA bundles with the cert-manager CA Injector (thanks @bdun1013!) ## edge-21.12.4 This release adds support for custom HTTP methods in the viz stats (i.e., CLI and dashboard). It also includes various smaller improvements. * Added support for custom HTTP methods in the `linkerd-viz` stats * Updated the health checker to pull trust roots from the `linkerd-identity-trust-roots` configmap to support cases where they are generated externally (thanks @wim-de-groot) * Removed the unnecessary `installNamespace` bool flag from the `linkerd-control-plane` chart (thanks @mikutas) * Updated the `install` command to error if the container runtime check fails * Updated various dependencies across the project (thanks @dependabot) ## edge-21.12.3 This edge release contains a few improvements to the CLI commands and a major change around Helm charts. * **Breaking change** The `linkerd2` chart has been deprecated in favor of the `linkerd-crds` and `linkerd-control-plane` charts. The former takes care of installing all the required CRDs and the latter everything else. Of note is that, as per Helm best practice, we're no longer creating the linkerd namespace. Users need to do that manually, or have the Helm tool do it explicitly. So the install procedure would look something like this: ```bash helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds helm install linkerd-control-plane -n linkerd \ --set-file identityTrustAnchorsPEM=ca.crt \ --set-file identity.issuer.tls.crtPEM=issuer.crt \ --set-file identity.issuer.tls.keyPEM=issuer.key \ linkerd/linkerd-control-plane ``` In order to upgrade, please delete your previously installed `linkerd2` chart and install the new charts as explained above. Although the charts for the main extensions (viz, multicluster, jaeger, linkerd2-cni) were not deprecated, they also stopped creating their namespace and users are required to uninstall and reinstall them, e.g.: ```bash helm install linkerd-viz -n linkerd-viz --create-namespace linkerd/linkerd-viz ``` * Added a new `--obfuscate` flag to `linkerd diagnostics proxy-metrics` to obfuscate potentially private information in the output (thanks @ahmedalhulaibi!)
* Fixed formatting of the recommended value for `--set clusterNetworks` in the `linkerd check` output when that parameter doesn't contain all the node podCIDRs (thanks @ElvinEfendi!) * Skipped evicted pods in `linkerd viz check` and `linkerd jaeger check`, to avoid the checks failing unnecessarily * Removed some no-longer-used environment variables from the proxy's manifest ## edge-21.12.2 This edge release removes the default SMI functionality that is included in installations now that the linkerd-smi extension provides these resources. It also relaxes `proxy-init`'s `privileged` value to only be set to `true` when needed by certain installation configurations. Along with some bug fixes, the repository's issue and feature request templates have been updated to forms; check them out when opening a [new issue](https://github.com/linkerd/linkerd2/issues/new/choose) (thanks @mikutas!). * Removed SMI functionality in the default Linkerd installation; this is now part of the linkerd-smi extension * Fixed autocompletion of the `--context` flag (thanks @mikutas!) * Added support for conditionally setting `proxy-init`'s `privileged: true` only when needed (thanks @alex-berger!) * Added support for controlling opaque ports through the Server resource * Fixed an issue where `linkerd check` would compare proxy versions of uninjected pods, leading to incorrect errors * Relaxed extension checks so that the CLI still works when not all extension proxies are healthy * Added the `--default-inbound-policy` flag to `linkerd inject` for setting a non-default inbound policy on injected workloads (thanks @ahmedalhulaibi!) ## edge-21.12.1 This edge release enables `EndpointSlices` in the destination controller by default, which unblocks any functionality that is specific to `EndpointSlices`, such as topology-aware hints. It also contains a couple of internal cleanups and upgrades by our external contributors! * Added a new check to `linkerd check` verifying that the nodes aren't running the old Docker container runtime and attempting to run proxy-init as root at the same time, which doesn't work (thanks @alex-berger!) * Enabled `EndpointSlices` in the destination controller by default * Removed extraneous empty lines and fixed the formatting of warnings in the output of `linkerd check -o short` * Upgraded to Go 1.17 (thanks @Juneezee!) * Removed old protobuf definitions from the codebase (thanks @krzysztofdrys!) ## edge-21.11.4 This edge release introduces a change in the destination service to honor opaque ports set in the `proxyProtocol` field of `Server` resources. This change makes it possible to set opaque ports directly in `Server` resources without needing the opaque ports annotation on pods. The release also features a number of fixes and improvements; a big thank you to our external contributors for their continued support and involvement. * Added support in the destination service for honoring opaque ports marked in `Server` resources; ports can now be marked as opaque directly in `Server` resources through the `proxyProtocol` field * Added support to override the default behavior and run `proxyInit` as root (thanks @alex-berger!) * Added the multicluster `Link` CRD to the code generation script; consumers of the multicluster API can now use a typed API to interact with multicluster links (thanks @zaharidichev!) * Added a multicluster integration test for exported headless services (thanks @importhuman!) * Deprecated the `v1alpha1` version of the policy APIs * Removed a newline from the `linkerd check` header text (thanks @mikutas!)
* Replaced the deprecated `beta.kubernetes.io/os` label with `kubernetes.io/os` ## edge-21.11.3 This edge release fixes a compatibility issue that prevented the policy controller from starting in some Kubernetes distributions. This release also includes a new High Availability mode for the gateway component in the multicluster extension. Various dependencies across the CNI plugin, Policy Controller, and dashboard have also been upgraded. Error logging in the proxy has also been improved for cases where the proxy fails to accept a connection due to a system error. * Updated the policy controller to use `openssl` instead of `rustls` to fix compatibility issues with some Kubernetes distributions * Added an HA mode to the multicluster gateway that adds a PodDisruptionBudget, additional replicas, and anti-affinity to the deployment (thanks @Crevil) * Improved TCP server error messages in the proxy * Fixed broken Grafana links in the dashboard * Upgraded the CNI pkg to v0.8.1 in `linkerd-cni` to support the latest CNI versions * Updated various dependencies in the dashboard and policy controller (thanks @dependabot) ## edge-21.11.2 This edge release introduces a new Services page in the web dashboard that shows live calls and route metrics for meshed services. Additionally, the `proxy-init` container is no longer forced to run as root. Lastly, the proxy can now retry requests without a `content-length` header, permitting requests emitted by grpc-go to be retried. * Removed hardcoding that enforced the `proxy-init` container to run as root * Added support for retrying requests without a `content-length` header * Changed service discovery logs from `TRACE` to `DEBUG` * Fixed an issue with the policy controller where it assumed `linkerd` was the name of the control plane namespace, leading to issues with installations that use a non-default namespace name * Added support for ephemeral storage requests and limits configured either through the CLI or annotations (thanks @michaellzc!) * Deprecated support for topology keys and added support for topology-aware hints * Added `logFormat` and `logLevel` configuration values for the `proxy-init` container (thanks @gusfcarvalho!) * Added services to the web dashboard (thanks @krzysztofdrys!) * Updated example commands in the web dashboard to use the `viz` subcommand when necessary (thanks @mikutas!) * Removed references to the `linkerd-sp-validator` service account in the `linkerd-psp` role binding (thanks @multimac!) ## edge-21.11.1 In this edge, we're very excited to introduce Service Account Token Volume Projections, used to set up the pods' identities. These tokens are bound specifically to this use case and are rotated daily, replacing the default tokens injected by Kubernetes, which are overly permissive. Note that this edge release updates the minimum supported Kubernetes version to 1.20. * Updated the minimum supported Kubernetes version to 1.20 * Used Service Account Token Volume Projections to set up the pods' identities; now injection also works on pods with `automountServiceAccountToken` set to `false` * Updated proxy-init's Alpine base image to fix some CVEs (not affecting Linkerd) * Updated the Prometheus image in linkerd-viz to 2.30.3 * Changed the proxy and policy controller to use jemalloc on x86_64 gnu/linux to reduce memory usage * Fixed output for `linkerd check -o json` * Added the ability to configure ephemeral-storage resources for each component (thanks @michaellzc!) ## edge-21.10.3 This edge release fixes a bug in the proxy that could cause it to be killed in certain situations.
## edge-21.10.3

This edge release fixes a bug in the proxy that could cause it to be killed in certain situations. It also uses a more relaxed policy for the identity controller that allows it to work in environments where health checks come from outside of the pod network.

* Skipped Prometheus scrapes on policy's `admin` server so that it no longer incorrectly appears as "DOWN" in the Prometheus UI
* Updated the identity controller to use the 'all-unauthenticated' policy so that it can accept health checks from the node IPs
* Fixed an infinite loop in the proxy that could cause it to be killed
* Added tests for the multicluster install command (thanks @crevil!)
* Fixed a bug where `authz` CLI commands would fail when policy resources had an empty selector

## edge-21.10.2

This edge release fixes `linkerd check` and the Helm charts to explicitly indicate that the minimum Kubernetes version is 1.17.0. Prior to this change, there was no validation or enforcement from `linkerd check` or Helm to meet this minimum requirement. This edge also improves `check` functionality for extensions by adding the `-oshort` flag, and prevents duplicate policy resources from being created for linked multicluster services.

* Moved the service mirror policy into the multicluster base chart
* Added the `-oshort` flag for extension `check` commands
* Updated the minimum Kubernetes version to 1.17.0
* Removed the unused `crtExpiry` template parameter from the Helm charts
* Fixed the multicluster gateway name for ServerAuthorization
* Added `priorityClassName` to the Helm charts to configure control plane components

## edge-21.10.1

This release includes some fixes to `linkerd check`, along with a number of dependency updates across the dashboard, Go components, and others. On the proxy side, support for `TLSv1.2` has been dropped (only `TLSv1.3` cipher suites will be used), and the `h2` crate has been updated to support HTTP/2 messages with larger header values.

* Updated `linkerd check` to avoid multiline errors with retryable checks
* Fixed an incorrect opaque ports warning in `linkerd check --proxy` with un-named ports
* Bumped proxy-init to `1.4.1`, which adds support for `--log-level` and `--log-format` flags (thanks @gusfcarvalho)
* Removed the use of `TLSv1.2` in the proxy
* Updated the `h2` crate in the proxy to support HTTP/2 messages with larger header values.
* Updated various dependencies across the dashboard, policy-controller, etc (thanks @dependabot!)

## stable-2.11.0

This release introduces access control policies. Default policies may be configured at the cluster and workload levels, and fine-grained policies may be configured via the new `policy.linkerd.io/v1beta1` CRDs: `Server` and `ServerAuthorization`. These resources may be created to define how individual ports accept connections, and the `Server` resource will be a building block for future features that configure inbound proxy behavior. Furthermore, `ServiceProfile` retry configurations can now apply to requests with bodies, which unlocks retry behavior for gRPC services.

**Upgrade notes**: Please see the [upgrade instructions][upgrade-2110].
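As a hedged sketch of what the retry change enables, a gRPC route could be marked retryable in a `ServiceProfile` along these lines; the service name, route, and budget values are hypothetical:

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: books.example.svc.cluster.local        # hypothetical service
  namespace: example
spec:
  routes:
  - name: POST /books.v1.BookService/Create    # hypothetical gRPC method
    condition:
      method: POST
      pathRegex: /books\.v1\.BookService/Create
    isRetryable: true      # retries can now buffer request bodies (up to 64KB)
  retryBudget:
    retryRatio: 0.2
    minRetriesPerSecond: 10
    ttl: 10s
```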
* Proxy
  * Reduced CPU & memory usage by up to 30% in some load tests
  * Updated retries to support requests with bodies up to 64KB. ServiceProfiles may now configure retries for gRPC services
  * The proxy's container image is now based on `gcr.io/distroless/cc` for a minimal OS footprint that should not trigger unnecessary alerts in security scanners
  * Added the `inbound_http_errors_total` and `outbound_http_errors_total` metrics to reflect errors that caused the proxy to respond with errors
  * Added an `l5d-proxy-error` header that is included on responses on trusted connections for debugging purposes
  * Added an `l5d-client-id` header on mutually-authenticated inbound requests so that applications can discover the client's identity
  * Added metrics to reflect TCP and HTTP authorization decisions
  * Added `srv_name` and `saz_name` labels to inbound HTTP metrics
  * Fixed an issue that could cause the proxy to continually reconnect to defunct service endpoints
  * Dropped support for non-HTTP outbound services when `linkerd.io/inject: ingress` is used
  * Instrumented fuzz testing to help guard against unexpected panics
* Control Plane
  * Added a new `policy-controller` container to the `linkerd-destination` pod, the first control plane component implemented in Rust
  * Added a new admission controller to validate that multiple `Server` resources do not reference the same port
  * Added a `linkerd-identity-trust-roots` ConfigMap which configures the trust root bundle for all pods in the core control plane namespace
  * Eliminated the `linkerd-controller` deployment so that Linkerd's core control plane now consists of only 3 deployments
  * Updated the proxy injector to configure the `proxy-init` container with `NET_RAW` and `NET_ADMIN` capabilities so that the container does not fail when the pod drops these capabilities
* CLI
  * Enhanced `linkerd completion` to expand Kubernetes resources from the current kubectl context
  * Added an `authz` subcommand to display the authorization policies that impact a workload
  * Added a _short_ output mode for `linkerd check` that only prints failed checks
  * Added support for `ReplicaSets` to `linkerd stat` so that pods created by Argo `Rollout` resources can be inspected
* Helm: please see the [upgrade instructions][upgrade-2110].
* Extensions
  * Introduced a new (optional) SMI extension responsible for reading `specs.smi-spec.io` resources and converting them to Linkerd resources
  * In `stable-2.12`, this extension will be required to use `TrafficSplit` resources with Linkerd
  * Added an extensions page to the Linkerd Web UI
* Viz
  * Added `Server` and `ServerAuthorization` resources for all ports
  * Added JSON log formatting
* Jaeger
  * Replaced the OpenCensus collector with the OpenTelemetry collector
* Multicluster
  * Added experimental support for `StatefulSet` workloads
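To complement the `Server` and `ServerAuthorization` resources described in this release's introduction, an authorization that only admits meshed, mTLS-authenticated clients to a server might look like this rough sketch; the namespace, server name, and client identity are hypothetical:

```yaml
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: example                    # hypothetical namespace
  name: web-http-mtls
spec:
  server:
    name: web-http                      # must match a Server in the same namespace
  client:
    meshTLS:
      identities:
      - "web.example.serviceaccount.identity.linkerd.cluster.local"   # hypothetical identity
```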
This release includes changes from a massive list of contributors. A special thank-you to everyone who helped make this release possible:

Gustavo Fernandes de Carvalho @gusfcarvalho
Oleg Vorobev @olegy2008
Bart Peeters @bartpeeters
Stepan Rabotkin @EpicStep
LiuDui @xichengliudui
Andrew Hemming @drewhemm
Ujjwal Goyal @importhuman
Knut Götz @knutgoetz
Sanni Michael @sannimichaelse
Brandon Sorgdrager @bsord
Gerald Pape @ubergesundheit
Alexey Kostin @rumanzo
rdileep13 @rdileep13
Takumi Sue @mikutas
Akshit Grover @akshitgrover
Sanskar Jaiswal @aryan9600
Aleksandr Tarasov @aatarasoff
Taylor @skinn
Miguel Ángel Pastor Olivar @migue
wangchenglong01 @wangchenglong01
Josh Soref @jsoref
Carol Chen @kipply
Peter Smit @psmit
Tarvi Pillessaar @tarvip
James Roper @jroper
Dominik Münch @muenchdo
Szymon Gibała @Szymongib
Mitch Hulscher @mhulscher

[upgrade-2110]: https://linkerd.io/2/tasks/upgrade/#upgrade-notice-stable-2110

## edge-21.9.5

This edge is a release candidate for `stable-2.11.0`, containing a couple of improvements to `linkerd check`, some final tweaks before the stable release, and a couple of contributions from the community.

* Stopped `linkerd check --proxy` from failing on pods that are in Shutdown status (thanks @olegy2008!)
* Lowered a failed check on misconfigured opaque ports annotations from an error to a warning, given that a misconfigured annotation doesn't imply the installation is broken
* Added log level and format settings to all the viz components (thanks @gusfcarvalho!)
* Removed a label from the multicluster gateway and service-mirror pods to allow them to be properly rolled out when upgrading

## edge-21.9.4

This edge is a release candidate for `stable-2.11.0`! It introduces a new `linkerd viz auth` command which shows metrics for server authorizations broken down by server for a given resource. It also shows the rate of unauthorized requests to each server. This is helpful for seeing a breakdown of which authorizations are being used and what proportion of traffic is being rejected. It also fixes an issue in the proxy where HTTP load balancers could continue trying to establish connections to endpoints that were removed from service discovery. In addition, it improves the proxy's error handling so that it can signal to an inbound proxy when its peers' outbound connections should be torn down.

* Changed destination watch updates from `info` to `debug` to reduce the amount of logs (thanks @bartpeeters!)
* Added the `linkerd viz auth` command which shows metrics for server authorizations broken down by server for a given resource
* Fixed an issue where the policy controller's validating admission webhook attempted to validate ServerAuthorizations when it should only be validating Servers
* Removed the `omitWebhookSideEffects` setting now that we no longer support Kubernetes 1.12
* Improved proxy error handling so that it can signal to its peers that their outbound connections should be torn down
* Fixed an issue where after upgrades there would be a mismatch in certs used by the policy controller validator; the destination pod is now restarted similarly to the injector
* Fixed a field reference in the Helm template to properly refer to `profileValidator.namespaceSelector`
* Updated policy CRD versions to `v1beta1`
* Added support for `stat`'s `-o json` option to Server resources
* Fixed an issue in the proxy where HTTP load balancers could continue trying to establish connections to endpoints that were removed from service discovery
* Added a JSON output format to the `linkerd viz authz` command

## edge-21.9.3

This edge is a release candidate for `stable-2.11.0`!
It features a new `linkerd authz` CLI command to list servers and authorizations for a workload, as well as support for policy resources in `linkerd viz stat`. Furthermore, this edge release adds support for JSON log formatting, enables TLS detection on port 443 (previously marked as opaque), and further improves policy features.

* Removed port 443 from the default list of opaque ports; this allows the proxy to report metadata (such as the connection's SNI value) on TLS connections to port 443
* Added default policies for core Linkerd extensions
* Added support for JSON log formatting to the policy controller
* Added support for new policy resources to the `viz stat` command
* Added a default policy annotation to `linkerd-identity`
* Added a new `linkerd authz` command to the CLI to list all server and authorization resources that apply to a specific resource
* Added TLS labels (including client identity) to authorization metrics in the proxy
* Changed the opaque ports CLI check to consider service and pod ports when checking annotation values; previously, the check would naively issue warnings when the service annotation values were different from the pod it selected
* Changed how the proxy forwards inbound connections to a pod locally; the proxy now targets the original address instead of a port bound on localhost to protect services that are only bound on loopback from being exposed to other pods
* Improved memory utilization in the proxy, especially for TCP forwarding, where the memory allocated was reduced from 128KB to 16KB
* Updated the inbound policy system for the proxies to always allow connections from localhost
* Fixed an issue where the policy controller would not detect changes to the `proxyProtocol` field of `Server` resources
* Fixed an issue where the policy admission controller would log a `WARN` message when deserializing `Server` structs

## edge-21.9.2

This edge release gets us closer to 2.11 by further polishing the policy feature. Also, the proxy received a noticeable resource consumption improvement.

* Stopped creating the default authorizations for the kubelet
* Added missing ports to the destination controller's default list of ports, to allow the sp-validator to start properly when using a default-deny policy
* Set the destination and proxy-injector pods' default policy to `all-unauthenticated` to allow the webhooks to be called from the kube-api when using a default-deny policy
* Extended inbound policies to cover the proxy's admin server
* Improved the proxy's error handling so that HTTP metrics include 5XX responses for common errors
* Fixed the proxy's outbound tap to include route labels when service profiles are configured
* Enabled link-time optimizations in the Rust components (proxy and policy controller), resulting in noticeable RSS and CPU consumption improvements
* Made the admin servers in the control plane components properly shut down (thanks @EpicStep!)
* Updated linkerd-await, suppressing the error emitted when linkerd-await was disabled

## edge-21.9.1

This release includes various improvements and feature additions to the policy feature, such as a new validating webhook for policy resources. It also includes changes to the proxy, such as terminating TCP connections when an authorization is revoked and improvements to the proxy's authorization metrics. In addition, the proxy injector has been updated to set the right `opaque-ports` annotation on services with default opaque ports.
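For reference, the annotation-based way of marking a service's port as opaque looks roughly like the following; the service name, namespace, and port are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: example                          # hypothetical namespace
  annotations:
    config.linkerd.io/opaque-ports: "3306"    # comma-separated list of ports
spec:
  selector:
    app: mysql                                # hypothetical workload label
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306
```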
* Added a new validating admission controller to validate the policy resources
* Updated proxy-init to remove a rule that caused packets from the proxy destined for localhost addresses other than 127.0.0.1 to be sent to the inbound proxy
* Updated inbound policy enforcement to interrupt TCP forwarding if a previously established authorization is revoked
* Added new proxy metrics to expose authorization decisions
* Updated inbound TCP metrics to only include a `srv_name` label
* Updated the proxy to export route-oriented metrics only when a ServiceProfile is enabled
* Updated the proxy's release build configuration to improve CPU and memory utilization
* Added DNS name validation to the `proxy-identity` binary, which creates the read-only private key required by the proxy (thanks @yorkijr!)
* Updated the identity controller's default policy to be `cluster-unauthenticated`
* Updated the proxy injector to include the correct default ports as opaque with services
* Deprecated the usage of `viz stat ts` and added a warning about the SMI extension
* Updated various dependencies across the dashboard and policy-controller (thanks @dependabot!)

## edge-21.8.4

This edge release continues to build on the policy feature by adding support for cluster-scoped default policies and exposing policy labels on various Prometheus metrics. The proxy has been updated to return HTTP-level authorization errors at the time that the request is processed, instead of when the connection is established. In addition, the proxy-injector has been updated to set the `opaque-ports` annotation on a workload to make sure that controllers can discover how the workload was configured. Also, the `sleep` binary has been added to the proxy image in order to restore the functionality required for `waitBeforeExitSeconds` to work.

* Added a `default-inbound-policy` annotation to the proxy-injector
* Updated the proxy-injector to always add the `opaque-ports` annotation
* Added the `sleep` binary to the proxy image
* Updated inbound traffic metrics to include server and authorization labels
* Updated the policy-controller to honor pod-level port annotations when a `Server` resource definition does not match the ports defined for the workload
* Updated the point at which the proxy returns HTTP-level authorization errors
* Exposed permit and policy labels on HTTP metrics
* Added support for cluster-scoped default policies
* Dropped the `nonroot` variant from the policy-controller's distroless base image to avoid erroring in some environments

## edge-21.8.3

This release adds support for dynamic inbound policies. The proxy now discovers policies from the policy-controller API for all application ports documented in a pod spec. Rejected connections are logged. Policies are not yet reflected in the proxy's metrics. These policies also allow the proxy to skip protocol detection when a server is explicitly annotated as HTTP/2 or when the server is documented to be opaque or application-terminated TLS.

* Added a new section to linkerd-viz's dashboard that lists installed extensions (thanks @sannimichaelse!)
* Added the `enableHeadlessServices` Helm flag to the `linkerd multicluster link` command for enabling headless service mirroring (thanks @knutgoetz!)
* Removed some unused and duplicate constants in the codebase (thanks @xichengliudui!)
* Added support for exposing service metadata from exported to mirrored services in multicluster installations (thanks @importhuman!)
* Fixed an issue where the policy controller's liveness checks would fail after the controller was disconnected but had successfully resumed its watches
* Fixed the `linkerd-policy` service selector to properly select `destination` control plane components
* Added additional environment variables to the proxy container to allow support for dynamic policy configuration

## edge-21.8.2

This edge release continues the policy work by adding a new controller, written in Rust, to expose a discovery API for inbound server policies. Apart from that, this release includes a number of changes from external contributors; the `linkerd-jaeger` Helm chart now supports passing arguments to the Jaeger container through the chart's values file. A number of unused functions and variables have also been removed to improve the quality of the codebase. Finally, this release also comes with changes to the proxy's outbound behavior, a new extensions page on the dashboard, and support for querying service metrics using the `authority` label in `linkerd viz stat`.

* Introduced the new `linkerd-policy-controller`; the new controller is written in Rust and implements discovery APIs for inbound server policies; the container has been added to the `linkerd-destination` pod
* Updated the `linkerd-jaeger` Helm chart to support passing arguments to the Jaeger container (thanks @bsord!)
* Added support for querying service metrics using the `authority` label in `linkerd viz stat`
* Improved code hygiene by removing unused constants and functions throughout the codebase (thanks @xichengliudui!)
* Added a new extensions page to the dashboard to list all known built-in and third-party extensions that can be used with Linkerd
* Changed outbound behavior in the proxy to tear down server-side connections when the remote proxy returns responses that indicate proxy errors; the connection in this case will be reset to allow clients to connect to a new endpoint

## edge-21.8.1

This release includes initial changes related to the addition of authorization policies in Linkerd. It includes adding the new `policy.linkerd.io` CRDs to the core install. This also includes numerous dependency updates in the dashboard and the proxy.

* Added the `servers.policy.linkerd.io` and `serverauthorizations.policy.linkerd.io` CRDs into the default Linkerd installation to support configuration and discovery of inbound policies
* Modified the proxy to support upcoming policy features
* Updated several dashboard dependencies to latest versions
* Updated several proxy dependencies to latest versions

## edge-21.7.5

This release updates Linkerd to store the identity trust root in a ConfigMap to make it easier to manage and rotate the trust root. The release also lays the groundwork for StatefulSet support in the multicluster extension and removes deprecated PSP resources by default.

* Added a `linkerd-identity-trust-roots` ConfigMap which contains the configured trust root bundle
* Introduced support for StatefulSets across multicluster (disabled by default)
* Stopped installing PSP resources by default since these are deprecated as of Kubernetes v1.21

## edge-21.7.4

This release continues to focus on dependency updates. It also adds the `l5d-proxy-error` informational header to distinguish proxy-generated errors from application-generated errors.

* Updated several project dependencies
* Added a new `l5d-proxy-error` header on responses that allows proxy-generated error responses to be distinguished from application-generated error responses.
* Removed support for configuring HTTP/2 keepalives via the proxy; configuring this setting would sometimes cause conflicts with Go gRPC servers and clients
* Added a new `target_addr` label to `*_tcp_accept_errors` metrics to improve diagnostics, especially for TLS detection timeouts

## edge-21.7.3

This edge release introduces several changes around metrics. ReplicaSets are now a supported resource and metrics can be associated with them. A new metric has been added which counts proxy errors encountered before a protocol can be detected. Finally, the request errors metric has been split into separate inbound and outbound directions.

* Fixed the `check --pre` command printing its usage when it fails due to being unable to connect to Kubernetes (thanks @rdileep13!)
* Updated the default skip and opaque ports to match those listed in the [documentation](https://linkerd.io/2.10/features/protocol-detection/#configuring-protocol-detection)
* Added the `LINKERD2_PROXY_INBOUND_PORTS` environment variable during proxy injection, which will be used by ongoing policy changes
* Added client-go cache size metrics to the `diagnostics controller-metrics` command
* Added validation that the certificate provided by an external issuer is a CA (thanks @rumanzo!)
* Added metrics support for ReplicaSets
* Replaced the `request_errors_total` metric with two new metrics: `inbound_http_errors_total` and `outbound_http_errors_total`
* Introduced the `inbound_tcp_accept_errors_total` and `outbound_tcp_accept_errors_total` metrics, which count proxy errors encountered before a protocol can be detected

## edge-21.7.2

This edge release focuses on dependency updates and has a couple of functional changes. First, the Dockerfile used to build the proxy has been updated to use the default `distroless` image, rather than the non-root variant. This change is safe because the proxy already runs as non-root within the container. Second, the `ignoreInboundPorts` parameter has been added to the linkerd2-cni Helm chart in order to enable tap support.

* Updated several project dependencies
* Updated the Dockerfile-proxy to use the default distroless image, because the proxy already runs as non-root within the container
* Added the `ignoreInboundPorts` parameter to the linkerd2-cni plugin Helm chart

## edge-21.7.1

This edge release adds support for emitting Kubernetes events in the identity controller when issuing leaf certificates. The event includes the identity, expiry date, and a hash of the certificate. Additionally, this release contains many dependency updates for the control plane's components, and it includes a fix for an issue with the clusterNetworks healthcheck.

* Updated the identity controller to emit Kubernetes events when successfully issuing leaf certificates to injected pods.
* Fixed an issue in `linkerd check` where the clusterNetworks healthcheck would fail if the `podCIDR` field is omitted from a node's spec.
* Removed unnecessary controller port-forward logic from the `bin/web` script.

## edge-21.6.5

This release contains a few improvements from many contributors! Also, under the hood, the destination service has received updates in preparation for the upcoming support for StatefulSets across multicluster.

* Improved the `linkerd check --proxy` command to avoid hitting a timeout when dealing with large clusters
* Fixed the web component permissions in order to properly run the podCIDR check (thanks @aryan9600!)
* Avoided having the proxy-init container fail when the main container is configured to drop either the NET_RAW or NET_ADMIN capabilities (thanks @aryan9600!)
* Upgraded the proxy-init image to improve the output in "simulate" mode (thanks @liuerfire!) and to log to stdout instead of stderr (thanks @mo4islona!)
* Added test-coverage reports to PRs (thanks @akshitgrover!)

## edge-21.6.3

This release moves the Linkerd proxy to a more minimal Docker base image, adds a check for detecting certain network misconfigurations, and replaces the deprecated OpenCensus collector with the OpenTelemetry collector in the jaeger extension.

* Switched the Linkerd proxy's base Docker image from Debian to a minimal distroless base image (thanks @tskinn!)
* Added a check to verify that Linkerd's clusterNetworks settings match the cluster's pod CIDR networks (thanks @aryan9600!)
* Replaced the deprecated OpenCensus collector with the OpenTelemetry collector in the jaeger extension (thanks @aatarasoff!)

## edge-21.6.2

This release fixes a problem with the HTTP body buffering that was added to support gRPC retries. Now, only requests with a retry configuration are buffered (and only when their bodies are less than 64KB). Additionally, an issue has been fixed in the outbound ingress-mode proxy where forwarded HTTP clients could fail to detect when the target pod was deleted, causing connections to be retried forever. This only impacted traffic forwarded directly to pod IPs and not load balanced services. Finally, this release also includes some fixes in the CLI and dashboard.

* Added a new check that verifies if the opaque ports annotation is misconfigured on services or pods (thanks @migue!)
* Added support for resource-aware completion for core linkerd commands
* Fixed an issue where the `namespace` resource was erroneously being shown in the dashboard's topology graph
* Added uninstall command support for legacy extension installs
* Updated the proxy to only buffer request bodies when a request can be retried
* Updated the proxy to prevent buffering indefinitely on requests when endpoints are updated in ingress mode
* Fixed spelling mistakes across various files in the project (thanks @jsoref!)

## edge-21.6.1

This release adds support for retrying HTTP/2 requests with small (<64KB) message bodies, allowing the proxy to properly buffer message bodies when responses are classified as a failure. Documentation on how to configure retries can be found [here](https://linkerd.io/2.10/tasks/configuring-retries/). This release also modifies the proxy's identity subsystem to instantiate a client on-demand so client connections are not retained continually. Also included in this release are various bug fixes and improvements, as well as expanded support for resource-aware tab completion in the jaeger and multicluster CLI extensions.

* Added support for specifying a `gateway-port` flag for the `multicluster link` command (thanks @psmit!)
* Added support for Kubernetes resource-aware tab completion for `jaeger` and `multicluster` commands
* Fixed an issue where the `viz`, `jaeger` and `multicluster` extensions could not be installed on `PodSecurityPolicy`-enabled clusters
* Fixed an issue where `linkerd check --proxy` could incorrectly report out-of-date proxy versions caused by an incorrect regex (thanks @aryan9600!)
* Added support for the proxy to retry HTTP/2 requests with message bodies <= 64KB
* Modified the proxy's controller stack to create new client connections on-demand
* Fixed Viz's `uninstall` command to remove viz installations that used the legacy `linkerd.io/extension: linkerd-viz` label (thanks @jsoref!)
* Expanded the "linkerd-existence" health check to also check for the destination pod's readiness

## edge-21.5.3

This edge release contains various improvements to the Viz and Jaeger install charts, along with bug fixes in the CLI and destination. This release also adds Kubernetes-aware autocompletion to all viz commands and makes ServiceProfiles part of the default `viz install`. Finally, the proxy has been updated to continue supporting requests without `l5d-dst-override` in ingress-mode proxies, to no longer include query parameters in the OpenCensus trace spans, and to prevent timeouts with controller clients of components with more than one replica.

* Separated protocol hint setting from H2 upgrades in the destination profile response, ensuring the `hint.OpaqueTransport` field is still set when H2 upgrades are disabled
* Updated OpenCensus trace spans for HTTP requests to no longer include query parameters (thanks @aatarasoff!)
* Reverted [linkerd/linkerd2-proxy#992](https://github.com/linkerd/linkerd2-proxy/pull/992) to support requests without `l5d-dst-override` in ingress-mode proxies
* Fixed an issue in the proxy to prevent timeouts with controller clients of components with more than one replica
* Fixed `linkerd check --proxy` failure with pods that are part of Jobs
* Updated `viz install` to also include ServiceProfiles for its components. As a side effect, the `linkerd diagnostics install-sp` command has been removed
* Added support for Kubernetes resource-aware tab completion for all viz commands
* Updated destination to prefer `ServiceProfile.dstOverrides` over `TrafficSplit` when both are present for a service
* Added toggle flags for the `collector` and `jaeger` components in the jaeger extension (thanks @tarvip!)
* Added support for setting `nodeSelector` and toleration fields for components in the Viz extension (thanks @aatarasoff!)
* Fixed a templating issue in Viz, making the `podAnnotations` field work with Prometheus
* Updated the Golang version to 1.16.4
* Removed the unnecessary `--addon-overwrite` flag in `linkerd upgrade`

## edge-21.5.2

This edge release updates the proxy-init container to check whether the iptables rules have already been added, which prevents errors if the proxy-init container is restarted. Also, the `viz stat` command now has tab completion for Kubernetes resources, saving you precious keystrokes! Finally, the proxy has been updated with several fixes and improvements.

* Added instructions to `build.md` for using a locally built proxy (thanks @jroper!)
* Added support for Kubernetes resource-aware tab completion to the `viz stat` command
* Updated `proxy-init` to skip configuring the firewall if the rules already exist
* Fixed `viz uninstall` to delete all RBAC objects (thanks @aryan9600!)
* Improved diagnostics for rejected profile discovery
* Added the `l5d-client-id` header on mutually-authenticated inbound requests so that applications can discover the client's identity.
* Reduced proxy resource usage when there are no profiles
* Changed the admin server to assume all meshed connections are HTTP/2 and fail connections when that is not the case
* Updated the proxy to require the `l5d-dst-override` header on outbound requests when the proxy is in ingress-mode
* Removed support for TCP-forwarding in ingress-mode

## edge-21.5.1

This edge release adds support for versioned hint URLs in `linkerd check` and support for traffic splitting through ServiceProfiles, among other fixes and improvements. Additionally, more options have been added to the linkerd-multicluster and linkerd-jaeger Helm charts.

* Added support for traffic splitting through a ServiceProfile's `dstOverrides` field.
* Added a `nodePorts` option to the multicluster Helm chart (thanks @psmit!).
* Added `nodeSelector` and toleration options to the linkerd-jaeger Helm chart (thanks @aatarasoff!).
* Added versioned hint URLs to the CLI `check` command when encountering an error; each major CLI version will now point to that version's relevant section in the Linkerd troubleshooting page.
* Fixed an issue in the CLI `check` command where error messages for healthchecks that were being retried would be output repeatedly instead of just once.
* Fixed an issue in the proxy injector where a namespace annotated with opaque ports would overwrite all service annotations.
* Fixed a regression in the proxy that caused all logs to be output with ANSI control characters; logs are now output in plaintext by default.
* Simplified proxy internals in order to distinguish endpoint-forwarding logic from the handling of load balanced services.
* Simplified the ingress-mode outbound proxy by requiring the `l5d-dst-override` header and by failing non-HTTP communication. Proxies running in ingress-mode will not unexpectedly revert to insecure communication as a result.

## edge-21.4.5

This edge release adds a new output format, `short`, for `linkerd check` to show a summary of the check output. This release also includes various proxy bug fixes and improvements.

* Proxy
  * Fixed a task leak that would be triggered when clients disconnect from a service in failfast.
  * Improved admin server protocol detection so that error messages are more descriptive about the underlying problem.
  * Fixed panics found in fuzz testing. These panics were extremely unlikely to occur in practice and would require very specific configuration overrides to be triggered.
* CLI
  * Added support for a new `short` format for the `--output` flag of the `check` command to show a summary of check results

## edge-21.4.4

This edge release further consolidates the control plane by removing the linkerd-controller deployment and moving the sp-validator container into the destination deployment. Annotation inheritance has been added so that all Linkerd annotations on a namespace resource will be inherited by pods within that namespace. In addition, the `config.linkerd.io/proxy-await` annotation has been added, which enables the [linkerd-await](https://github.com/linkerd/linkerd-await) functionality by default, simplifying the implementation of the await behavior. Setting the annotation value to `disabled` will prevent this behavior (see the sketch below). Some of the `linkerd check` functionality has been updated. The command ensures that annotations and labels are properly located in the YAML and adds proxy checks for the control plane and extension pods. Finally, the nginx container has been removed from the Multicluster gateway pod, which will impact upgrades. Please see the note below.

**Upgrade note:** When the Multicluster extension is updated in both the source and target clusters, there won't be any downtime, because this change only affects the readiness probe. The multicluster links must be re-generated with the `linkerd mc link` command, and `linkerd mc gateways` will show the target cluster as not alive until `linkerd mc link` is re-run; however, that shouldn't affect existing endpoints pointing to the target cluster.
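As a rough sketch of the await annotation described above, a workload could opt out of the default behavior like this; the deployment name, labels, and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                                 # hypothetical workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        linkerd.io/inject: enabled
        config.linkerd.io/proxy-await: "disabled"   # default is enabled; disable to skip awaiting the proxy
    spec:
      containers:
      - name: app
        image: example/app:latest                   # hypothetical image
```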
* Added proxy checks for core control plane and extension pods
* Added support for awaiting proxy readiness using an annotation
* Added namespace annotation inheritance to pods
* Removed the linkerd-controller pod
* Moved the sp-validator container into the destination deployment
* Added a check verifying that labels and annotations are not mixed up (thanks @szymongib)
* Enabled support for extra initContainers in the linkerd-cni DaemonSet (thanks @mhulscher!)
* Removed the nginx container from the multicluster gateway pod
* Added an error message when there is nothing to uninstall

## stable-2.10.1

This stable release adds CLI support for Apple Silicon M1 chips and support for SMI's TrafficSplit `v1alpha2`. There are several proxy fixes: handling `FailedPrecondition` errors gracefully, inbound TLS detection from non-meshed workloads, and using the correct cached client when the proxy is in ingress mode. The logging infrastructure has also been improved to reduce memory pressure in high-connection environments. On the control-plane side, there have been several improvements to the destination service, such as support for Host IP lookups and ignoring pods in "Terminating" state. It also updates the proxy-injector to add the opaque ports annotation to pods if their namespace has it set. On the CLI side, `linkerd repair` has been updated to be aware of the control-plane version and suggest the relevant version to generate the right config. Various bugs have been fixed in `linkerd identity` and other commands.

**Upgrade notes**: Please refer to the [2.10 upgrade instructions](https://linkerd.io/2/tasks/upgrade/#upgrade-notice-stable-2100) if you are upgrading from `2.9.x` or earlier versions.

* Proxy:
  * Fixed an issue where proxies could infinitely retry failed requests to the `destination` controller when it returned a `FailedPrecondition`
  * The proxy's logging infrastructure has been updated to reduce memory pressure in high-connection environments.
  * Fixed a caching issue in the outbound proxy that would cause it to forward traffic to the wrong pod when running in ingress mode.
  * Fixed an issue where inbound TLS detection from non-meshed workloads could break
  * Fixed an issue where the admin server's HTTP detection would fail and not recover; these are now handled gracefully and without logging warnings
  * Control plane proxies no longer emit warnings about the resolution stream ending. This error was innocuous.
  * Bumped the proxy-init image to v1.3.11, which updates the Go version to 1.16.2
* Control Plane:
  * Fixed an issue where the destination service would respond with too large a header, resulting in http2 protocol errors
  * Fixed an issue where the destination control plane component sometimes returned endpoint addresses with a 0 port number while pods were undergoing a rollout (thanks @riccardofreixo!)
  * Fixed an issue where pod lookups by host IP and host port would fail even though the cluster has a matching pod
  * Updated the IP Watcher in destination to ignore pods in "Terminating" state (thanks @Wenliang-CHEN!)
  * Modified the proxy-injector to add the opaque ports annotation to pods if their namespace has it set
  * Added support for TrafficSplit `v1alpha2`
  * Updated all the control-plane components to use Go `1.16.2`.
* CLI:
  * Fixed an issue where the `linkerd identity` command returned the root certificate of a pod instead of its leaf certificates
  * Fixed an issue where the destination service would respond with too large a header, resulting in http2 protocol errors
  * Updated the release process to build Linkerd CLI binaries for Apple Silicon M1 chips
  * Improved error messaging when trying to install Linkerd on a cluster that already had Linkerd installed
  * Added a loading spinner to the `linkerd check` command when running extension checks
  * Added an `installNamespace` toggle in the jaeger extension's install (thanks @jijeesh!)
  * Updated the healthcheck pkg to make `hintBaseURL` configurable, useful for external extensions using that pkg
  * Fixed TCP read and write bytes/sec calculations to group by label based on inbound or outbound traffic
  * Fixed an issue in `linkerd inject` where the wrong annotation would be added when using the `--ingress` flag
  * Updated `linkerd repair` to be aware of the client and server versions
  * Updated `linkerd uninstall` to print an error message when there are no resources to uninstall.
* Helm:
  * Aligned the Helm installation heartbeat schedule to match that of the CLI
* Viz:
  * Fixed an issue where the topology graph in the dashboard was no longer draggable.
  * Updated the dashboard build to use webpack v5
  * Added CA certs to the Viz extension's metrics-api container so that it can validate the certificate of an external Prometheus
  * Removed components from the control plane dashboard that are now part of the Viz extension
  * Changed web's base image from Debian to scratch
* Multicluster:
  * Fixed an issue with Multicluster's service mirror where its endpoint repair retries were not properly rate limited
* Jaeger:
  * Fixed components in the Jaeger extension to set the correct Prometheus scrape values

## edge-21.4.3

This edge supersedes `edge-21.4.2` as a release candidate for `stable-2.10.1`! This release adds support for TrafficSplit `v1alpha2`. Additionally, it includes improvements to the web and `proxy-init` images.

* Added support for TrafficSplit `v1alpha2`
* Changed web's base image from Debian to scratch
* Bumped the `proxy-init` image to `v1.3.11`, which updates the Go version to `1.16.2`

## edge-21.4.2

This edge release is another candidate for `stable-2.10.1`! It includes some CLI fixes and addresses an issue where the outbound proxy would forward traffic to the wrong pod when running in ingress mode. Thank you to all of our users that have helped test and identify issues in 2.10!

* Fixed an issue in `linkerd inject` where the wrong annotation would be added when using the `--ingress` flag
* Fixed a nil pointer dereference in `linkerd repair` caused by a mismatch between CLI and server versions
* Removed an unnecessary error handling condition in the multicluster check (thanks @wangchenglong01!)
* Fixed a caching issue in the outbound proxy that would cause it to forward traffic to the wrong pod when running in ingress mode.
* Removed the unsupported `matches` field from the TrafficSplit CRD

## edge-21.4.1

This is a release candidate for `stable-2.10.1`! This includes several fixes for the core installation as well as the Multicluster, Jaeger, and Viz extensions. There are two significant proxy fixes that address TLS detection and admin server failures.
Thanks to all our 2.10 users who helped discover these issues!

* Fixed TCP read and write bytes/sec calculations to group by label based on inbound or outbound traffic
* Updated the dashboard build to use webpack v5
* Modified the proxy-injector to add the opaque ports annotation to pods if their namespace has it set
* Added CA certs to the Viz extension's `metrics-api` container so that it can validate the certificate of an external Prometheus
* Fixed an issue where inbound TLS detection from non-meshed workloads could break
* Fixed an issue where the admin server's HTTP detection would fail and not recover; these are now handled gracefully and without logging warnings
* Aligned the Helm installation heartbeat schedule to match that of the CLI
* Fixed an issue with Multicluster's service mirror where its endpoint repair retries were not properly rate limited
* Removed components from the control plane dashboard that are now part of the Viz extension
* Fixed components in the Jaeger extension to set the correct Prometheus scrape values

## edge-21.3.4

This release fixes some issues around publishing the CLI binary for Apple Silicon M1 chips. This release also includes some fixes and improvements to the dashboard, destination, and the CLI.

* Fixed an issue where the topology graph in the dashboard was no longer draggable
* Updated the IP Watcher in destination to ignore pods in "Terminating" state (thanks @Wenliang-CHEN!)
* Added an `installNamespace` toggle in the jaeger extension's install (thanks @jijeesh!)
* Updated the `healthcheck` pkg to make `hintBaseURL` configurable, useful for external extensions using that pkg
* Added multi-arch support for RabbitMQ integration tests (thanks @barkardk!)

## edge-21.3.3

This release includes various bug fixes and improvements to the CLI, the identity and destination control plane components, as well as the proxy. This release also ships with a new CLI binary for Apple Silicon M1 chips.

* Added new RabbitMQ integration tests (thanks @barkardk!)
* Updated the Go version to 1.16.2
* Fixed an issue where the `linkerd identity` command returned the root certificate of a pod instead of its leaf certificate
* Fixed an issue where the destination service would respond with too large a header, resulting in http2 protocol errors
* Updated the release process to build Linkerd CLI binaries for Apple Silicon M1 chips
* Improved error messaging when trying to install Linkerd on a cluster that already had Linkerd installed
* Fixed an issue where the `destination` control plane component sometimes returned endpoint addresses with a `0` port number while pods were undergoing a rollout (thanks @riccardofreixo!)
* Added a loading spinner to the `linkerd check` command when running extension checks
* Fixed an issue where pod lookups by host IP and host port would fail even though the cluster has a matching pod
* Control plane proxies no longer emit warnings about the resolution stream ending. This error was innocuous.
* Fixed an issue where proxies could infinitely retry failed requests to the `destination` controller when it returned a `FailedPrecondition`
* The proxy's logging infrastructure has been updated to reduce memory pressure in high-connection environments.

## stable-2.10.0

This release introduces Linkerd extensions. The default control plane no longer includes Prometheus, Grafana, the dashboard, or several other components that previously shipped by default. This results in a much smaller and simpler set of core functionalities.
Visibility and metrics functionality is now available in the Viz extension under the `linkerd viz` command. Cross-cluster communication functionality is now available in the Multicluster extension under the `linkerd multicluster` command. Distributed tracing functionality is now available in the Jaeger extension under the `linkerd jaeger` command. This release also introduces the ability to mark certain ports as "opaque", indicating that the proxy should treat the traffic as opaque TCP instead of attempting protocol detection. This allows the proxy to provide TCP metrics and mTLS for server-speaks-first protocols. It also enables support for TCP traffic in the Multicluster extension. **Upgrade notes**: Please see the [upgrade instructions](https://linkerd.io/2/tasks/upgrade/#upgrade-notice-stable-2100). * Proxy * Updated the proxy to use TLS version 1.3; support for TLS 1.2 remains enabled for compatibility with prior proxy versions * Improved support for server-speaks-first protocols by allowing ports to be marked as opaque, causing the proxy to skip protocol detection. Ports can be marked as opaque by setting the `config.linkerd.io/opaque-ports` annotation on the Pod and Service or by using the `--opaque-ports` flag with `linkerd inject` * Ports `25,443,587,3306,5432,11211` have been removed from the default skip ports; all traffic through those ports is now proxied and handled opaquely by default * Fixed an issue that could cause proxies in "ingress mode" (`linkerd.io/inject: ingress`) to use an excessive amount of memory * Improved diagnostic logging around "fail fast" and "max-concurrency exhausted" error messages * Added a new `/shutdown` admin endpoint that may only be accessed over the loopback network allowing batch jobs to gracefully terminate the proxy on completion * Control Plane * Removed all components and functionality related to visibility, tracing, or multicluster. These have been moved into extensions * Changed the identity controller to receive the trust anchor via environment variable instead of by flag; this allows the certificate to be loaded from a config map or secret (thanks @mgoltzsche!) * Added PodDisruptionBudgets to the control plane components so that they cannot be all terminated at the same time during disruptions (thanks @tustvold!) * CLI * Changed the `check` command to include each installed extension's `check` output; this allows users to check for proper configuration and installation of Linkerd without running a command for each extension * Moved the `metrics`, `endpoints`, and `install-sp` commands into subcommands under the `diagnostics` command * Added an `--opaque-ports` flag to `linkerd inject` to easily mark ports as opaque. 
  * Added the `repair` command, which will repopulate resources needed for properly upgrading a Linkerd installation
  * Added Helm-style `set`, `set-string`, `values`, `set-files` customization flags for the `linkerd install` and `linkerd upgrade` commands
  * Introduced the `linkerd identity` command, used to fetch the TLS certificates for injected pods (thanks @jimil749)
  * Removed the `get` and `logs` commands from the CLI
* Helm
  * Changed many Helm values; please see the upgrade notes
* Viz
  * Introduced the `linkerd viz` subcommand, which contains commands for installing the viz extension and all visibility commands
  * Updated the Web UI to only display the "Gateway" sidebar link when the multicluster extension is active
  * Added a `linkerd viz list` command to list pods with tap enabled
  * Fixed an issue where the `tap` APIServer would not refresh its certs automatically when provided externally, like through cert-manager
* Multicluster
  * Introduced the `linkerd multicluster` subcommand, which contains commands for installing the multicluster extension and all multicluster commands
  * Added support for cross-cluster TCP traffic
  * Updated the service mirror controller to copy the `config.linkerd.io/opaque-ports` annotation when mirroring services so that cross-cluster traffic can be correctly handled as opaque
  * Added support for multicluster gateways of types other than LoadBalancer (thanks @DaspawnW!)
* Jaeger
  * Introduced the `linkerd jaeger` subcommand, which contains commands for installing the jaeger extension and all tracing commands
  * Added a `linkerd jaeger list` command to list pods with tracing enabled

This release includes changes from a massive list of contributors. A special thank-you to everyone who helped make this release possible:

[Lutz Behnke](https://github.com/cypherfox)
[Björn Wenzel](https://github.com/DaspawnW)
[Filip Petkovski](https://github.com/fpetkovski)
[Simon Weald](https://github.com/glitchcrab)
[GMarkfjard](https://github.com/GMarkfjard)
[hodbn](https://github.com/hodbn)
[Hu Shuai](https://github.com/hs0210)
[Jimil Desai](https://github.com/jimil749)
[jiraguha](https://github.com/jiraguha)
[Joakim Roubert](https://github.com/joakimr-axis)
[Josh Soref](https://github.com/jsoref)
[Kelly Campbell](https://github.com/kellycampbell)
[Matei David](https://github.com/mateiidavid)
[Mayank Shah](https://github.com/mayankshah1607)
[Max Goltzsche](https://github.com/mgoltzsche)
[Mitch Hulscher](https://github.com/mhulscher)
[Eugene Formanenko](https://github.com/mo4islona)
[Nathan J Mehl](https://github.com/n-oden)
[Nicolas Lamirault](https://github.com/nlamirault)
[Oleh Ozimok](https://github.com/oleh-ozimok)
[Piyush Singariya](https://github.com/piyushsingariya)
[Naga Venkata Pradeep Namburi](https://github.com/pradeepnnv)
[rish-onesignal](https://github.com/rish-onesignal)
[Shai Katz](https://github.com/shaikatz)
[Takumi Sue](https://github.com/tkms0106)
[Raphael Taylor-Davies](https://github.com/tustvold)
[Yashvardhan Kukreja](https://github.com/yashvardhan-kukreja)

## edge-21.3.2

This edge release is another release candidate for stable-2.10 and fixes some final bugs found in testing. A big thank you to users who have helped us identify these issues!
* Fixed an issue with the service profile validating webhook that prevented service profiles from being added or updated
* Updated the `check` command output hint anchors to match Linkerd component names
* Fixed a permission issue with the Viz extension's tap admin cluster role by adding namespace listing to the allowed actions
* Fixed an issue with the proxy where connections would not be torn down when communicating with a defunct endpoint
* Improved diagnostic logging in the proxy
* Fixed an issue with the Viz extension's Prometheus template that prevented users from specifying a log level flag for that component (thanks @n-oden!)
* Fixed a template parsing issue that prevented users from specifying additional ignored inbound ports through Helm's `--set` flag
* Fixed an issue with the proxy where non-HTTP streams could sometimes hang due to TLS buffering

## edge-21.3.1

This edge release is another release candidate, bringing us closer to `stable-2.10.0`! It fixes the Helm install/upgrade procedure and ships some new CLI commands, among other improvements.

* Fixed Helm install/upgrade, which was failing when not explicitly setting `proxy.image.version`
* Added a warning in the dashboard when viewing tap streams from resources that don't have tap enabled
* Added the command `linkerd viz list` to list meshed pods and indicate which can be tapped, which need to be restarted before they can be tapped, and which have tap disabled
* Similarly, added the command `linkerd jaeger list` to list meshed pods and indicate which will participate in tracing
* Added the `--opaque-ports` flag to `linkerd inject` to specify the list of opaque ports when injecting pods (and services)
* Simplified the output of `linkerd jaeger check`, combining the checks for the status of each component into a single check
* Changed the destination component to receive the list of default opaque ports set during install so that it's properly reflected during discovery
* Moved the proxy server's I/O-related "Connection closed" messages from the info level to debug, as they were not providing actionable information

## edge-21.2.4

This edge is a release candidate for `stable-2.10.0`! It wraps up the functional changes planned for the upcoming stable release. We hope you can help us test this in your staging clusters so that we can address anything unexpected before an official stable release. This release introduces support for CLI extensions. The Linkerd `check` command will now invoke each extension's `check` command so that users can check the health of their Linkerd installation and extensions with one command. Additional documentation will follow for developers interested in creating extensions. Additionally, there is no longer a default list of ports skipped by the proxy. These ports have been moved to opaque ports, meaning protocols like MySQL will be encrypted by default and without user input.

* Cleaned up entries in `values.yaml` by removing `do not edit` entries; they are now hardcoded in the templates
* Added the count of service profiles installed in a cluster to the Heartbeat metrics
* Fixed CLI commands which would unnecessarily print usage instructions after encountering API errors (thanks @piyushsingariya!)
* Fixed the `install` command so that it errors after detecting there is an existing Linkerd installation in the cluster
* Changed the identity controller to receive the trust anchor via environment variable instead of by flag; this allows the certificate to be loaded from a config map or secret (thanks @mgoltzsche!)
* Updated the proxy to use TLS version 1.3; support for TLS 1.2 remains enabled for compatibility with prior proxy versions
* The opaque ports annotation is now supported on services, enabling users to use this annotation on mirrored services in multicluster installations
* Reverted the renaming of the `mirror.linkerd.io` label
* Ports `25,443,587,3306,5432,11211` have been removed from the default skip ports; all traffic through those ports is now proxied and handled opaquely by default
* Errors configuring the firewall in CNI are propagated so that they can be handled by the user
* Removed Viz extension warnings from the `check --proxy` command when tap is not configured for pods; this is now handled by the `viz tap` command
* Added support for CLI extensions as well as ensuring their `check` commands are invoked by Linkerd's `check` command
* Moved the `metrics`, `endpoints`, and `install-sp` commands into subcommands under the `diagnostics` command.
* Removed the `linkerd-` prefix from non-cluster scoped resources in the Viz and Jaeger extensions
* Added the linkerd-await helper to all Linkerd containers so that the proxy can initialize before the components start making outbound connections
* Removed the `tcp_connection_duration_ms` histogram from the metrics export to fix high cardinality issues that surfaced through high memory usage

## edge-21.2.3

This release wraps up most of the functional changes planned for the upcoming `stable-2.10.0` release. Try this edge release in your staging cluster and let us know if you see anything unexpected!

* **Breaking change**: Changed the multicluster `Service`-export annotation from `mirror.linkerd.io/exported` to `multicluster.linkerd.io/export`
* Updated the proxy-injector to set the `config.linkerd.io/opaque-ports` annotation on newly-created `Service` objects when the annotation is set on their parent `Namespace`
* Updated the proxy-injector to ignore pods that have disabled `automountServiceAccountToken` (thanks @jimil749)
* Updated the proxy to log warnings when control plane components are unresolvable
* Updated the Destination controller to cache node topology metadata (thanks @fpetkovski)
* Updated the CLI to handle API errors without printing the CLI usage (thanks @piyushsingariya)
* Updated the Web UI to only display the "Gateway" sidebar link when the multicluster extension is active
* Fixed the Web UI on Chrome v88 (thanks @kellycampbell)
* Improved `install` and `uninstall` behavior for extensions to prevent control-plane components from being left in a broken state
* Docker images are now hosted on the `cr.l5d.io` registry
* Updated base docker images to buster-20210208-slim
* Updated the Go version to 1.14.15
* Updated the proxy to prevent outbound connections to localhost to protect against traffic loops

## edge-21.2.2

This edge release introduces support for multicluster TCP! The `repair` command was added, which will repopulate resources needed for upgrading from a `2.9.x` installation. There will be an error message during the upgrade process indicating that this command should be run so that users do not need to guess. Lastly, it contains a breaking change for Helm users.
The `global` field has been removed from the Helm chart now that it is no longer needed. Users will need to pass in the identity certificates again—along with any other customizations, no longer rooted at `global`. * **Breaking change**: Removed the `Global` field from the Linkerd Helm chart now that it is unused because of the extension model * Added the `repair` command which will repopulate resources needed for properly upgrading a Linkerd installation * Fixed the spelling of the `sidecarContainers` key in the Viz extension Helm chart to match that of the template (thanks @n-oden!) * Added the `tapInjector.logLevel` key to the Viz extension helm chart so that the log level of the component can be configured * Removed the `--disable-tap` flag from the `inject` command now that tap is no longer part of the core installation (thanks @mayankshah1607!) * Changed proxy configuration to use fully-qualified DNS names to avoid extra search paths in DNS resolutions * Changed the `check` command to include each installed extension's `check` output; this allows users to check for proper configuration and installation of Linkerd without running a command for each extension * Added proxy support for TCP traffic to the multicluster gateways ## edge-21.2.1 This edge release continues improving the proxy's diagnostics and also avoids timing out when the HTTP protocol detection fails. Additionally, old resource versions were upgraded to avoid warnings in k8s v1.19. Finally, it comes with lots of CLI improvements detailed below. * Improved the proxy's diagnostic metrics to help us get better insights into services that are in fail-fast * Improved the proxy's HTTP protocol detection to prevent timeout errors * Upgraded CRD and webhook config resources to get rid of warnings in k8s v1.19 (thanks @mateiidavid!) * Added viz components into the Linkerd Health Grafana charts * Had the tap injector add a `viz.linkerd.io/tap-enabled` annotation when injecting a pod, which allowed providing clearer feedback for the `linkerd tap` command * Had the jaeger injector add a `jaeger.linkerd.io/tracing-enabled` annotation when injecting a pod, which also allowed providing better feedback for the `linkerd jaeger check` command * Improved the `linkerd uninstall` command so it fails gracefully when there still are injected resources in the cluster (a `--force` flag was provided too) * Moved the `linkerd profile --tap` functionality into a new command `linkerd viz profile --tap`, given tap now belongs to the viz extension * Expanded the `linkerd viz check` command to include data-plane checks * Cleaned-up YAML in templates that was incompatible with SOPS (thanks @tkms0106!) ## edge-21.1.4 This edge release continues to polish the Linkerd extension model and improves the robustness of the opaque transport. 
* Improved the consistency of behavior of the `check` commands between Linkerd extensions
* Fixed an issue where Linkerd extension commands could be run before the extension was fully installed
* Renamed some extension Helm charts for consistency:
  * jaeger -> linkerd-jaeger
  * linkerd2-multicluster -> linkerd-multicluster
  * linkerd2-multicluster-link -> linkerd-multicluster-link
* Fixed an issue that could cause the inbound proxy to fail meshed HTTP/1 requests from older proxies (from the stable-2.8.x vintage)
* Changed opaque-port transport to be advertised via ALPN so that new proxies will not initiate opaque-transport connections to proxies from prior edge releases
* Added inbound proxy transport metrics with `tls="passthru"` when forwarding non-mesh TLS connections
* Thanks to @hs0210 for adding new unit tests!

## edge-21.1.3

This edge release improves proxy diagnostics and recovery in situations where the proxy is temporarily unable to route requests. Additionally, the `viz` and `multicluster` CLI sub-commands have been updated for consistency.

Full release notes:

* Added Helm-style `set`, `set-string`, `values`, and `set-files` customization flags for the `linkerd install` and `linkerd multicluster install` commands
* Fixed an issue where `linkerd metrics` could return metrics for the incorrect set of pods when there are overlapping label selectors
* Added the tap-injector to linkerd-viz, which is responsible for adding the tap service name environment variable to the Linkerd proxy container
* Improved diagnostics when the proxy is temporarily unable to route requests
* Made proxy recovery for a service more robust when the proxy is unable to route requests, even when new requests are being received
* Added `client` and `server` prefixes to socket-level errors in the proxy logs to indicate which side of the proxy encountered the error
* Improved jaeger-injector reliability in environments with many resources by adding watch RBAC permissions
* Added a check to confirm whether the jaeger-injector pod is in a running state (thanks @yashvardhan-kukreja!)
* Fixed a crash in the destination controller when EndpointSlices are enabled (thanks @oleh-ozimok!)
* Added a `linkerd viz check` sub-command to verify the states of the `linkerd-viz` components
* Added a `log-format` flag to optionally output control plane component logs as JSON (thanks @mo4islona!)
* Updated the logic in the `metrics` and `profile` subcommands to use the namespace specified by the `current-context` of the KUBECONFIG, so that the `--namespace` flag is no longer necessary to query resources in the current namespace. Queries for resources in other namespaces still require the `--namespace` flag
* Added a new `linkerd-metrics-api` pod, set up by `linkerd viz install`, that manages all functionality dependent on Prometheus, thus removing most of the dependencies on Prometheus from the Linkerd core installation
* Removed the need to have linkerd-viz installed for the `linkerd multicluster check` command to work properly

## edge-21.1.2

This edge release continues the work on decoupling non-core Linkerd components. Commands that use the viz extension, i.e. `dashboard`, `edges`, `routes`, `stat`, `tap`, and `top`, have been moved under the `viz` sub-command. These commands are still available at the root but are marked as deprecated and will be removed in a later stable release. This release also upgrades the proxy's dependencies to the Tokio v1 ecosystem.
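As a rough sketch of what this deprecation pattern looks like in a Cobra-based CLI like Linkerd's (the command wiring below is hypothetical and greatly simplified, not the actual `linkerd` source):

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// newStatCmd stands in for the real `stat` implementation; its name and
// behavior here are illustrative only.
func newStatCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "stat [flags] (RESOURCES)",
		Short: "Display traffic stats about one or many resources",
		RunE: func(cmd *cobra.Command, args []string) error {
			fmt.Println("collecting stats...") // placeholder for the real logic
			return nil
		},
	}
}

func main() {
	root := &cobra.Command{Use: "linkerd"}

	// New home for the command: `linkerd viz stat`.
	viz := &cobra.Command{Use: "viz", Short: "Commands provided by the viz extension"}
	viz.AddCommand(newStatCmd())
	root.AddCommand(viz)

	// Keep the old top-level `linkerd stat` as a deprecated alias so existing
	// scripts continue to work while printing a deprecation notice.
	legacy := newStatCmd()
	legacy.Deprecated = "use `linkerd viz stat` instead"
	root.AddCommand(legacy)

	_ = root.Execute()
}
```

Cobra prints a deprecation notice before running any command whose `Deprecated` field is set, which mirrors the behavior described above for the old root-level commands.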
* Moved sub-commands that use the viz extension under `viz` * Started ignoring pods with `Succeeded` status when watching IP addresses in destination. This allows the re-use of IPs of terminated pods * Support Bring your own Jaeger use-case by adding `collector.jaegerAddr` in the Jaeger extension. * Fixed an issue with the generation of working manifests in the `podAntiAffinity` use-case * Added support for the modification of proxy resources in the viz extension through `values.yaml` in Helm and flags in CLI. * Improved error reporting for port-forward logic with namespace and pod data, used across dashboard, checks, etc (thanks @piyushsingariya) * Added support to disable the rendering of `linkerd-viz` namespace resource in the viz extension (thanks @nlamirault) * Made service-profile generation work offline with `--ignore-cluster` flag (thanks @piyushsingariya) * Upgraded the proxy's dependencies to the Tokio v1 ecosystem ## edge-21.1.1 This edge release introduces a new "opaque transport" feature that allows the proxy to securely transport server-speaks-first and otherwise opaque TCP traffic. Using the `config.linkerd.io/opaque-ports` annotation on pods and namespaces, users can configure ports that should skip the proxy's protocol detection. Additionally, a new `linkerd-viz` extension has been introduced that separates the installation of the Grafana, Prometheus, web, and tap components. This extension closely follows the Jaeger and multicluster extensions; users can `install` and `uninstall` with the `linkerd viz ..` command as well as configure for HA with the `--ha` flag. The `linkerd viz install` command does not have any cli flags to customize the install directly, but instead follows the Helm way of customization by using flags such as `set`, `set-string`, `values`, `set-files`. Finally, a new `/shutdown` admin endpoint that may only be accessed over the loopback network has been added. This allows batch jobs to gracefully terminate the proxy on completion. The `linkerd-await` utility can be used to automate this. * Added a new `linkerd multicluster check` command to validate that the `linkerd-multicluster` extension is working correctly * Fixed description in the `linkerd edges` command (thanks @jsoref!) * Moved the Grafana, Prometheus, web, and tap components into a new Viz chart, following the same extension model that multicluster and Jaeger follow * Introduced a new "opaque transport" feature that allows the proxy to securely transport server-speaks-first and otherwise opaque TCP traffic * Removed the check comparing the `ca.crt` field in the identity issuer secret and the trust anchors in the Linkerd config; these values being different is not a failure case for the `linkerd check` command (thanks @cypherfox!) * Removed the Prometheus check from the `linkerd check` command since it now depends on a component that is installed with the Viz extension * Fixed error messages thrown by the cert checks in `linkerd check` (thanks @pradeepnnv!) * Added PodDisruptionBudgets to the control plane components so that they cannot be all terminated at the same time during disruptions (thanks @tustvold!) * Fixed an issue that displayed the wrong `linkerd.io/proxy-version` when it is overridden by annotations (thanks @mateiidavid!) * Added support for custom registries in the `linkerd-viz` helm chart (thanks @jimil749!) 
* Renamed `proxy-mutator` to `jaeger-injector` in the `linkerd-jaeger` extension * Added a new `/shutdown` admin endpoint that may only be accessed over the loopback network allowing batch jobs to gracefully terminate the proxy on completion * Introduced the `linkerd identity` command, used to fetch the TLS certificates for injected pods (thanks @jimil749) * Fixed an issue with the CNI plugin where it was incorrectly terminating and emitting error events (thanks @mhulscher!) * Re-added support for non-LoadBalancer service types in the `linkerd-multicluster` extension ## edge-20.12.4 This edge release adds support for the `config.linkerd.io/opaque-ports` annotation on pods and namespaces, to configure ports that should skip the proxy's protocol detection. In addition, it adds new CLI commands related to the `linkerd-jaeger` extension, fixes bugs in the CLI `install` and `upgrade` commands and Helm charts, and fixes a potential false positive in the proxy's HTTP protocol detection. Finally, it includes improvements in proxy performance and memory usage, including an upgrade for the proxy's dependency on the Tokio async runtime. * Added support for the `config.linkerd.io/opaque-ports` annotation on pods and namespaces, to indicate to the proxy that some ports should skip protocol detection * Fixed an issue where `linkerd install --ha` failed to honor flags * Fixed an issue where `linkerd upgrade --ha` can override existing configs * Added missing label to the `linkerd-config-overrides` secret to avoid breaking upgrades performed with the help of `kubectl apply --prune` * Added a missing icon to Jaeger Helm chart * Added new `linkerd jaeger check` CLI command to validate that the `linkerd-jaeger` extension is working correctly * Added new `linkerd jaeger uninstall` CLI command to print the `linkerd-jaeger` extension's resources so that they can be piped into `kubectl delete` * Fixed an issue where the `linkerd-cni` daemonset may not be installed on all intended nodes, due to missing tolerations to the `linkerd-cni` Helm chart (thanks @rish-onesignal!) * Fixed an issue where the `tap` APIServer would not refresh its certs automatically when provided externally—like through cert-manager * Changed the proxy's cache eviction strategy to reduce memory consumption, especially for busy HTTP/1.1 clients * Fixed an issue in the proxy's HTTP protocol detection which could cause false positives for non-HTTP traffic * Increased the proxy's default dispatch timeout to 5 seconds to accommodate connection pools which might open connections without immediately making a request * Updated the proxy's Tokio dependency to v0.3 ## edge-20.12.3 This edge release is functionally the same as `edge-20.12.2`. It fixes an issue that prevented the release build from occurring. ## edge-20.12.2 * Fixed an issue where the `proxy-injector` and `sp-validator` did not refresh their certs automatically when provided externally—like through cert-manager * Added support for overrides flags to the `jaeger install` command to allow setting Helm values when installing the Linkerd-jaeger extension * Added missing Helm values to the multicluster chart (thanks @DaspawnW!) * Moved tracing functionality to the `linkerd-jaeger` extension * Fixed various issues in developer shell scripts (thanks @joakimr-axis!) * Fixed an issue where `install --ha` was only partially applying the high availability config * Updated RBAC API versions in the CNI chart (thanks @glitchcrab!) 
* Fixed an issue where, when TLS credentials were changed during upgrades, the Linkerd webhooks would not restart, leaving them using older credentials and failing requests
* Stopped publishing the multicluster link chart, as its primary use case is the `multicluster link` command rather than being installed through Helm
* Added service mirror error logs for when the multicluster gateway's hostname cannot be resolved

## edge-20.12.1

This edge release continues the work of decoupling non-core Linkerd components by moving more tracing-related functionality into the Linkerd-jaeger extension.

* Continued work on moving tracing functionality from the main control plane into the `linkerd-jaeger` extension
* Fixed a potential panic in the proxy when looking up a socket's peer address while under high load
* Added automatic readme generation for charts (thanks @GMarkfjard!)
* Fixed zsh completion for the CLI (thanks @jiraguha!)
* Added support for multicluster gateways of types other than LoadBalancer (thanks @DaspawnW!)

## edge-20.11.5

This edge release improves the proxy's support for high-traffic workloads. It also contains the first steps towards decoupling non-core Linkerd components, the first iteration being a new `linkerd jaeger` sub-command for installing tracing. Please note this is still a work in progress.

* Addressed some issues reported around clients seeing max-concurrency errors by increasing the default in-flight request limit to 100K pending requests
* Have the proxy appropriately set `content-type` when synthesizing gRPC error responses
* Bumped the `proxy-init` image to `v1.3.8`, which is based on `buster-20201117-slim`, to reduce potential security vulnerabilities
* No longer panic in rare cases when `linkerd-config` doesn't have an entry for `Global` configs (thanks @hodbn!)
* Work in progress: the `/jaeger` directory now contains the charts and commands for installing the tracing component

## edge-20.11.4

* Fixed an issue in the destination service where endpoints always included a protocol hint, regardless of whether the controller label was present or not

## edge-20.11.3

This edge release improves support for CNI by properly handling parameters passed to the `nsenter` command, relaxes checks on root and intermediate certificates (following X509 best practices), and fixes two issues: one that prevented installation of the control plane into a custom namespace and one where endpoint information failed to update when a headless service was modified. This release also improves Linkerd proxy performance by eliminating unnecessary endpoint resolutions for TCP traffic and properly tearing down server-side connections when errors occur.

* Added HTTP/2 keepalive PING frames
* Removed logic to avoid redundant TCP endpoint resolution
* Fixed an issue where server-side connections were not torn down when an error occurs
* Updated `linkerd check` so that it doesn't attempt to validate the subject alternative name (SAN) on root and intermediate certificates.
SANs for leaf certificates will continue to be validated * Fixed a CLI issue where the `linkerd-namespace` flag is not honored when passed to the `install` and `upgrade` commands * Fixed an issue where the proxy does not receive updated endpoint information when a headless service is modified * Updated the control plane Docker images to use `buster-20201117-slim` to reduce potential security vulnerabilities * Updated the proxy-init container to `v1.3.7` which fixes CNI issues in certain environments by properly parsing `nsenter` args ## edge-20.11.2 This edge release reduces memory consumption of Linkerd proxies which maintain many idle connections (such as Prometheus). It also removes some obsolete commands from the CLI and allows setting custom annotations on multicluster gateways. * Reduced the default idle connection timeout to 5s for outbound clients and 20s for inbound clients to reduce the proxy's memory footprint, especially on Prometheus instances * Added support for setting annotations on the multicluster gateway in Helm which allows setting the load balancer as internal (thanks @shaikatz!) * Removed the `get` and `logs` command from the CLI ## stable-2.9.0 This release extends Linkerd's zero-config mutual TLS (mTLS) support to all TCP connections, allowing Linkerd to transparently encrypt and authenticate all TCP connections in the cluster the moment it's installed. It also adds ARM support, introduces a new multi-core proxy runtime for higher throughput, adds support for Kubernetes service topologies, and lots, lots more, as described below: * Proxy * Performed internal improvements for lower latencies under high concurrency * Reduced performance impact of logging, especially when the `debug` or `trace` log levels are disabled * Improved error handling for DNS errors encountered when discovering control plane addresses; this can be common during installation before all components have been started, allowing linkerd to continue to operate normally in HA during node outages * Control Plane * Added support for [topology-aware service routing](https://kubernetes.io/docs/concepts/services-networking/service-topology/) to the Destination controller; when providing service discovery updates to proxies the Destination controller will now filter endpoints based on the service's topology preferences * Added support for the new Kubernetes [EndpointSlice](https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/) resource to the Destination controller; Linkerd can be installed with `--enable-endpoint-slices` flag to use this resource rather than the Endpoints API in clusters where this new API is supported * Dashboard * Added new Spanish translations (please help us translate into your language!) 
* Added new section for exposing multicluster gateway metrics * CLI * Renamed the `--addon-config` flag to `--config` to clarify this flag can be used to set any Helm value * Added fish shell completions to the `linkerd` command * Multicluster * Replaced the single `service-mirror` controller with separate controllers that will be installed per target cluster through `linkerd multicluster link` * Changed the mechanism for mirroring services: instead of relying on annotations on the target services, now the source cluster should specify which services from the target cluster should be exported by using a label selector * Added support for creating multiple service accounts when installing multicluster with Helm to allow more granular revocation * Added a multicluster `unlink` command for removing multicluster links * Prometheus * Moved Linkerd's bundled Prometheus into an add-on (enabled by default); this makes the Linkerd Prometheus more configurable, gives it a separate upgrade lifecycle from the rest of the control plane, and allows users to disable the bundled Prometheus instance * The long-awaited Bring-Your-Own-Prometheus case has been finally addressed: added `global.prometheusUrl` to the Helm config to have linkerd use an external Prometheus instance instead of the one provided by default * Added an option to persist data to a volume instead of memory, so that historical metrics are available when Prometheus is restarted * The helm chart can now configure persistent storage and limits * Other * Added a new `linkerd.io/inject: ingress` annotation and accompanying `--ingress` flag to the `inject` command, to configure the proxy to support service profiles and enable per-route metrics and traffic splits for HTTP ingress controllers * Changed the type of the injector and tap API secrets to `kubernetes.io/tls` so they can be provisioned by cert-manager * Changed default docker image repository to `ghcr.io` from `gcr.io`; **Users who pull the images into private repositories should take note of this change** * Introduced support for authenticated docker registries * Simplified the way that Linkerd stores its configuration; configuration is now stored as Helm values in the `linkerd-config` ConfigMap * Added support for Helm configuration of per-component proxy resources requests This release includes changes from a massive list of contributors. A special thank-you to everyone who helped make this release possible: [Abereham G Wodajie](https://github.com/Abrishges), [Alexander Berger](https://github.com/alex-berger), [Ali Ariff](https://github.com/aliariff), [Arthur Silva Sens](https://github.com/ArthurSens), [Chris Campbell](https://github.com/campbel), [Daniel Lang](https://github.com/mavrick), [David Tyler](https://github.com/DaveTCode), [Desmond Ho](https://github.com/DesmondH0), [Dominik Münch](https://github.com/muenchdo), [George Garces](https://github.com/jgarces21), [Herrmann Hinz](https://github.com/HerrmannHinz), [Hu Shuai](https://github.com/hs0210), [Jeffrey N. 
Davis](https://github.com/penland365), [Joakim Roubert](https://github.com/joakimr-axis), [Josh Soref](https://github.com/jsoref), [Lutz Behnke](https://github.com/cypherfox), [MaT1g3R](https://github.com/MaT1g3R), [Marcus Vaal](https://github.com/mvaal), [Markus](https://github.com/mbettsteller), [Matei David](https://github.com/mateiidavid), [Matt Miller](https://github.com/mmiller1), [Mayank Shah](https://github.com/mayankshah1607), [Naseem](https://github.com/naseemkullah), [Nil](https://github.com/c-n-c), [OlivierB](https://github.com/olivierboudet), [Olukayode Bankole](https://github.com/rbankole), [Paul Balogh](https://github.com/javaducky), [Rajat Jindal](https://github.com/rajatjindal), [Raphael Taylor-Davies](https://github.com/tustvold), [Simon Weald](https://github.com/glitchcrab), [Steve Gray](https://github.com/steve-gray), [Suraj Deshmukh](https://github.com/surajssd), [Tharun Rajendran](https://github.com/tharun208), [Wei Lun](https://github.com/WLun001), [Zhou Hao](https://github.com/zhouhao3), [ZouYu](https://github.com/Hellcatlk), [aimbot31](https://github.com/aimbot31), [iohenkies](https://github.com/iohenkies), [memory](https://github.com/memory), and [tbsoares](https://github.com/tbsoares) ## edge-20.11.1 This edge supersedes edge-20.10.6 as a release candidate for stable-2.9.0. * Fixed issue where the `check` command would error when there is no Prometheus configured * Fixed recent regression that caused multicluster on EKS to not work properly * Changed the `check` command to warn instead of error when webhook certificates are near expiry * Added the `--ingress` flag to the `inject` command which adds the recently introduced `linkerd.io/inject: ingress` annotation * Fixed issue with upgrades where external certs would be fetched and stored even though this does not happen on fresh installs with externally created certs * Fixed issue with upgrades where the issuer cert expiration was being reset * Removed the `--registry` flag from the `multicluster install` command * Removed default CPU limits for the proxy and control plane components in HA mode ## edge-20.10.6 This edge supersedes edge-20.10.5 as a release candidate for stable-2.9.0. It adds a new `linkerd.io/inject: ingress` annotation to support service profiles and enable per-route metrics and traffic splits for HTTP ingress controllers * Added a new `linkerd.io/inject: ingress` annotation to configure the proxy to support service profiles and enable per-route metrics and traffic splits for HTTP ingress controllers * Reduced performance impact of logging in the proxy, especially when the `debug` or `trace` log levels are disabled * Fixed spurious warnings logged by the `linkerd profile` CLI command ## edge-20.10.5 This edge supersedes edge-20.10.4 as a release candidate for stable-2.9.0. It adds a fix for updating the destination service when there are no endpoints * Added a fix to clear the EndpointTranslator state when it gets a `NoEndpoints` message. This ensures that the clients get the correct set of endpoints during an update. ## edge-20.10.4 This edge release is a release candidate for stable-2.9.0. For the proxy, there have been changes to improve performance, remove unused code, and configure ports that can be ignored by default. Also, this edge release adds enhancements to the multicluster configuration and observability, adds more translations to the dashboard, and addresses a bug in the CLI. 
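To make the `linkerd.io/inject: ingress` and `config.linkerd.io/opaque-ports` annotations mentioned in the notes above more concrete, here is a hypothetical Go sketch that builds a pod template carrying them. The Deployment name, image, and port value are invented for illustration, fields such as the selector are omitted for brevity, and this is not code from Linkerd itself.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	dep := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "my-ingress", Namespace: "ingress"},
		Spec: appsv1.DeploymentSpec{
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Annotations: map[string]string{
						// Inject the proxy in ingress mode (what the `--ingress` flag adds).
						"linkerd.io/inject": "ingress",
						// Skip protocol detection on a server-speaks-first port (example value).
						"config.linkerd.io/opaque-ports": "3306",
					},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.19"}},
				},
			},
		},
	}

	// Print the (partial) manifest; a real Deployment would also need labels
	// and a selector.
	out, err := yaml.Marshal(dep)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```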
* Added more Spanish translations to the dashboard and more labels that can be translated * Added support for creating multiple service accounts when installing multicluster with Helm to allow more granular revocation * Renamed `global.proxy.destinationGetNetworks` to `global.clusterNetworks`. This is a cluster-wide setting and can no longer be overridden per-pod * Fixed an empty multicluster Grafana graph which used a deprecated label * Added the control plane tracing ServiceAccounts to the linkerd-psp RoleBinding so that it can be used in environments where PodSecurityPolicy is enabled * Enhanced EKS support by adding `100.64.0.0/10` to the set of discoverable networks * Fixed a bug in the way that the `--all-namespaces` flag is handled by the `linkerd edges` command * Added a default set of ports to bypass the proxy for server-first, https, and memcached traffic ## edge-20.10.3 This edge release is a release candidate for stable-2.9.0. It overhauls the discovery and routing logic implemented by the proxy, simplifies the way that Linkerd stores configuration, and adds new Helm values to configure additional labels, annotations, and namespace selectors for webhooks. * Added podLabels and podAnnotations Helm values to allow adding additional labels or annotations to Linkerd control plane pods (thanks @tustvold!) * Added namespaceSelector Helm value for configuring the namespace selector used by admission webhooks (thanks @tustvold!) * Expanded the 'linkerd edges' command to show TCP connections * Overhauled the discovery and routing logic implemented by the proxy: * The `l5d-dst-override` header is no longer honored * When the application attempts to connect to a pod IP, the proxy no longer load balances these requests among all pods in the service. The proxy will now honor session-stickiness as selected by an application-level load balancer * `TrafficSplits` are only applied when a client targets a service's IP * The proxy no longer performs DNS "canonicalization" to translate relative host header names to a fully-qualified form * Simplified the way that Linkerd stores its configuration. Configuration is now stored as Helm values in the linkerd-config ConfigMap * Renamed the --addon-config flag to --config to clarify this flag can be used to set any Helm value ## edge-20.10.2 This edge release adds more improvements for mTLS for all TCP traffic. It also includes significant internal improvements to the way Linkerd configuration is stored within the cluster. * Changed TCP metrics exported by the proxy to ensure that peer identities are encoded via the `client_id` and `server_id` labels. * Removed the dependency of control plane components on `linkerd-config` * Updated the data structure `proxy-injector` uses to derive the configuration used when injecting workloads ## edge-20.10.1 This edge release includes a couple of external contributions towards improved cert-manager support and Grafana charts fixes, among other enhancements. * Changed the type of the injector and tap API secrets to `kubernetes.io/tls`, so they can be provisioned by cert-manager (thanks @cypherfox!) * Fixed the "Kubernetes cluster monitoring" Grafana dashboard that had a few charts with incomplete data (thanks @aimbot31!) 
* Fixed the `service-mirror` multicluster component so that it retries connections to the target cluster's Kubernetes API when it's not reachable, instead of blocking * Increased the proxy's default timeout for DNS resolution to 500ms, as there were reports that 100ms was too restrictive ## edge-20.9.4 This edge release introduces support for authenticated docker registries and fixes a recent multicluster regression. * Fixed a regression in multicluster gateway configurations that would forbid inbound gateway traffic * Upgraded bundled Grafana to v7.1.5 * Enabled Jaeger receiver in collector configuration in Helm chart (thanks @olivierboudet!) * Fixed skip port configuration being skipped in CNI plugin * Introduced support for authenticated docker registries (thanks @c-n-c!) ## edge-20.9.3 This edge release includes fixes and updates for the control plane and CLI. * Added `--dest-cni-bin-dir` flag to the `linkerd install-cni` command, to configure the directory on the host where the CNI binary will be placed * Removed `collector.name` and `jaeger.name` config fields from the tracing addon * Updated Jaeger to 1.19.2 * Fixed a warning about deprecated Go packages in controller container logs ## edge-20.9.2 This edge release continues the work of adding support for mTLS for all TCP traffic and changes the default container registry to `ghcr.io` from `gcr.io`. If you are upgrading from `stable-2.8.x` with the Linkerd CLI using the `linkerd upgrade` command, you must add the `--addon-overwrite` flag to ensure that the grafana image is properly set. * Removed the default timeout for ServiceProfiles so that ServiceProfile routes behave the same as when there is no ServiceProfile definition * Changed default docker image repository to ghcr.io from gcr.io. **Users who pull the images into private repositories should take note of this change** * Added endpoint labels to outbound TCP metrics to provide more context and detail for the metrics, add load balancing to TCP connections (bypassing kube-proxy), and secure the connection with mTLS when both endpoints are meshed * Made unnamed ServiceProfile discovery configurable using the `proxy.destinationGetNetworks` variable to set the `LINKERD2_PROXY_DESTINATION_PROFILE_NETWORKS` variable in the proxy chart template * Added TLS certificate validation for the Injector, SP Validator, and Tap webhooks to the `linkerd check` command ## edge-20.9.1 This edge release contains an important proxy update that allows linkerd to continue to operate normally in HA during node outages. We're also adding full Kubernetes 1.19 support! * Improved the proxy's error handling for DNS errors encountered when discovering control plane addresses, which can be common during installation, before all components have been started * The destination and identity services had to be made headless in order to support that new controller discovery (which now can leverage SRV records) * Use SAN fields when generating the linkerd webhook configs; this completes the Kubernetes 1.19 support which enforces them * Fixed `linkerd check` for multicluster that was spuriously claiming the absence of some resources * Improved the injection test cleanup (thanks @zhouhao3!) * Added ability to run the integration test suite using a cluster in an ARM architecture (thanks @aliariff!) ## edge-20.8.4 * Fixed a problem causing the `enable-endpoint-slices` flag to not be persisted when set via `linkerd upgrade` (thanks @Matei207!) 
* Removed SMI-Metrics templates and experimental sub-commands
* Use `--frozen-lockfile` to avoid accidental update of dashboard JS dependencies in CI (thanks @tharun208!)

## edge-20.8.3

This edge release adds support for [topology-aware service routing][topology] to the Destination controller. When providing service discovery updates to proxies, the Destination controller will now filter endpoints based on the service's topology preferences. Additionally, this release includes bug fixes for the `linkerd check` CLI command and web dashboard.

* CLI
  * `linkerd check` will no longer warn about a looser webhook failure policy in HA mode
* Controller
  * Added support for [topology-aware service routing][topology] to the Destination controller (thanks @Matei207)
  * Changed the Destination controller to always return destination overrides for service profiles when no traffic split is present
* Web UI
  * Fixed Tap `Authority` dropdown not being populated (thanks to @tharun208!)

[topology]: https://kubernetes.io/docs/concepts/services-networking/service-topology/

## edge-20.8.2

This edge release adds an internationalization framework to the dashboard, Spanish translations to the dashboard UI, and a `linkerd multicluster uninstall` command for graceful removal of the multicluster components.

* Web UI
  * Added Spanish translations to the dashboard
  * Added a framework and documentation to simplify creation of new translations
* Multicluster
  * Added a multicluster uninstall command
  * Added a warning from `linkerd check --multicluster` if the multicluster support is not installed

## edge-20.8.1

This edge adds multi-arch support to Linkerd! Our Docker images and CLI now support the amd64, arm64, and arm architectures.

* Multicluster
  * Added a multicluster unlink command for removing multicluster links
  * Improved multicluster checks to be more informative when the remote API is not reachable
* Proxy
  * Enabled a multi-threaded runtime to substantially improve latency, especially when the proxy is serving requests for many concurrent connections
* Other
  * Fixed an issue where the debug sidecar image was missing during upgrades (thanks @javaducky!)
  * Updated all control plane and proxy container images to be multi-arch to support amd64, arm64, and arm (thanks @aliariff!)
  * Fixed an issue where check was failing when DisableHeartBeat was set to true (thanks @mvaal!)

## edge-20.7.5

This edge brings a new approach to multicluster service mirror controllers and the way services in target clusters are selected for mirroring. The long-awaited Bring-Your-Own-Prometheus case has finally been addressed. Many other improvements from our great contributors are described below. Also note progress is still being made under the covers for future support for Service Topologies (by @Matei207) and for delivering image builds for multiple platforms (by @aliariff).

* Multicluster
  * Replaced the single `service-mirror` controller with separate controllers that will be installed per target cluster through `linkerd multicluster link`. More info [here](https://github.com/linkerd/linkerd2/pull/4710).
  * Changed the mechanism for mirroring services: instead of relying on annotations on the target services, the source cluster now specifies which services from the target cluster should be exported by using a label selector (a sketch of this selection logic appears below). More info [here](https://github.com/linkerd/linkerd2/pull/4795).
  * Added a new section in the dashboard for exposing multicluster gateway metrics (thanks @tharun208!)
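As an illustration of this label-selector-based export mechanism, the hypothetical Go sketch below checks whether a target-cluster `Service`'s labels match an export selector. The `example.com/export=true` selector and the service labels are placeholders; the real selector is whatever `linkerd multicluster link` is configured with, and this is not the actual service-mirror implementation.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// Placeholder export selector; in practice this comes from the link
	// configuration in the source cluster.
	selector, err := labels.Parse("example.com/export=true")
	if err != nil {
		panic(err)
	}

	// Labels as they might appear on a Service in the target cluster.
	svcLabels := labels.Set{
		"app":                "books",
		"example.com/export": "true",
	}

	if selector.Matches(svcLabels) {
		fmt.Println("service would be mirrored into the source cluster")
	}
}
```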
* Prometheus * Added `global.prometheusUrl` to the Helm config to have linkerd use an external Prometheus instance instead of the one provided by default. * Added ability to declare sidecar containers in the Prometheus Helm config. This allows adding components for cases like exporting logs to services such as Cloudwatch, Stackdriver, Datadog, etc. (thanks @memory!) * Upgraded Prometheus to the latest version (v2.19.3), which should consume substantially less memory, among other benefits. * Other * Fixed bug in `linkerd check` that was failing to wait for Prometheus to be available right after having installed linkerd. * Added ability to set `priorityClassName` for CNI DaemonSet pods, and to install CNI in an existing namespace (both options provided through the CLI and as Helm configs) (thanks @alex-berger!) * Added support for overriding the proxy's inbound and outbound TCP connection timeouts (thanks @mmiller1!) * Added library support for dashboard i18n. Strings still need to be tagged and translations to be added. More info [here](https://github.com/linkerd/linkerd2/pull/4803). * In some Helm charts, replaced the non-standard `linkerd.io/helm-release-version` annotation with `checksum/config` for forcing restarting the component during upgrades (thanks @naseemkullah!) * Upgraded the proxy init-container to v1.3.4, which comes with an updated debian-buster distro and will provide cleaner logs listing the iptables rules applied. ## edge-20.7.4 This edge release adds support for the new Kubernetes [EndpointSlice](https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/) resource to the Destination controller. Using the EndpointSlice API is more efficient for the Kubernetes control plane than using the Endpoints API. If the cluster supports EndpointSlices (a beta feature in Kubernetes 1.17), Linkerd can be installed with `--enable-endpoint-slices` flag to use this resource rather than the Endpoints API. * Added fish shell completions to the `linkerd` command (thanks @WLun001!) * Enabled the support for EndpointSlices (thanks @Matei207!) * Separated Prometheus checks and made them runnable only when the add-on is enabled ## edge-20.7.3 * Add preliminary support for EndpointSlices which will be usable in future releases (thanks @Matei207!) * Internal improvements to the CI process for testing Helm installations ## edge-20.7.2 This edge release moves Linkerd's bundled Prometheus into an add-on. This makes the Linkerd Prometheus more configurable, gives it a separate upgrade lifecycle from the rest of the control plane, and will allow users to disable the bundled Prometheus instance. In addition, this release includes fixes for several issues, including a regression where the proxy would fail to report OpenCensus spans. * Prometheus is now an optional add-on, enabled by default * Custom tolerations can now be specified for control plane resources when installing with Helm (thanks @DesmondH0!) * Evicted data plane pods are no longer considered to be failed by `linkerd check --proxy`, fixing an issue where the check would be retried indefinitely as long as evicted pods are present * Fixed a regression where proxy spans were not reported to OpenCensus * Fixed a bug where the proxy injector would fail to render skipped port lists when installed with Helm * Internal improvements to the proxy for lower latencies under high concurrency * Thanks to @Hellcatlk and @surajssd for adding new unit tests and spelling fixes! 
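For readers curious about what the EndpointSlice support described in the edge-20.7.4 notes above looks like from a client's point of view, here is a hedged Go sketch that lists the EndpointSlices backing a Service. It assumes a recent client-go exposing the `discovery.k8s.io/v1` API, a kubeconfig in the default location, and a hypothetical Service named `web` in the `default` namespace.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// EndpointSlices are tied to their Service via the standard
	// "kubernetes.io/service-name" label.
	slices, err := client.DiscoveryV1().EndpointSlices("default").List(context.Background(), metav1.ListOptions{
		LabelSelector: "kubernetes.io/service-name=web",
	})
	if err != nil {
		panic(err)
	}
	for _, slice := range slices.Items {
		for _, ep := range slice.Endpoints {
			fmt.Println(slice.Name, ep.Addresses)
		}
	}
}
```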
## edge-20.7.1

This edge release features the option to persist Prometheus data to a volume instead of memory, so that historical metrics are available when Prometheus is restarted. Additional changes are outlined in the bullet points below.

* Some commands like `linkerd stat` would fail if any control plane components were unhealthy, even when other replicas were healthy. The check conditions for these commands have been improved
* The Helm chart can now configure persistent storage for Prometheus (thanks @naseemkullah!)
* The proxy log output format can now be configured to `plain` or `json` using the `config.linkerd.io/proxy-log-format` annotation or the `global.proxy.logFormat` value in the Helm chart (thanks again @naseemkullah!)
* `linkerd install --addon-config=` now supports URLs in addition to local files
* The CNI Helm chart used the incorrect variable name to determine the `createdBy` version tag. This is now controlled by `cniPluginVersion` in the Helm chart
* The proxy's default buffer size has been increased, which reduces latency when the proxy has many concurrent clients

## edge-20.6.4

This edge release moves the proxy onto a new version of the Tokio runtime. This allows us to more easily integrate with the ecosystem and may yield performance benefits as well.

* Upgraded the proxy's underlying Tokio runtime and its related libraries
* Added support for PKCS8-formatted ECDSA private keys
* Added support for Helm configuration of per-component proxy resource requests and limits (thanks @cypherfox!)
* Updated the `linkerd inject` command to throw an error when injecting non-compliant pods (thanks @mayankshah1607)

## stable-2.8.1

This release fixes multicluster gateway support on EKS.

* The multicluster service-mirror has been extended to resolve DNS names for target clusters when an IP address is not known.
* Linkerd checks could fail when run from the dashboard. Thanks to @alex-berger for providing a fix!
* Have the service mirror controller check in `linkerd check` retry on failures.
* As of this version, we're including a Chocolatey package (Windows) next to the other binaries in the release assets on GitHub.
* Base images have been updated:
  * debian:buster-20200514-slim
  * grafana/grafana:7.0.3
* The shell scripts under `bin` continued to be improved, thanks to @joakimr-axis!

## edge-20.6.3

This edge release is a release candidate for stable-2.8.1. It includes a fix to support multicluster gateways on EKS.

* The `config.linkerd.io/proxy-destination-get-networks` annotation configures the networks for which a proxy can discover metadata. This is an advanced configuration option that has security implications.
* The multicluster service-mirror has been extended to resolve DNS names for target clusters when an IP address is not known.
* Linkerd checks could fail when run from the dashboard. Thanks to @alex-berger for providing a fix!
* The CLI will be published for Chocolatey (Windows) on future stable releases.
* Base images have been updated:
  * debian:buster-20200514-slim
  * grafana/grafana:7.0.3

## stable-2.8.0

This release introduces a new multicluster extension to Linkerd, allowing it to establish connections across Kubernetes clusters that are secure, transparent to the application, and work with any network topology.

* The CLI has a new set of `linkerd multicluster` sub-commands that provide tooling to create the resources needed to discover services across Kubernetes clusters.
* The `linkerd multicluster gateways` command exposes gateway-specific telemetry to supplement the existing `stat` and `tap` commands. * The Linkerd-provided Grafana instance remains enabled by default, but it can now be disabled. When it is disabled, the Linkerd dashboard can be configured to link to an alternate, externally-managed Grafana instance. * Jaeger & OpenCensus are configurable as an [add-on][addon-2.8.0]; and the proxy has been improved to emit spans with labels that reflect its pod's metadata. * The `linkerd-cni` component has been promoted from _experimental_ to _stable_. * `linkerd profile --open-api` now honors the `x-linkerd-retryable` and `x-linkerd-timeout` OpenAPI annotations. * The Helm chart continues to become more flexible and modular, with new Prometheus configuration options. More information is available in the [Helm chart README][helm-2.8.0]. * gRPC stream error handling has been improved so that transport errors are indicated to the client with a `grpc-status: UNAVAILABLE` trailer. * The proxy's memory footprint could grow significantly when server-speaks-first-protocol connections hit the proxy. Now, a timeout is in place to prevent these connections from consuming resources. * After benchmarking the proxy in high-concurrency situations, the inbound proxy has been improved to reduce contention, improving latency and reducing spurious timeouts. * The proxy could fail requests to services that had only 1 request every 60 seconds. This race condition has been eliminated. * Finally, users reported that ingress misconfigurations could cause the proxy to consume an entire CPU which could lead to timeouts. The proxy now attempts to prevent the most common traffic-loop scenarios to protect against this. _**NOTE**_: Linkerd's `multicluster` extension does not yet work on Amazon EKS. We expect to follow this release with a stable-2.8.1 to address this issue. Follow [#4582](https://github.com/linkerd/linkerd2/pull/4582) for updates. This release includes changes from a massive list of contributors. A special thank-you to everyone who helped make this release possible: @aliariff, @amariampolskiy, @arminbuerkle, @arthursens, @christianhuening, @christyjacob4, @cypherfox, @daxmc99, @dr0pdb, @drholmie, @hydeenoble, @joakimr-axis, @jpresky, @kohsheen1234, @lewiscowper, @lundbird, @matei207, @mayankshah1607, @mmiller1, @naseemkullah, @sannimichaelse, & @supra08. [addon-2.8.0]: https://github.com/linkerd/linkerd2/blob/4219955bdb5441c5fce192328d3760da13fb7ba1/charts/linkerd2/README.md#add-ons-configuration [helm-2.8.0]: https://github.com/linkerd/linkerd2/blob/4219955bdb5441c5fce192328d3760da13fb7ba1/charts/linkerd2/README.md ## edge-20.6.2 This edge release is our second release candidate for `stable-2.8`, including various fixes and improvements around multicluster support. * CLI * Fixed bad output in the `linkerd multicluster gateways` command * Improved the error returned when running the CLI with no KUBECONFIG path set (thanks @Matei207!) 
* Controller * Fixed issue where mirror service wasn't created when paired to a gateway whose external IP wasn't yet provided * Fixed issue where updating the gateway identity annotation wasn't propagated back into the mirror gateway endpoints object * Fixed issue where updating the gateway ports wasn't reflected in the gateway mirror service * Increased the log level for some of the service mirror events * Changed the nginx gateway config so that it runs as non-root and denies all requests to locations other than the probe path * Web UI * Fixed multicluster Grafana dashboard * Internal * Added flag in integration tests to dump fixture diffs into a separate directory (thanks @cypherfox!) ## edge-20.6.1 This edge release is a release candidate for `stable-2.8`! It introduces several improvements and fixes for multicluster support. * CLI * Added multicluster daisy chain checks to `linkerd check` * Added list of successful gateways in multicluster checks section of `linkerd check` * Controller * Renamed `nginx-configuration` ConfigMap to `linkerd-gateway-config` (please manually remove the former if upgrading from an earlier multicluster install, thanks @mayankshah1607!) * Renamed multicluster gateway ports to `mc-gateway` and `mc-probe` * Fixed Service Profiles routes for `linkerd-prometheus` * Internal * Fixed shellcheck errors in all `bin/` scripts (thanks @joakimr-axis!) * Helm * Added support for `linkerd mc allow` * Added ability to disable secret resources for self-signed certs (thanks @cypherfox!) * Proxy * Modified the `linkerd-gateway` component to use the inbound proxy, rather than nginx, for gateway; this allows Linkerd to detect loops and propagate identity ## edge-20.5.5 This edge release adds refinements to the Linkerd multicluster implementation, adds new health checks for the tracing add-on, and addresses an issue in which outbound requests from the proxy result in looping behavior. * CLI * Added the `multicluster` command along with subcommands to configure and deploy Linkerd workloads which enable services to be mirrored across clusters * Added health-checks for tracing add-on * Proxy * Added logic to prevent loops in outbound requests ## edge-20.5.4 * CLI * Fixed the display of the meshed pod column for non-selector services in `linkerd stat` output * Added an `addon-overwrite` upgrade flag which allows users to overwrite the existing addon config rather than merging into it * Added a `--close-wait-timeout` inject flag which sets the `nf_conntrack_tcp_timeout_close_wait` property which can be used to mitigate connection issues with application that hold half-closed sockets * Controller * Restricted the service-mirror's RBAC permissions so that it no longer is able to read secrets in all namespaces * Moved many multicluster components into the `linkerd-multicluster` namespace by default * Added multicluster gateway mirror services to allow multicluster liveness probes to work in private networks * Fixed an issue where multicluster gateway mirror services could be incorrectly deleted during a resync * Internal * Fixed many style issues in build scripts (thanks @joakimr-axis!) 
* Helm
  * Added a `global.grafanaUrl` variable to allow using an existing Grafana installation

## edge-20.5.3

* Controller
  * Added a Grafana dashboard for tracking multi-cluster traffic metrics
  * Added health checks for the Grafana add-on, under a separate section
  * Fixed issues when updating a remote multi-cluster gateway
* Proxy
  * Added special handling for I/O errors in HTTP responses so that an `errno` label is included to describe the underlying errors in the proxy's metrics
* Internal
  * Started gathering stats of CI runs for aggregating CI health metrics

## edge-20.5.2

This edge release contains everything required to get up and running with multicluster. For a tutorial on how to do that, check out the [documentation](https://linkerd.io/2/features/multicluster_support/).

* CLI
  * Added a section to `linkerd check` that validates that all clusters that are part of a multicluster setup have compatible trust anchors
  * Modified the `linkerd cluster export-service` command to work by transforming YAML instead of modifying cluster state
  * Added functionality that allows the `linkerd cluster export-service` command to operate on lists of services
* Controller
  * Changed the multicluster gateway to always require TLS on connections originating from outside the cluster
  * Removed admin server timeouts from control plane components, thereby fixing a bug that can cause liveness checks to fail
* Helm
  * Moved Grafana templates into a separate add-on chart
* Proxy
  * Improved latency under high-concurrency use cases

## edge-20.5.1

* CLI
  * Fixed all commands to use the kubeconfig's default namespace if specified (thanks @Matei207!)
  * Added multicluster checks to the `linkerd check` command
  * Hid development flags in the `linkerd install` command for release builds
* Controller
  * Added the ability to configure Prometheus Alertmanager as well as recording and alerting rules on the Linkerd Prometheus (thanks @naseemkullah!)
  * Added the ability to add more command-line flags to the Prometheus command (thanks @naseemkullah!)
* Web UI
  * Fixed the TrafficSplit detail page not loading
  * Added Jaeger links to the dashboard when the tracing add-on is enabled
* Proxy
  * Modified internal buffering to avoid idling out services as a request arrives, fixing failures for requests that are sent exactly once per minute, such as Prometheus scrapes

## edge-20.4.5

This edge release includes several new CLI commands for use with multi-cluster gateways, and adds liveness checks and metrics for gateways. Additionally, it makes the proxy's gRPC error-handling behavior more consistent with other implementations, and includes a fix for a bug in the web UI.

* CLI
  * Added the `linkerd cluster setup-remote` command for setting up a multi-cluster gateway
  * Added the `linkerd cluster gateways` command to display stats for multi-cluster gateways
  * Changed `linkerd cluster export-service` to modify a provided YAML file and output it, rather than mutating the cluster
* Controller
  * Added liveness checks and Prometheus metrics for multi-cluster gateways
  * Changed the proxy injector to configure proxies to do destination lookups for IPs in the private IP range
* Web UI
  * Fixed errors when viewing resource detail pages
* Internal
  * Created a script and config to build a Linkerd CLI Chocolatey package for Windows users, which will be published with stable releases (thanks to @drholmie!)
* Proxy
  * Changed the proxy to set a `grpc-status: UNAVAILABLE` trailer when a gRPC response stream is interrupted by a transport error

## edge-20.4.4

This edge release fixes a packaging issue in `edge-20.4.3`.

_From the `edge-20.4.3` release notes_:

This edge release adds functionality to the CLI to output more detail and includes changes which support the multi-cluster functionality. Also, Helm support has been expanded to make installation more configurable. Finally, the HA reliability is improved by ensuring that control plane pods are restarted with a rolling strategy.

* CLI
  * Added output to the `linkerd check --proxy` command to list all data plane pods which are not up-to-date, rather than just printing the first one it encounters
  * Added a `--proxy` flag to the `linkerd version` command which lists all proxy versions running in the cluster and the number of pods running each version
  * Lifted the requirement of using `--unmeshed` for `linkerd stat` when querying TrafficSplit resources
  * Added support for multi-stage installs with Add-Ons
* Controller
  * Added a rolling update strategy to Linkerd deployments that have multiple replicas during HA deployments to ensure that at most one pod begins terminating before a new pod is ready
  * Added a new label for the proxy injector to write to the template, `linkerd.io/workload-ns`, which indicates the namespace of the workload/pod
* Internal
  * Added a [security policy](https://help.github.com/en/github/managing-security-vulnerabilities/adding-a-security-policy-to-your-repository) to facilitate conversations around security
* Helm
  * Changed charts to use the downwardAPI to mount labels to the proxy container, making them easier to identify
* Proxy
  * Changed the Linkerd proxy endpoint for liveness to use the new `/live` admin endpoint instead of the `/metrics` endpoint, because the `/live` endpoint returns a smaller payload
  * Added a per-endpoint authority-override feature to support multi-cluster gateways

## edge-20.4.3

**This release is superseded by `edge-20.4.4`**

This edge release adds functionality to the CLI to output more detail and includes changes which support the multi-cluster functionality. Also, Helm support has been expanded to make installation more configurable.
Finally, the HA reliability is improved by ensuring that control plane pods are restarted with a rolling strategy.

* CLI
  * Added output to the `linkerd check --proxy` command to list all data plane pods which are not up-to-date, rather than just printing the first one it encounters
  * Added a `--proxy` flag to the `linkerd version` command which lists all proxy versions running in the cluster and the number of pods running each version
  * Lifted the requirement of using `--unmeshed` for `linkerd stat` when querying TrafficSplit resources
  * Added support for multi-stage installs with Add-Ons
* Controller
  * Added a rolling update strategy to Linkerd deployments that have multiple replicas during HA deployments to ensure that at most one pod begins terminating before a new pod is ready
  * Added a new label for the proxy injector to write to the template, `linkerd.io/workload-ns`, which indicates the namespace of the workload/pod
* Internal
  * Added a [security policy](https://help.github.com/en/github/managing-security-vulnerabilities/adding-a-security-policy-to-your-repository) to facilitate conversations around security
* Helm
  * Changed charts to use the downwardAPI to mount labels to the proxy container, making them easier to identify
* Proxy
  * Changed the Linkerd proxy endpoint for liveness to use the new `/live` admin endpoint instead of the `/metrics` endpoint, because the `/live` endpoint returns a smaller payload
  * Added a per-endpoint authority-override feature to support multi-cluster gateways

## edge-20.4.2

This release brings a number of CLI fixes and Controller improvements.

* CLI
  * Fixed a bug that caused pods to crash after upgrade if `--skip-outbound-ports` or `--skip-inbound-ports` were used
  * Added an `--unmeshed` flag to the `stat` command, such that unmeshed resources are only displayed if the user opts in
  * Added a `--smi-metrics` flag to `install`, to allow installation of the experimental `linkerd-smi-metrics` component
  * Fixed a bug in `linkerd stat` causing incorrect output formatting when using the `--o wide` flag
  * Fixed a bug causing `linkerd uninstall` to fail when attempting to delete PSPs
* Controller
  * Improved the anti-affinity of the `linkerd-smi-metrics` deployment to avoid pod scheduling problems during `upgrade`
  * Improved endpoints change detection in the `linkerd-destination` service, enabling mirrored remote services to change cluster gateways
  * Added an `operationID` field to the tap OpenAPI response to prevent issues during upgrade from 2.6 to 2.7
* Proxy
  * Added a new protocol detection timeout to prevent clients from consuming resources indefinitely when not sending any data

## edge-20.4.1

This release introduces some cool new functionality, all provided by our awesome community of contributors! Also, two bugs introduced since edge-20.3.2 were fixed.

* CLI
  * Added the `linkerd uninstall` command to uninstall the control plane (thanks @Matei207!)
  * Fixed a bug causing `linkerd routes -o wide` to not show the proper actual success rate
* Controller
  * Fail proxy injection if the pod spec has `automountServiceAccountToken` disabled (thanks @mayankshah1607!)
* Web UI
  * Added a route dashboard to Grafana (thanks @lundbird!)
* Proxy
  * Fixed a bug causing the proxy's inbound to spuriously return 503 timeouts

## edge-20.3.4

This release introduces several fixes and improvements to the CLI.

* CLI
  * Added support for kubectl-style label selectors in many CLI commands (thanks @mayankshah1607!)
  * Fixed the path regex in service profiles generated from proto files without a package name (thanks @amariampolskiy!)
  * Fixed an error when injecting CronJobs that have no metadata
  * Relaxed the clock skew check to match the default node heartbeat interval on Kubernetes 1.17 and made this check a warning
  * Fixed a bug where the linkerd-smi-metrics pod could not be created on clusters with pod security policy enabled
* Internal
  * Upgraded tracing components to more recent versions and improved resource defaults (thanks @Pothulapati!)

## edge-20.3.3

This release introduces new experimental CLI commands for querying metrics using the Service Mesh Interface (SMI) and for multi-cluster support via service mirroring. If you would like to learn more about service mirroring or SMI, or are interested in experimenting with these features, please join us in [Linkerd Slack](https://slack.linkerd.io) for help and feedback.

* CLI
  * Added experimental `linkerd cluster` commands for managing multi-cluster service mirroring
  * Added the experimental `linkerd alpha clients` command, which uses the smi-metrics API to display client-side metrics from each of a resource's clients
  * Added retries to some `linkerd check` checks to prevent spurious failures when run immediately after cluster creation or Linkerd installation

## edge-20.3.2

This release introduces substantial proxy improvements as well as new observability and security functionality.

* CLI
  * Added the `linkerd alpha stat` command, which uses the smi-metrics API; the latter enables access to metrics to be controlled with RBAC
* Controller
  * Added support for configuring service profile timeouts (`x-linkerd-timeout`) via OpenAPI spec (thanks @lewiscowper!)
* Web UI
  * Improved the Grafana dashboards to use a globbing operator for Prometheus in order to avoid producing queries that are too large (thanks @mmiller1!)
* Helm
  * Improved the `linkerd2` chart README (thanks @lundbird!)
* Proxy
  * Fixed a bug that could cause log levels to be processed incorrectly

## edge-20.3.1

This release introduces new functionality mainly focused around observability and multi-cluster support via service mirroring. If you would like to learn more about service mirroring or are interested in experimenting with this feature, please join us in [Linkerd Slack](https://slack.linkerd.io) for help and feedback.

* CLI
  * Improved the `linkerd check` command to check for the extension server certificate (thanks @christyjacob4!)
* Controller
  * Removed restrictions preventing Linkerd from injecting proxies into Contour (thanks @alfatraining!)
  * Added an experimental version of a service mirroring controller, allowing discovery of services on remote clusters
* Web UI
  * Fixed a bug causing incorrect Grafana links to be rendered in the web dashboard
* Proxy
  * Fixed a bug that could cause the proxy's load balancer to stop processing updates from service discovery

## edge-20.2.3

This release introduces the first optional add-on, `tracing`, added through the new add-on model! The existing optional tracing components, Jaeger and OpenCensus, can now be installed as add-on components. There will be more information to come about the new add-on model, but please refer to the details of [#3955](https://github.com/linkerd/linkerd2/pull/3955) for how to get started.

* CLI
  * Added the `linkerd diagnostics` command to get metrics only from the control plane, excluding metrics from the data plane proxies (thanks @srv-twry!)
* Added the `linkerd install --prometheus-image` option for installing a custom Prometheus image (thanks @christyjacob4!) * Fixed an issue with `linkerd upgrade` where changes to the `Namespace` object were ignored (thanks @supra08!) * Controller * Added the `tracing` add-on, which installs Jaeger and OpenCensus as add-on components (thanks @Pothulapati!) * Proxy * Increased the inbound router's default capacity from 100 to 10k to accommodate environments that have a high cardinality of virtual hosts served by a single pod * Web UI * Fixed styling in the CallToAction banner (thanks @aliariff!) ## edge-20.2.2 This release includes the results from continued profiling & performance analysis on the Linkerd proxy. In addition to modifying internals to prevent unwarranted memory growth, new metrics were introduced to aid in debugging and diagnostics. Also, Linkerd's CNI plugin is no longer experimental; check out the docs! * CLI * Added support for label selectors in the `linkerd stat` command (thanks @mayankshah1607!) * Added scrolling functionality to the `linkerd top` output (thanks @kohsheen1234!) * Fixed a bug in `linkerd metrics` that was causing a panic when port-forwarding failed (thanks @mayankshah1607!) * Added a check to `linkerd check` verifying the number of replicas for Linkerd components in HA (thanks @mayankshah1607!) * Unified trust anchors terminology across the CLI commands * Removed some messages from `linkerd upgrade`'s output that are no longer relevant (thanks @supra08!) * Controller * Added support for configuring service profile retries `(x-linkerd-retryable)` via OpenAPI spec (thanks @kohsheen1234!) * Improved traffic split metrics so sources in all namespaces are shown, not just traffic from the traffic split's own namespace * Improved linkerd-identity's logs and events to help diagnose certificate validation issues (thanks @mayankshah1607!) * Proxy * Added a `request_errors_total` metric exposing the number of requests that receive synthesized responses due to proxy errors * Helm * Added a new `enforcedHostRegexp` variable to allow configuring the linkerd-web component's enforced host (that was previously introduced to protect against DNS rebinding attacks) (thanks @sannimichaelse!) * Internal * Removed various ESLint warnings from the dashboard code (thanks @christyjacob4 and @kohsheen1234!) * Fixed Go module file syntax (thanks @daxmc99!) ## stable-2.7.0 This release adds support for integrating Linkerd's PKI with an external certificate issuer such as [`cert-manager`], as well as streamlining the certificate rotation process in general. For more details about cert-manager and certificate rotation, see the [docs](https://linkerd.io/2/tasks/use_external_certs/). This release also includes performance improvements to the dashboard, reduced memory usage of the proxy, various improvements to the Helm chart, and much, much more. To install this release, run: `curl https://run.linkerd.io/install | sh` **Upgrade notes**: This release includes breaking changes to our Helm charts. Please see the [upgrade instructions](https://linkerd.io/2/tasks/upgrade/#upgrade-notice-stable-270). **Special thanks to**: @alenkacz, @bmcstdio, @daxmc99, @droidnoob, @ereslibre, @javaducky, @joakimr-axis, @JohannesEH, @KIVagant, @mayankshah1607, @Pothulapati, and @StupidScience!
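For example, the external issuer integration highlighted in these notes can be exercised roughly as follows (a minimal sketch, not taken verbatim from the docs: it assumes cert-manager or another issuer has already populated an issuer secret in the `linkerd` namespace, and that `kubectl` is pointed at the target cluster):

```bash
# Install the control plane using externally issued certificates
# (--identity-external-issuer is the flag introduced in this release).
linkerd install --identity-external-issuer | kubectl apply -f -

# Confirm the identity-related checks pass afterwards.
linkerd check
```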
**Full release notes**: * CLI * Updated the mTLS trust anchor checks to eliminate false positives caused by extra trailing spaces * Reduced the severity level of the Linkerd version checks, so that they don't fail when the external version endpoint is unreachable (thanks @mayankshah1607!) * Added a new `tap` APIService check to aid with uncovering Kubernetes API aggregation layer issues (thanks @droidnoob!) * Introduced CNI checks to confirm the CNI plugin is installed and ready; this is done through `linkerd check --pre --linkerd-cni-enabled` before installation and `linkerd check` after installation if the CNI plugin is present * Added support for the `--as-group` flag so that users can impersonate groups for Kubernetes operations (thanks @mayankshah1607!) * Added HA specific checks to `linkerd check` to ensure that the `kube-system` namespace has the `config.linkerd.io/admission-webhooks:disabled` label set * Fixed a problem causing the presence of unnecessary empty fields in generated resource definitions (thanks @mayankshah1607) * Added the ability to pass both port numbers and port ranges to `--skip-inbound-ports` and `--skip-outbound-ports` (thanks to @javaducky!) * Increased the comprehensiveness of `linkerd check --pre` * Added TLS certificate validation to `check` and `upgrade` commands * Added support for injecting CronJobs and ReplicaSets, as well as the ability to use them as targets in the CLI subcommands * Introduced the new flags `--identity-issuer-certificate-file`, `--identity-issuer-key-file` and `identity-trust-anchors-file` to `linkerd upgrade` to support trust anchor and issuer certificate rotation * Added a check that ensures using `--namespace` and `--all-namespaces` results in an error as they are mutually exclusive * Added a `Dashboard.Replicas` parameter to the Linkerd Helm chart to allow configuring the number of dashboard replicas (thanks @KIVagant!) * Removed redundant service profile check (thanks @alenkacz!) * Updated `uninject` command to work with namespace resources (thanks @mayankshah1607!) * Added a new `--identity-external-issuer` flag to `linkerd install` that configures Linkerd to use certificates issued by an external certificate issuer (such as `cert-manager`) * Added support for injecting a namespace to `linkerd inject` (thanks @mayankshah1607!) * Added checks to `linkerd check --preinstall` ensuring Kubernetes Secrets can be created and accessed * Fixed `linkerd tap` sometimes displaying incorrect pod names for unmeshed IPs that match multiple running pods * Made `linkerd install --ignore-cluster` and `--skip-checks` faster * Fixed a bug causing `linkerd upgrade` to fail when used with `--from-manifest` * Made `--cluster-domain` an install-only flag (thanks @bmcstdio!) * Updated `check` to ensure that proxy trust anchors match configuration (thanks @ereslibre!) * Added condition to the `linkerd stat` command that requires a window size of at least 15 seconds to work properly with Prometheus * Controller * Fixed an issue where an override of the Docker registry was not being applied to debug containers (thanks @javaducky!) * Added check for the Subject Alternate Name attributes to the API server when access restrictions have been enabled (thanks @javaducky!) * Added support for arbitrary pod labels so that users can leverage the Linkerd provided Prometheus instance to scrape for their own labels (thanks @daxmc99!) 
* Fixed an issue with CNI config parsing * Fixed a race condition in the `linkerd-web` service * Updated Prometheus to 2.15.2 (thanks @Pothulapati) * Increased minimum kubernetes version to 1.13.0 * Added support for pod ip and service cluster ip lookups in the destination service * Added recommended kubernetes labels to control-plane * Added the `--wait-before-exit-seconds` flag to linkerd inject for the proxy sidecar to delay the start of its shutdown process (a huge commit from @KIVagant, thanks!) * Added a pre-sign check to the identity service * Fixed inject failures for pods with security context capabilities * Added `conntrack` to the `debug` container to help with connection tracking debugging * Fixed a bug in `tap` where a mismatched cluster domain and trust domain caused `tap` to hang * Fixed an issue in the `identity` RBAC resource which caused startup errors in k8s 1.6 (thanks @Pothulapati!) * Added support for using trust anchors from an external certificate issuer (such as `cert-manager`) to the `linkerd-identity` service * Added support for headless services (thanks @JohannesEH!) * Helm * **Breaking change**: Renamed `noInitContainer` parameter to `cniEnabled` * **Breaking Change** Updated Helm charts to follow best practices (thanks @Pothulapati and @javaducky!) * Fixed an issue with `helm install` where the lists of ignored inbound and outbound ports would not be reflected * Fixed the `linkerd-cni` Helm chart not setting proper namespace annotations and labels * Fixed certificate issuance lifetime not being set when installing through Helm * Updated the helm build to retain previous releases * Moved CNI template into its own Helm chart * Proxy * Fixed an issue that could cause the OpenCensus exporter to stall * Improved error classification and error responses for gRPC services * Fixed a bug where the proxy could stop receiving service discovery updates, resulting in 503 errors * Improved debug/error logging to include detailed contextual information * Fixed a bug in the proxy's logging subsystem that could cause the proxy to consume memory until the process is OOM killed, especially when the proxy was configured to log diagnostic information * Updated proxy dependencies to address RUSTSEC-2019-0033, RUSTSEC-2019-0034, and RUSTSEC-2020-02 * Web UI * Fixed an error when refreshing an already open dashboard when the Linkerd version has changed * Increased the speed of the dashboard by pausing network activity when the dashboard is not visible to the user * Added support for CronJobs and ReplicaSets, including new Grafana dashboards for them * Added `linkerd check` to the dashboard in the `/controlplane` view * Added request and response headers to the `tap` expanded view in the dashboard * Added filter to namespace select button * Improved how empty tables are displayed * Added `Host:` header validation to the `linkerd-web` service, to protect against DNS rebinding attacks * Made the dashboard sidebar component responsive * Changed the navigation bar color to the one used on the [Linkerd](https://linkerd.io/) website * Internal * Added validation to incoming sidecar injection requests that ensures the value of `linkerd.io/inject` is either `enabled` or `disabled` (thanks @mayankshah1607) * Upgraded the Prometheus Go client library to v1.2.1 (thanks @daxmc99!)
* Fixed an issue causing `tap`, `injector` and `sp-validator` to use old certificates after `helm upgrade` due to not being restarted * Fixed incomplete Swagger definition of the tap API, causing benign error logging in the kube-apiserver * Removed the destination container from the linkerd-controller deployment as it now runs in the linkerd-destination deployment * Allowed the control plane to be injected with the `debug` container * Updated proxy image build script to support HTTP proxy options (thanks @joakimr-axis!) * Updated the CLI `doc` command to auto-generate documentation for the proxy configuration annotations (thanks @StupidScience!) * Added new `--trace-collector` and `--trace-collector-svc-account` flags to `linkerd inject` that configure the OpenCensus trace collector used by proxies in the injected workload (thanks @Pothulapati!) * Added a new `--control-plane-tracing` flag to `linkerd install` that enables distributed tracing in the control plane (thanks @Pothulapati!) * Added distributed tracing support to the control plane (thanks @Pothulapati!) ## edge-20.2.1 This edge release is a release candidate for `stable-2.7` and fixes an issue where the proxy could consume inappropriate amounts of memory. * Proxy * Fixed a bug in the proxy's logging subsystem that could cause the proxy to consume memory until the process is OOM killed, especially when the proxy was configured to log diagnostic information * Fixed `grpc-status` headers not being properly emitted when signaling proxy errors to gRPC clients * Updated certain proxy dependencies to address RUSTSEC-2019-0033, RUSTSEC-2019-0034, and RUSTSEC-2020-02 ## edge-20.1.4 This edge release is a release candidate for `stable-2.7`. The `linkerd check` command has been updated to improve the control plane debugging experience. * CLI * Updated the mTLS trust anchor checks to eliminate false positives caused by extra trailing spaces * Reduced the severity level of the Linkerd version checks, so that they don't fail when the external version endpoint is unreachable (thanks @mayankshah1607!) * Added a new `tap` APIService check to aid with uncovering Kubernetes API aggregation layer issues (thanks @droidnoob!) ## edge-20.1.3 This edge release is a release candidate for `stable-2.7`. An update to the Helm charts has caused a **breaking change** for users who have installed Linkerd using Helm. In order to make the purpose of the `noInitContainer` parameter more explicit, it has been renamed to `cniEnabled`. * CLI * Introduced CNI checks to confirm the CNI plugin is installed and ready; this is done through `linkerd check --pre --linkerd-cni-enabled` before installation and `linkerd check` after installation if the CNI plugin is present * Added support for the `--as-group` flag so that users can impersonate groups for Kubernetes operations (thanks @mayankshah1607!) * Controller * Fixed an issue where an override of the Docker registry was not being applied to debug containers (thanks @javaducky!) * Added check for the Subject Alternate Name attributes to the API server when access restrictions have been enabled (thanks @javaducky!) * Added support for arbitrary pod labels so that users can leverage the Linkerd provided Prometheus instance to scrape for their own labels (thanks @daxmc99!)
* Fixed an issue with CNI config parsing * Helm * **Breaking change**: Renamed `noInitContainer` parameter to `cniEnabled` * Fixed an issue with `helm install` where the lists of ignored inbound and outbound ports would not be reflected ## edge-20.1.2 * CLI * Added HA specific checks to `linkerd check` to ensure that the `kube-system` namespace has the `config.linkerd.io/admission-webhooks:disabled` label set * Fixed a problem causing the presence of unnecessary empty fields in generated resource definitions (thanks @mayankshah1607) * Proxy * Fixed an issue that could cause the OpenCensus exporter to stall * Internal * Added validation to incoming sidecar injection requests that ensures the value of `linkerd.io/inject` is either `enabled` or `disabled` (thanks @mayankshah1607) ## edge-20.1.1 This edge release includes experimental improvements to the Linkerd proxy's request buffering and backpressure infrastructure. Additionally, we've fixed several bugs when installing Linkerd with Helm, updated the CLI to allow using both port numbers _and_ port ranges with the `--skip-inbound-ports` and `--skip-outbound-ports` flags, and fixed a dashboard error that can occur if the dashboard is open in a browser while updating Linkerd. **Note**: The `linkerd-proxy` version included with this release is more experimental than usual. We'd love your help testing, but be aware that there might be stability issues. * CLI * Added the ability to pass both port numbers and port ranges to `--skip-inbound-ports` and `--skip-outbound-ports` (thanks to @javaducky!) * Controller * Fixed a race condition in the `linkerd-web` service * Updated Prometheus to 2.15.2 (thanks @Pothulapati) * Web UI * Fixed an error when refreshing an already open dashboard when the Linkerd version has changed * Proxy * Internal changes to the proxy's request buffering and backpressure infrastructure * Helm * Fixed the `linkerd-cni` Helm chart not setting proper namespace annotations and labels * Fixed certificate issuance lifetime not being set when installing through Helm * More improvements to Helm best practices (thanks to @Pothulapati!) ## edge-19.12.3 This edge release adds support for pod IP and service cluster IP lookups, improves performance of the dashboard, and makes `linkerd check --pre` perform more comprehensive checks. The `--wait-before-exit-seconds` flag has been added to allow Linkerd users to opt in to `preStop hooks`. The details of this change are in [#3798](https://github.com/linkerd/linkerd2/pull/3798). Also, the proxy has been updated to `v2.82.0` which improves gRPC error classification and [ensures that resolutions](https://github.com/linkerd/linkerd2/pull/3848) are released when the associated balancer becomes idle. Finally, an update to follow best practices in the Helm charts has caused a _breaking change_. Users who have installed Linkerd using Helm must be certain to read the details of [#3822](https://github.com/linkerd/linkerd2/issues/3822) * CLI * Increased the comprehensiveness of `linkerd check --pre` * Added TLS certificate validation to `check` and `upgrade` commands * Controller * Increased minimum kubernetes version to 1.13.0 * Added support for pod ip and service cluster ip lookups in the destination service * Added recommended kubernetes labels to control-plane * Added the `--wait-before-exit-seconds` flag to linkerd inject for the proxy sidecar to delay the start of its shutdown process (a huge commit from @KIVagant, thanks!) 
* Added a pre-sign check to the identity service * Web UI * Increased the speed of the dashboard by pausing network activity when the dashboard is not visible to the user * Proxy * Added a timeout to release resolutions to idle balancers * Improved error classification for gRPC services * Internal * **Breaking Change** Updated Helm charts to follow best practices using proper casing (thanks @Pothulapati!) ## edge-19.12.2 * CLI * Added support for injecting CronJobs and ReplicaSets, as well as the ability to use them as targets in the CLI subcommands * Introduced the new flags `--identity-issuer-certificate-file`, `--identity-issuer-key-file` and `identity-trust-anchors-file` to `linkerd upgrade` to support trust anchor and issuer certificate rotation * Controller * Fixed inject failures for pods with security context capabilities * Web UI * Added support for CronJobs and ReplicaSets, including new Grafana dashboards for them * Proxy * Fixed a bug where the proxy could stop receiving service discovery updates, resulting in 503 errors * Internal * Moved CNI template into a Helm chart to prepare for future publication * Upgraded the Prometheus Go client library to v1.2.1 (thanks @daxmc99!) * Re-enabled certificate rotation integration tests ## edge-19.12.1 * CLI * Added condition to the `linkerd stat` command that requires a window size of at least 15 seconds to work properly with Prometheus * Internal * Fixed whitespace path handling in non-docker build scripts (thanks @joakimr-axis!) * Removed Calico logutils dependency that was incompatible with Go 1.13 * Updated Helm templates to use fully-qualified variable references based upon Helm best practices (thanks @javaducky!) ## edge-19.11.3 * CLI * Added a check that ensures using `--namespace` and `--all-namespaces` results in an error as they are mutually exclusive * Internal * Fixed an issue causing `tap`, `injector` and `sp-validator` to use old certificates after `helm upgrade` due to not being restarted * Fixed incomplete Swagger definition of the tap API, causing benign error logging in the kube-apiserver ## edge-19.11.2 * CLI * Added a `Dashboard.Replicas` parameter to the Linkerd Helm chart to allow configuring the number of dashboard replicas (thanks @KIVagant!) * Removed redundant service profile check (thanks @alenkacz!) * Web UI * Added `linkerd check` to the dashboard in the `/controlplane` view * Added request and response headers to the `tap` expanded view in the dashboard * Internal * Removed the destination container from the linkerd-controller deployment as it now runs in the linkerd-destination deployment * Upgraded Go to version 1.13.4 ## edge-19.11.1 * CLI * Updated `uninject` command to work with namespace resources (thanks @mayankshah1607!) * Controller * Added `conntrack` to the `debug` container to help with connection tracking debugging * Fixed a bug in `tap` where a mismatched cluster domain and trust domain caused `tap` to hang * Fixed an issue in the `identity` RBAC resource which caused startup errors in k8s 1.6 (thanks @Pothulapati!) * Proxy * Improved debug/error logging to include detailed contextual information * Web UI * Added filter to namespace select button * Improved how empty tables are displayed * Internal * Added integration test for custom cluster domain * Allowed the control plane to be injected with the `debug` container * Updated proxy image build script to support HTTP proxy options (thanks @joakimr-axis!)
* Updated the CLI `doc` command to auto-generate documentation for the proxy configuration annotations (thanks @StupidScience!) ## edge-19.10.5 This edge release adds support for integrating Linkerd's PKI with an external certificate issuer such as [`cert-manager`], adds distributed tracing support to the Linkerd control plane, and adds protection against DNS rebinding attacks to the web dashboard. In addition, it includes several improvements to the Linkerd CLI. * CLI * Added a new `--identity-external-issuer` flag to `linkerd install` that configures Linkerd to use certificates issued by an external certificate issuer (such as `cert-manager`) * Added support for injecting a namespace to `linkerd inject` (thanks @mayankshah1607!) * Added checks to `linkerd check --preinstall` ensuring Kubernetes Secrets can be created and accessed * Fixed `linkerd tap` sometimes displaying incorrect pod names for unmeshed IPs that match multiple running pods * Controller * Added support for using trust anchors from an external certificate issuer (such as `cert-manager`) to the `linkerd-identity` service * Web UI * Added `Host:` header validation to the `linkerd-web` service, to protect against DNS rebinding attacks * Internal * Added new `--trace-collector` and `--trace-collector-svc-account` flags to `linkerd inject` that configures the OpenCensus trace collector used by proxies in the injected workload (thanks @Pothulapati!) * Added a new `--control-plane-tracing` flag to `linkerd install` that enables distributed tracing in the control plane (thanks @Pothulapati!) * Added distributed tracing support to the control plane (thanks @Pothulapati!) Also, thanks to @joakimr-axis for several fixes and improvements to internal build scripts! [`cert-manager`]: https://github.com/jetstack/cert-manager ## edge-19.10.4 This edge release adds dashboard UX enhancements, and improves the speed of the CLI. * CLI * Made `linkerd install --ignore-cluster` and `--skip-checks` faster * Fixed a bug causing `linkerd upgrade` to fail when used with `--from-manifest` * Web UI * Made the dashboard sidebar component responsive * Changed the navigation bar color to the one used on the [Linkerd](https://linkerd.io/) website ## edge-19.10.3 This edge release adds support for headless services, improves the upgrade process after installing Linkerd with a custom cluster domain, and enhances the `check` functionality to report invalid trust anchors. * CLI * Made `--cluster-domain` an install-only flag (thanks @bmcstdio!) * Updated `check` to ensure that proxy trust anchors match configuration (thanks @ereslibre!) * Controller * Added support for headless services (thanks @JohannesEH!) * Helm * Updated the helm build to retain previous releases ## stable-2.6.0 This release introduces distributed tracing support, adds request and response headers to `linkerd tap`, dramatically improves the performance of the dashboard on large clusters, adds traffic split visualizations to the dashboard, adds a public Helm repo, and many more improvements! For more details, see the announcement blog post: To install this release, run: `curl https://run.linkerd.io/install | sh` **Upgrade notes**: Please see the [upgrade instructions](https://linkerd.io/2/tasks/upgrade/#upgrade-notice-stable-2-6-0). **Special thanks to**: @alenkacz, @arminbuerkle, @bmcstdio, @bourquep, @brianstorti, @kevtaylor, @KIVagant, @pierDipi, and @Pothulapati! 
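As a quick illustration of two items from this release, the new public Helm repo and the `json` tap output can be used roughly like this (a sketch only: the stable repo URL is taken from the Helm documentation linked below rather than these notes, and `emojivoto`/`web` are placeholder names):

```bash
# Add the public Helm repo introduced in this release (URL assumed).
helm repo add linkerd https://helm.linkerd.io/stable
helm repo update

# Tap a workload and emit JSON output, which now includes request and
# response headers (the -o json flag spelling is assumed).
linkerd tap deploy/web -n emojivoto -o json
```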
**Full release notes**: * CLI * Added a new `json` output option to the `linkerd tap` command, which exposes request and response headers * Added a public Helm repo - for full installation instructions, see our [Helm documentation](https://linkerd.io/2/tasks/install-helm/). * Added an `--address` flag to `linkerd dashboard`, allowing users to specify a port-forwarding address (thanks @bmcstdio!) * Added node selector constraints to Helm installation, so users can control which nodes the control plane is deployed to (thanks @bmcstdio!) * Added a `--cluster-domain` flag to the `linkerd install` command that allows setting a custom cluster domain (thanks @arminbuerkle!) * Added a `--disable-heartbeat` flag for `linkerd install | upgrade` commands * Allowed disabling namespace creation when installing Linkerd using Helm (thanks @KIVagant!) * Improved the error message when the CLI cannot connect to Kubernetes (thanks @alenkacz!) * Controller * Updated the Prometheus config to keep only needed `cadvisor` metrics, substantially reducing the number of time-series stored in most clusters * Introduced `config.linkerd.io/trace-collector` and `config.alpha.linkerd.io/trace-collector-service-account` pod spec annotations to support per-pod tracing * Instrumented the proxy injector to provide additional metrics about injection (thanks @Pothulapati!) * Added Kubernetes events (and log lines) when the proxy injector injects a deployment, and when injection is skipped * Fixed a workload admission error between the Kubernetes apiserver and the HA proxy injector, by allowing workloads in a namespace to be omitted from the admission webhooks phase using the `config.linkerd.io/admission-webhooks: disabled` label (thanks @hasheddan!) * Fixed proxy injector timeout during a large number of concurrent injections * Added support for disabling the heartbeat cronjob (thanks @kevtaylor!) * Proxy * Added distributed tracing support * Decreased proxy Docker image size by removing bundled debug tools * Added 587 (SMTP) to the list of ports to ignore in protocol detection (bound to server-speaks-first protocols) (thanks @brianstorti!) * Web UI * Redesigned dashboard navigation so workloads are now viewed by namespace, with an "All Namespaces" option, in order to increase dashboard speed * Added Traffic Splits as a resource to the dashboard, including a Traffic Split detail page * Added a `Linkerd Namespace` Grafana dashboard, allowing users to view historical data for a given namespace, similar to CLI output for `linkerd stat deploy -n myNs` (thanks @bourquep!) * Fixed bad request in the top routes tab on empty fields (thanks @pierDipi!) * Internal * Moved CI from Travis to GitHub Actions * Added requirement for Go `1.12.9` for controller builds to include security fixes * Added support for Kubernetes `1.16` * Upgraded client-go to `v12.0.0` ## edge-19.10.2 This edge release is a release candidate for `stable-2.6`. * Controller * Added the destination container back to the controller; it had previously been separated into its own deployment. This ensures backwards compatibility and allows users to avoid data plane downtime during an upcoming upgrade to `stable-2.6`. ## edge-19.10.1 This edge release is a release candidate for `stable-2.6`. 
* Proxy * Improved error logging when the proxy fails to emit trace spans * Fixed bug in distributed tracing where trace ids with fewer than 16 bytes were discarded * Internal * Added integration tests for `linkerd edges` and `linkerd endpoints` ## edge-19.9.5 This edge release is a release candidate for `stable-2.6`. * Helm * Added node selector constraints, so users can control which nodes the control plane is deployed to (thanks @bmcstdio!) * CLI * Added request and response headers to the JSON output option for `linkerd tap` ## edge-19.9.4 This edge release introduces experimental support for distributed tracing as well as a redesigned sidebar in the Web UI! Experimental support for distributed tracing means that Linkerd data plane proxies can now emit trace spans, allowing you to see the exact amount of time spent in the Linkerd proxy for traced requests. The new `config.linkerd.io/trace-collector` and `config.alpha.linkerd.io/trace-collector-service-account` tracing annotations allow specifying which pods should emit trace spans. The goal of the dashboard's sidebar redesign was to reduce load on Prometheus and simplify navigation by providing top-level views centered around namespaces and workloads. * CLI * Introduced a new `--cluster-domain` flag to the `linkerd install` command that allows setting a custom cluster domain (thanks @arminbuerkle!) * Fixed the `linkerd endpoints` command to use the correct Destination API address (thanks @Pothulapati!) * Added `--disable-heartbeat` flag for `linkerd` `install|upgrade` commands * Controller * Instrumented the proxy-injector to provide additional metrics about injection (thanks @Pothulapati!) * Added support for `config.linkerd.io/admission-webhooks: disabled` label on namespaces so that the pods creation events in these namespaces are ignored by the proxy injector; this fixes situations in HA deployments where the proxy-injector is installed in `kube-system` (thanks @hasheddan!) * Introduced `config.linkerd.io/trace-collector` and `config.alpha.linkerd.io/trace-collector-service-account` pod spec annotations to support per-pod tracing * Web UI * Workloads are now viewed by namespace, with an "All Namespaces" option, to improve dashboard performance * Proxy * Added experimental distributed tracing support ## edge-19.9.3 * Helm * Allowed disabling namespace creation during install (thanks @KIVagant!) * CLI * Added a new `json` output option to the `linkerd tap` command * Controller * Fixed proxy injector timeout during a large number of concurrent injections * Separated the destination controller into its own separate deployment * Updated Prometheus config to keep only needed `cadvisor` metrics, substantially reducing the number of time-series stored in most clusters * Web UI * Fixed bad request in the top routes tab on empty fields (thanks @pierDipi!) * Proxy * Fixes to the client's backoff logic * Added 587 (SMTP) to the list of ports to ignore in protocol detection (bound to server-speaks-first protocols) (thanks @brianstorti!) ## edge-19.9.2 Much of our effort has been focused on improving our build and test infrastructure, but this edge release lays the groundwork for some big new features to land in the coming releases! * Helm * There's now a public Helm repo! 
This release can be installed with: `helm repo add linkerd-edge https://helm.linkerd.io/edge && helm install linkerd-edge/linkerd2` * Improved TLS credential parsing by ignoring spurious newlines * Proxy * Decreased proxy-init Docker image size by removing bundled debug tools * Web UI * Fixed an issue where the edges table could end up with duplicates * Added an icon to more clearly label external links * Internal * Upgraded client-go to v12.0.0 * Moved CI from Travis to GitHub Actions ## edge-19.9.1 This edge release adds traffic splits into the Linkerd dashboard as well as a variety of other improvements. * CLI * Improved the error message when the CLI cannot connect to Kubernetes (thanks @alenkacz!) * Added `--address` flag to `linkerd dashboard` (thanks @bmcstdio!) * Controller * Fixed an issue where the proxy-injector had insufficient RBAC permissions * Added support for disabling the heartbeat cronjob (thanks @kevtaylor!) * Proxy * Decreased proxy Docker image size by removing bundled debug tools * Fixed an issue where the incorrect content-length could be set for GET requests with bodies * Web UI * Added trafficsplits as a resource to the dashboard, including a trafficsplit detail page * Internal * Added support for Kubernetes 1.16 ## edge-19.8.7 * Controller * Added Kubernetes events (and log lines) when the proxy injector injects a deployment, and when injection is skipped * Additional preparation for configuring the cluster base domain (thanks @arminbuerkle!) * Proxy * Changed the proxy to require the `LINKERD2_PROXY_DESTINATION_SVC_ADDR` environment variable when starting up * Web UI * Increased dashboard speed by consolidating existing Prometheus queries ## edge-19.8.6 A new Grafana dashboard has been added which shows historical data for a selected namespace. The build process for controller components now requires `Go 1.12.9`. Additional contributions were made towards support for custom cluster domains. * Web UI * Added a `Linkerd Namespace` Grafana dashboard, allowing users to view historical data for a given namespace, similar to CLI output for `linkerd stat deploy -n myNs` (thanks @bourquep!) * Internal * Added requirement for Go `1.12.9` for controller builds to include security fixes * Set `LINKERD2_PROXY_DESTINATION_GET_SUFFIXES` proxy environment variable, in preparation for custom cluster domain support (thanks @arminbuerkle!) ## stable-2.5.0 This release adds [Helm support](https://linkerd.io/2/tasks/install-helm/), [tap authentication and authorization via RBAC](https://linkerd.io/tap-rbac), traffic split stats, dynamic logging levels, a new cluster monitoring dashboard, and countless performance enhancements and bug fixes. For more details, see the announcement blog post: To install this release, run: `curl https://run.linkerd.io/install | sh` **Upgrade notes**: Use the `linkerd upgrade` command to upgrade the control plane. This command ensures that all existing control plane's configuration and mTLS secrets are retained. For more details, please see the [upgrade instructions](https://linkerd.io/2/tasks/upgrade/#upgrade-notice-stable-2-5-0). **Special thanks to**: @alenkacz, @codeman9, @ethan-daocloud, @jonathanbeber, and @Pothulapati! **Full release notes**: * CLI * **New** Updated `linkerd tap`, `linkerd top` and `linkerd profile --tap` to require `tap.linkerd.io` RBAC privileges. 
See for more info * **New** Added traffic split metrics via `linkerd stat trafficsplits` subcommand * Made the `linkerd routes` command traffic split aware * Introduced the `linkerd --as` flag which allows users to impersonate another user for Kubernetes operations * Introduced the `--all-namespaces` (`-A`) option to the `linkerd get`, `linkerd edges` and `linkerd stat` commands to retrieve resources across all namespaces * Improved the installation report produced by the `linkerd check` command to include the control plane pods' live status * Fixed bug in the `linkerd upgrade config` command that was causing it to crash * Introduced `--use-wait-flag` to the `linkerd install-cni` command, to configure the CNI plugin to use the `-w` flag for `iptables` commands * Introduced `--restrict-dashboard-privileges` flag to `linkerd install` command, to disallow tap in the dashboard * Fixed `linkerd uninject` not removing `linkerd.io/inject: enabled` annotations * Fixed `linkerd stat -h` example commands (thanks @ethan-daocloud!) * Fixed incorrect "meshed" count in `linkerd stat` when resources share the same label selector for pods (thanks @jonathanbeber!) * Added pod status to the output of the `linkerd stat` command (thanks @jonathanbeber!) * Added namespace information to the `linkerd edges` command output and a new `-o wide` flag that shows the identity of the client and server if known * Added a check to the `linkerd check` command to validate the user has privileges necessary to create CronJobs * Added a new check to the `linkerd check --pre` command validating that if PSP is enabled, the NET_RAW capability is available * Controller * **New** Disabled all unauthenticated tap endpoints. Tap requests now require [RBAC authentication and authorization](https://linkerd.io/tap-rbac) * The `l5d-require-id` header is now set on tap requests so that a connection is established over TLS * Introduced a new RoleBinding in the `kube-system` namespace to provide [access to tap](https://linkerd.io/tap-rbac) * Added HTTP security headers on all dashboard responses * Added support for namespace-level proxy override annotations (thanks @Pothulapati!) * Added resource limits when HA is enabled (thanks @Pothulapati!) * Added pod anti-affinity rules to the control plane pods when HA is enabled (thanks @Pothulapati!) * Fixed a crash in the destination service when an endpoint does not have a `TargetRef` * Updated the destination service to return `InvalidArgument` for external name services so that the proxy does not immediately fail the request * Fixed an issue with discovering StatefulSet pods via their unique hostname * Fixed an issue with traffic split where outbound proxy stats are missing * Upgraded the service profile CRD to v1alpha2. No changes required for users currently using v1alpha1 * Updated the control plane's pod security policy to restrict workloads from running as `root` in the CNI mode (thanks @codeman9!) 
* Introduced optional cluster heartbeat cron job * Bumped Prometheus to 2.11.1 * Bumped Grafana to 6.2.5 * Proxy * **New** Added a new `/proxy-log-level` endpoint to update the log level at runtime * **New** Updated the tap server to only admit requests from the control plane's tap controller * Added `request_handle_us` histogram to measure proxy overhead * Fixed gRPC client cancellations getting recorded as failures rather than as successful * Fixed a bug where tap would stop streaming after a short amount of time * Fixed a bug that could cause the proxy to leak service discovery resolutions to the Destination controller * Web UI * **New** Added "Kubernetes cluster monitoring" Grafana dashboard with cluster and containers metrics * Updated the web server to use the new tap APIService. If the `linkerd-web` service account is not authorized to tap resources, users will see a link to documentation to remedy the error ## edge-19.8.5 This edge release is a release candidate for `stable-2.5`. * CLI * Fixed CLI filepath issue on Windows * Proxy * Fixed gRPC client cancellations getting recorded as failures rather than as successful ## edge-19.8.4 This edge release is a release candidate for `stable-2.5`. * CLI * Introduced `--use-wait-flag` to the `linkerd install-cni` command, to configure the CNI plugin to use the `-w` flag for `iptables` commands * Controller * Disabled the tap gRPC server listener. All tap requests now require RBAC authentication and authorization ## edge-19.8.3 This edge release introduces a new `linkerd stat trafficsplits` subcommand, to show traffic split metrics. It also introduces a "Kubernetes cluster monitoring" Grafana dashboard. * CLI * Added traffic split metrics via `linkerd stat trafficsplits` subcommand * Fixed `linkerd uninject` not removing `linkerd.io/inject: enabled` annotations * Fixed `linkerd stat -h` example commands (thanks @ethan-daocloud!) * Controller * Added support for namespace-level proxy override annotations * Removed unauthenticated tap from the Public API * Proxy * Added `request_handle_us` histogram to measure proxy overhead * Updated the tap server to only admit requests from the control plane's tap controller * Fixed a bug where tap would stop streaming after a short amount of time * Fixed a bug that could cause the proxy to leak service discovery resolutions to the Destination controller * Web UI * Added "Kubernetes cluster monitoring" Grafana dashboard with cluster and containers metrics * Internal * Updated `linkerd install` and `linkerd upgrade` to use Helm charts for templating * Pinned Helm tooling to `v2.14.3` * Added Helm integration tests * Added container CPU and memory usage to `linkerd-heartbeat` requests * Removed unused inject code (thanks @alenkacz!) ## edge-19.8.2 This edge release introduces the new Linkerd control plane Helm chart, named `linkerd2`. Helm users can now install and remove the Linkerd control plane by using the `helm install` and `helm delete` commands. Proxy injection also now uses Helm charts. No changes were made to the existing `linkerd install` behavior. For detailed installation steps using Helm, see the notes for [#3146](https://github.com/linkerd/linkerd2/pull/3146). 
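For Helm users, the install/remove flow described in these notes looks roughly like the following (a sketch under Helm 2-era syntax; the chart path and release name are assumptions, not part of the release notes):

```bash
# Install the control plane from the new linkerd2 chart (chart location assumed).
helm install --name linkerd ./charts/linkerd2

# Remove the control plane again.
helm delete --purge linkerd
```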
* CLI * Updated `linkerd top` and `linkerd profile --tap` to require `tap.linkerd.io` RBAC privileges, see for more info * Modified `tap.linkerd.io` APIService to enable usage in `kubectl auth can-i` commands * Introduced `--restrict-dashboard-privileges` flag to `linkerd install` command, to restrict the dashboard's default privileges to disallow tap * Controller * Introduced a new ClusterRole, `linkerd-linkerd-tap-admin`, which gives cluster-wide tap privileges. Also introduced a new ClusterRoleBinding, `linkerd-linkerd-web-admin`, which binds the `linkerd-web` service account to the new tap ClusterRole * Removed successfully completed `linkerd-heartbeat` jobs from pod listing in the linkerd control plane to streamline `get po` output (thanks @Pothulapati!) * Web UI * Updated the web server to use the new tap APIService. If the `linkerd-web` service account is not authorized to tap resources, users will see a link to documentation to remedy the error ## edge-19.8.1 ### Significant Update This edge release introduces a new tap APIService. The Kubernetes apiserver authenticates the requesting tap user and then forwards tap requests to the new tap APIServer. The `linkerd tap` command now makes requests against the APIService. With this release, users must be authorized via RBAC to use the `linkerd tap` command. Specifically `linkerd tap` requires the `watch` verb on all resources in the `tap.linkerd.io/v1alpha1` APIGroup. More granular access is also available via sub-resources such as `deployments/tap` and `pods/tap`. * CLI * Added a check to the `linkerd check` command to validate the user has privileges necessary to create CronJobs * Introduced the `linkerd --as` flag which allows users to impersonate another user for Kubernetes operations * The `linkerd tap` command now makes requests against the tap APIService * Controller * Added HTTP security headers on all dashboard responses * Fixed nil pointer dereference in the destination service when an endpoint does not have a `TargetRef` * Added resource limits when HA is enabled * Added RSA support to TLS libraries * Updated the destination service to return `InvalidArgument` for external name services so that the proxy does not immediately fail the request * The `l5d-require-id` header is now set on tap requests so that a connection is established over TLS * Introduced the `APIService/v1alpha1.tap.linkerd.io` global resource * Introduced the `ClusterRoleBinding/linkerd-linkerd-tap-auth-delegator` global resource * Introduced the `Secret/linkerd-tap-tls` resource into the `linkerd` namespace * Introduced the `RoleBinding/linkerd-linkerd-tap-auth-reader` resource into the `kube-system` namespace * Proxy * Added the `LINKERD2_PROXY_TAP_SVC_NAME` environment variable so that the tap server attempts to authorize client identities * Internal * Replaced `dep` with Go modules for dependency management ## edge-19.7.5 * CLI * Improved the installation report produced by the `linkerd check` command to include the control plane pods' live status * Added the `--all-namespaces` (`-A`) option to the `linkerd get`, `linkerd edges` and `linkerd stat` commands to retrieve resources across all namespaces * Controller * Fixed an issue with discovering StatefulSet pods via their unique hostname * Fixed an issue with traffic split where outbound proxy stats are missing * Bumped Prometheus to 2.11.1 * Bumped Grafana to 6.2.5 * Upgraded the service profile CRD to v1alpha2 where the openAPIV3Schema validation is replaced by a validating admission webhook. 
No changes required for users currently using v1alpha1 * Updated the control plane's pod security policy to restrict workloads from running as `root` in the CNI mode (thanks @codeman9!) * Introduced a cluster heartbeat cron job * Proxy * Introduced the `l5d-require-id` header to enforce TLS outbound communication from the Tap server ## edge-19.7.4 * CLI * Made the `linkerd routes` command traffic-split aware * Fixed bug in the `linkerd upgrade config` command that was causing it to crash * Added pod status to the output of the `linkerd stat` command (thanks @jonathanbeber!) * Fixed incorrect "meshed" count in `linkerd stat` when resources share the same label selector for pods (thanks @jonathanbeber!) * Added namespace information to the `linkerd edges` command output and a new `-o wide` flag that shows the identity of the client and server if known * Added a new check to the `linkerd check --pre` command validating that if PSP is enabled, the NET_RAW capability is available * Controller * Added pod anti-affinity rules to the control plane pods when HA is enabled (thanks @Pothulapati!) * Proxy * Improved performance by using a constant-time load balancer * Added a new `/proxy-log-level` endpoint to update the log level at runtime ## stable-2.4.0 This release adds traffic splitting functionality, support for the Kubernetes Service Mesh Interface (SMI), graduates high-availability support out of experimental status, and adds a tremendous list of other improvements, performance enhancements, and bug fixes. Linkerd's new traffic splitting feature allows users to dynamically control the percentage of traffic destined for a service. This powerful feature can be used to implement rollout strategies like canary releases and blue-green deploys. Support for the [Service Mesh Interface](https://smi-spec.io) (SMI) makes it easier for ecosystem tools to work across all service mesh implementations. Along with the introduction of optional install stages via the `linkerd install config` and `linkerd install control-plane` commands, the default behavior of the `linkerd inject` command has changed: it now only adds annotations and defers injection to the always-installed proxy injector component. Finally, there have been many performance and usability improvements to the proxy and UI, as well as production-ready features including: * A new `linkerd edges` command that provides fine-grained observability into the TLS-based identity system * A `--enable-debug-sidecar` flag for the `linkerd inject` command that improves debugging efforts (a brief usage sketch follows the upgrade notes below) Linkerd recently passed a CNCF-sponsored security audit! Check out the in-depth report [here](https://github.com/linkerd/linkerd2/blob/master/SECURITY_AUDIT.pdf). To install this release, run: `curl https://run.linkerd.io/install | sh` **Upgrade notes**: Use the `linkerd upgrade` command to upgrade the control plane. This command ensures that all existing control plane's configuration and mTLS secrets are retained. For more details, please see the [upgrade instructions](https://linkerd.io/2/tasks/upgrade/#upgrade-notice-stable-2-4-0).
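The two production-ready features called out above can be exercised roughly as follows (a hypothetical sketch; the namespace and manifest names are placeholders, and exact flag behavior may differ by release):

```bash
# Inspect source/destination identity for proxied connections in a namespace.
linkerd edges deployment -n emojivoto

# Inject a workload with the debug sidecar enabled for traffic inspection.
linkerd inject --enable-debug-sidecar app.yaml | kubectl apply -f -
```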
**Special thanks to**: @alenkacz, @codeman9, @dwj300, @jackprice, @liquidslr, @matej-g, @Pothulapati, and @zaharidichev! **Full release notes**: * CLI * **Breaking Change** Removed the `--proxy-auto-inject` flag, as the proxy injector is now always installed * **Breaking Change** Replaced the `--linkerd-version` flag with the `--proxy-version` flag in the `linkerd install` and `linkerd upgrade` commands, which allows setting the version for the injected proxy sidecar image without changing the image versions for the control plane * Introduced install stages: `linkerd install config` and `linkerd install control-plane` * Introduced upgrade stages: `linkerd upgrade config` and `linkerd upgrade control-plane` * Introduced a new `--from-manifests` flag to `linkerd upgrade` allowing a previously saved output of `linkerd install` to be fed into the command manually, instead of requiring a connection to the cluster to fetch the config * Introduced a new `--manual` flag to `linkerd inject` to output the proxy sidecar container spec * Introduced a new `--enable-debug-sidecar` flag to `linkerd inject`, which injects a debug sidecar to inspect traffic to and from the meshed pod * Added a new check for unschedulable pods and PSP issues (thanks, @liquidslr!) * Disabled the spinner in `linkerd check` when running without a TTY * Ensured the ServiceAccount for the proxy injector is created before its Deployment to avoid warnings when installing the proxy injector (thanks, @dwj300!) * Added a `linkerd check config` command for verifying that `linkerd install config` was successful * Improved the help documentation of `linkerd install` to clarify flag usage * Added support for private Kubernetes clusters by changing the CLI to connect to the control plane using a port-forward (thanks, @jackprice!) * Fixed `linkerd check` and `linkerd dashboard` failing when any control plane pod is not ready, even when multiple replicas exist (as in HA mode) * **New** Added a `linkerd edges` command that shows the source and destination name and identity for proxied connections, to assist in debugging * Tap can now be disabled for specific pods during injection by using the `--disable-tap` flag, or by using the `config.linkerd.io/disable-tap` annotation * Introduced pre-install healthcheck for clock skew (thanks, @matej-g!) * Added a JSON option to the `linkerd edges` command so that output is scripting friendly and can be parsed easily (thanks @alenkacz!) * Fixed an issue where, when Linkerd is installed with `--ha`, running `linkerd upgrade` without `--ha` would disable the high availability control plane * Fixed an issue with `linkerd upgrade` where running without `--ha` would unintentionally disable high availability features if they were previously enabled * Added a `--init-image-version` flag to `linkerd inject` to override the injected proxy-init container version * Added the `--linkerd-cni-enabled` flag to the `install` subcommands so that `NET_ADMIN` capability is omitted from the CNI-enabled control plane's PSP * Updated `linkerd check` to validate the caller can create `PodSecurityPolicy` resources * Added a check to `linkerd install` to prevent installing multiple control planes into different namespaces, to avoid conflicts between global resources * Added support for passing a URL directly to `linkerd inject` (thanks @Pothulapati!)
* Added more descriptive output to `linkerd check` for control plane ReplicaSet readiness * Refactored the `linkerd endpoints` to use the same interface as used by the proxy for service discovery information * Fixed a bug where `linkerd inject` would fail when given a path to a file outside the current directory * Graduated high-availability support out of experimental status * Modified the error message for `linkerd install` to provide instructions for proceeding when an existing installation is found * Controller * Added Go pprof HTTP endpoints to all control plane components' admin servers to better assist debugging efforts * Fixed a bug in the proxy injector where, sporadically, the pod workload owner wasn't properly determined, which would result in erroneous stats * Added support for a new `config.linkerd.io/disable-identity` annotation to opt out of identity for a specific pod * Fixed pod creation failure when a `ResourceQuota` exists by adding a default resource spec for the proxy-init init container * Fixed control plane components failing on startup when the Kubernetes API returns an `ErrGroupDiscoveryFailed` * Added Controller Component Labels to the webhook config resources (thanks, @Pothulapati!) * Moved the tap service into its own pod * **New** Control plane installations now generate a self-signed certificate and private key pair for each webhook, to prepare for future work to make the proxy injector and service profile validator HA * Added the `config.linkerd.io/enable-debug-sidecar` annotation allowing the `--enable-debug-sidecar` flag to work when auto-injecting Linkerd proxies * Added multiple replicas for the `proxy-injector` and `sp-validator` controllers when run in high availability mode (thanks to @Pothulapati!) * Defined least privilege default security context values for the proxy container so that auto-injection does not fail (thanks @codeman9!) * Default the webhook failure policy to `Fail` in order to account for unexpected errors during auto-inject; this ensures uninjected applications are not deployed * Introduced control plane's PSP and RBAC resources into Helm templates; these policies are only in effect if the PSP admission controller is enabled * Removed `UPDATE` operation from proxy-injector webhook because pod mutations are disallowed during update operations * Default the mutating and validating webhook configurations `sideEffects` property to `None` to indicate that the webhooks have no side effects on other resources (thanks @Pothulapati!) * Added support for the SMI TrafficSplit API which allows users to define traffic splits in TrafficSplit custom resources * Added the `linkerd.io/control-plane-ns` label to all Linkerd resources allowing them to be identified using a label selector * Added Prometheus metrics for the Kubernetes watchers in the destination service for better visibility * Proxy * Replaced the fixed reconnect backoff with an exponential one (thanks, @zaharidichev!) * Fixed an issue where load balancers could become stuck * Added a dispatch timeout that limits the amount of time a request can be buffered in the proxy * Removed the limit on the number of concurrently active service discovery queries to the destination service * Fixed an epoll notification issue that could cause excessive CPU usage * Added the ability to disable tap by setting an env var (thanks, @zaharidichev!)
* Changed the proxy's routing behavior so that, when the control plane does not resolve a destination, the proxy forwards the request with minimal additional routing logic * Fixed a bug in the proxy's HPACK codec that could cause requests with very large header values to hang indefinitely * Fixed a memory leak that can occur if an HTTP/2 request with a payload ends before the entire payload is sent to the destination * The `l5d-override-dst` header is now used for inbound service profile discovery * Added errors totals to `response_total` metrics * Changed the load balancer to require that Kubernetes services are resolved via the control plane * Added the `NET_RAW` capability to the proxy-init container to be compatible with `PodSecurityPolicy`s that use `drop: all` * Fixed the proxy rejecting HTTP2 requests that don't have an `:authority` * Improved idle service eviction to reduce resource consumption for clients that send requests to many services * Fixed proxied HTTP/2 connections returning 502 errors when the upstream connection is reset, rather than propagating the reset to the client * Changed the proxy to treat unexpected HTTP/2 frames as stream errors rather than connection errors * Fixed a bug where DNS queries could persist longer than necessary * Improved router eviction to remove idle services in a more timely manner * Fixed a bug where the proxy would fail to process requests with obscure characters in the URI * Web UI * Added the Font Awesome stylesheet locally; this allows both Font Awesome and Material-UI sidebar icons to display consistently with no/limited internet access (thanks again, @liquidslr!) * Removed the Authorities table and sidebar link from the dashboard to prepare for a new, improved dashboard view communicating authority data * Fixed dashboard behavior that caused incorrect table sorting * Removed the "Debug" page from the Linkerd dashboard while the functionality of that page is being redesigned * Added an Edges table to the resource detail view that shows the source, destination name, and identity for proxied connections * Improved UI for Edges table in dashboard by changing column names, adding a "Secured" icon and showing an empty Edges table in the case of no returned edges * Internal * Known container errors were hidden in the integration tests; now they are reported in the output without having the tests fail * Fixed integration tests by adding known proxy-injector log warning to tests * Modified the integration test for `linkerd upgrade` in order to test upgrading from the latest stable release instead of the latest edge and reflect the typical use case * Moved the proxy-init container to a separate `linkerd/proxy-init` Git repository ## edge-19.7.3 * CLI * Graduated high-availability support out of experimental status * Modified the error message for `linkerd install` to provide instructions for proceeding when an existing installation is found * Controller * Added Prometheus metrics for the Kubernetes watchers in the destination service for better visibility ## edge-19.7.2 * CLI * Refactored the `linkerd endpoints` to use the same interface as used by the proxy for service discovery information * Fixed a bug where `linkerd inject` would fail when given a path to a file outside the current directory * Proxy * Fixed a bug where DNS queries could persist longer than necessary * Improved router eviction to remove idle services in a more timely manner * Fixed a bug where the proxy would fail to process requests with obscure characters in the URI ## 
edge-19.7.1 * CLI * Added more descriptive output to `linkerd check` for control plane ReplicaSet readiness * **Breaking change** Renamed `config.linkerd.io/debug` annotation to `config.linkerd.io/enable-debug-sidecar`, to match the `--enable-debug-sidecar` CLI flag that sets it * Fixed a bug in `linkerd edges` that caused incorrect identities to be displayed when requests were sent from two or more namespaces * Controller * Added the `linkerd.io/control-plane-ns` label to the SMI Traffic Split CRD * Proxy * Fixed proxied HTTP/2 connections returning 502 errors when the upstream connection is reset, rather than propagating the reset to the client * Changed the proxy to treat unexpected HTTP/2 frames as stream errors rather than connection errors ## edge-19.6.4 This release adds support for the SMI [Traffic Split](https://github.com/deislabs/smi-spec/blob/master/traffic-split.md) API. Creating a TrafficSplit resource will cause Linkerd to split traffic between the specified backend services. Please see [the spec](https://github.com/deislabs/smi-spec/blob/master/traffic-split.md) for more details. * CLI * Added a check to `install` to prevent installing multiple control planes into different namespaces * Added support for passing a URL directly to `linkerd inject` (thanks @Pothulapati!) * Added the `--all-namespaces` flag to `linkerd edges` * Controller * Added support for the SMI TrafficSplit API which allows users to define traffic splits in TrafficSplit custom resources * Web UI * Improved UI for Edges table in dashboard by changing column names, adding a "Secured" icon and showing an empty Edges table in the case of no returned edges ## edge-19.6.3 * CLI * Updated `linkerd check` to validate the caller can create `PodSecurityPolicy` resources * Controller * Default the mutating and validating webhook configurations `sideEffects` property to `None` to indicate that the webhooks have no side effects on other resources (thanks @Pothulapati!) * Proxy * Added the `NET_RAW` capability to the proxy-init container to be compatible with `PodSecurityPolicy`s that use `drop: all` * Fixed the proxy rejecting HTTP2 requests that don't have an `:authority` * Improved idle service eviction to reduce resource consumption for clients that send requests to many services * Web UI * Removed the "Debug" page from the Linkerd dashboard while the functionality of that page is being redesigned * Added an Edges table to the resource detail view that shows the source, destination name, and identity for proxied connections ## edge-19.6.2 * CLI * Added the `--linkerd-cni-enabled` flag to the `install` subcommands so that `NET_ADMIN` capability is omitted from the CNI-enabled control plane's PSP * Controller * Default to least-privilege security context values for the proxy container so that auto-inject does not fail on restricted PSPs (thanks @codeman9!)
* Default the webhook failure policy to `Fail` in order to account for unexpected errors during auto-inject; this ensures uninjected applications are not deployed * Introduced control plane's PSP and RBAC resources into Helm templates; these policies are only in effect if the PSP admission controller is enabled * Removed `UPDATE` operation from proxy-injector webhook because pod mutations are disallowed during update operations * Proxy * The `l5d-override-dst` header is now used for inbound service profile discovery * Include errors in `response_total` metrics * Changed the load balancer to require that Kubernetes services are resolved via the control plane * Web UI * Fixed dashboard behavior that caused incorrect table sorting ## edge-19.6.1 * CLI * Fixed an issue where, when Linkerd is installed with `--ha`, running `linkerd upgrade` without `--ha` will disable the high availability control plane * Added a `--init-image-version` flag to `linkerd inject` to override the injected proxy-init container version * Controller * Added multiple replicas for the `proxy-injector` and `sp-validator` controllers when run in high availability mode (thanks to @Pothulapati!) * Proxy * Fixed a memory leak that can occur if an HTTP/2 request with a payload ends before the entire payload is sent to the destination * Internal * Moved the proxy-init container to a separate `linkerd/proxy-init` Git repository ## stable-2.3.2 This stable release fixes a memory leak in the proxy. To install this release, run: `curl https://run.linkerd.io/install | sh` **Full release notes**: * Proxy * Fixed a memory leak that can occur if an HTTP/2 request with a payload ends before the entire payload is sent to the destination ## edge-19.5.4 * CLI * Added a JSON option to the `linkerd edges` command so that output is scripting friendly and can be parsed easily (thanks @alenkacz!) * Controller * **New** Control plane installations now generate a self-signed certificate and private key pair for each webhook, to prepare for future work to make the proxy injector and service profile validator HA * Added a debug container annotation, allowing the `--enable-debug-sidecar` flag to work when auto-injecting Linkerd proxies * Proxy * Changed the proxy's routing behavior so that, when the control plane does not resolve a destination, the proxy forwards the request with minimal additional routing logic * Fixed a bug in the proxy's HPACK codec that could cause requests with very large header values to hang indefinitely * Web UI * Removed the Authorities table and sidebar link from the dashboard to prepare for a new, improved dashboard view communicating authority data * Internal * Modified the integration test for `linkerd upgrade` to test upgrading from the latest stable release instead of the latest edge, to reflect the typical use case ## stable-2.3.1 This stable release adds a number of proxy stability improvements. To install this release, run: `curl https://run.linkerd.io/install | sh` **Special thanks to**: @zaharidichev and @11Takanori! **Full release notes**: * Proxy * Changed the proxy's routing behavior so that, when the control plane does not resolve a destination, the proxy forwards the request with minimal additional routing logic * Fixed a bug in the proxy's HPACK codec that could cause requests with very large header values to hang indefinitely * Replaced the fixed reconnect backoff with an exponential one (thanks, @zaharidichev!) 
* Fixed an issue where requests could be held indefinitely by the load balancer * Added a dispatch timeout that limits the amount of time a request can be buffered in the proxy * Removed the limit on the number of concurrently active service discovery queries to the destination service * Fixed an epoll notification issue that could cause excessive CPU usage * Added the ability to disable tap by setting an env var (thanks, @zaharidichev!) ## edge-19.5.3 * CLI * **New** Added a `linkerd edges` command that shows the source and destination name and identity for proxied connections, to assist in debugging * Tap can now be disabled for specific pods during injection by using the `--disable-tap` flag, or by using the `config.linkerd.io/disable-tap` annotation * Introduced pre-install healthcheck for clock skew (thanks, @matej-g!) * Controller * Added Controller Component Labels to the webhook config resources (thanks, @Pothulapati!) * Moved the tap service into its own pod * Proxy * Fixed an epoll notification issue that could cause excessive CPU usage * Added the ability to disable tap by setting an env var (thanks, @zaharidichev!) ## edge-19.5.2 * CLI * Fixed `linkerd check` and `linkerd dashboard` failing when any control plane pod is not ready, even when multiple replicas exist (as in HA mode) * Controller * Fixed control plane components failing on startup when the Kubernetes API returns an `ErrGroupDiscoveryFailed` * Proxy * Added a dispatch timeout that limits the amount of time a request can be buffered in the proxy * Removed the limit on the number of concurrently active service discovery queries to the destination service Special thanks to @zaharidichev for adding end to end tests for proxies with TLS! ## edge-19.5.1 * CLI * Added a `linkerd check config` command for verifying that `linkerd install config` was successful * Improved the help documentation of `linkerd install` to clarify flag usage * Added support for private Kubernetes clusters by changing the CLI to connect to the control plane using a port-forward (thanks, @jackprice!) * Controller * Fixed pod creation failure when a `ResourceQuota` exists by adding a default resource spec for the proxy-init init container * Proxy * Replaced the fixed reconnect backoff with an exponential one (thanks, @zaharidichev!) * Fixed an issue where load balancers can become stuck * Internal * Fixed integration tests by adding known proxy-injector log warning to tests ## edge-19.4.5 ### Significant Update As of this edge release, the proxy injector component is always installed. To have the proxy injector inject a pod, you can still manually add the `linkerd.io/inject: enabled` annotation to the pod spec, or add it at the namespace level to have all of your pods injected by default. With this release, the behaviour of the `linkerd inject` command changes: the proxy sidecar container YAML is no longer included in its output by default; instead, the command just adds the annotations and defers injection to the proxy injector. For use cases that require the full injected YAML to be output, a new `--manual` flag has been added. Another important update is the introduction of install stages. You still have the old `linkerd install` command, but now it can be broken into `linkerd install config`, which installs the resources that require cluster-level privileges, and `linkerd install control-plane`, which installs the resources that only require namespace-level privileges. This also applies to the `linkerd upgrade` command. 
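To make the new workflow concrete, here is a rough sketch of how these pieces fit together; the namespace and file names are placeholders, and the commands are illustrative rather than an exact transcript from the release:

```bash
# Stage the installation by privilege level (new in this release):
# cluster-level resources first, then the namespace-scoped control plane.
linkerd install config | kubectl apply -f -
linkerd install control-plane | kubectl apply -f -

# Opt a namespace into injection so the always-installed proxy injector
# adds the sidecar to its pods ("emojivoto" is a placeholder namespace).
kubectl annotate namespace emojivoto linkerd.io/inject=enabled

# `linkerd inject` now only adds the annotation by default; pass --manual
# to emit the full proxy sidecar YAML as before ("my-app.yml" is a placeholder).
linkerd inject --manual my-app.yml | kubectl apply -f -
```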
* CLI * **Breaking Change** Removed the `--proxy-auto-inject` flag, as the proxy injector is now always installed * **Breaking Change** Replaced the `--linkerd-version` flag with the `--proxy-version` flag in the `linkerd install` and `linkerd upgrade` commands, which allows setting the version for the injected proxy sidecar image, without changing the image versions for the control plane * Introduced install stages: `linkerd install config` and `linkerd install control-plane` * Introduced upgrade stages: `linkerd upgrade config` and `linkerd upgrade control-plane` * Introduced a new `--from-manifests` flag to `linkerd upgrade` allowing manually feeding a previously saved output of `linkerd install` into the command, instead of requiring a connection to the cluster to fetch the config * Introduced a new `--manual` flag to `linkerd inject` to output the proxy sidecar container spec * Introduced a new `--enable-debug-sidecar` option to `linkerd inject`, that injects a debug sidecar to inspect traffic to and from the meshed pod * Added a new check for unschedulable pods and PSP issues (thanks, @liquidslr!) * Disabled the spinner in `linkerd check` when running without a TTY * Ensured the ServiceAccount for the proxy injector is created before its Deployment to avoid warnings when installing the proxy injector (thanks, @dwj300!) * Controller * Added Go pprof HTTP endpoints to all control plane components' admin servers to better assist debugging efforts * Fixed a bug in the proxy injector where the pod workload owner sporadically wasn't properly determined, which would result in erroneous stats * Added support for a new `config.linkerd.io/disable-identity` annotation to opt out of identity for a specific pod * Web UI * Added the Font Awesome stylesheet locally; this allows both Font Awesome and Material-UI sidebar icons to display consistently with no/limited internet access (thanks again, @liquidslr!) * Internal * Known container errors were hidden in the integration tests; now they are reported in the output, still without having the tests fail ## stable-2.3.0 This stable release introduces a new TLS-based service identity system into the default Linkerd installation, replacing `--tls=optional` and the `linkerd-ca` controller. Now, proxies generate ephemeral private keys into a tmpfs directory and dynamically refresh certificates, authenticated by Kubernetes ServiceAccount tokens, and tied to ServiceAccounts as the identity primitive. In this release, all meshed HTTP communication is private and authenticated by default. Among the many improvements to the web dashboard, we've added a Community page to surface news and updates from linkerd.io. For more details, see the announcement blog post: To install this release, run: `curl https://run.linkerd.io/install | sh` **Upgrade notes**: The `linkerd-ca` controller has been removed in favor of the `linkerd-identity` controller. If you had previously installed Linkerd with `--tls=optional`, manually delete the `linkerd-ca` deployment after upgrading. Also, `--single-namespace` mode is no longer supported. For full details on upgrading to this release, please see the [upgrade instructions](https://linkerd.io/2/tasks/upgrade/#upgrade-notice-stable-2-3-0). **Special thanks to**: @codeman9, @harsh-98, @huynq0911, @KatherineMelnyk, @liquidslr, @paranoidaditya, @Pothulapati, @TwinProduction, and @yb172! **Full release notes**: * CLI * Introduced an `upgrade` command!
This allows an existing Linkerd control plane to be reinstalled or reconfigured; it is particularly useful for automatically reusing flags set in the previous `install` or `upgrade` * Introduced the `linkerd metrics` command for fetching proxy metrics * **Breaking Change:** The `--linkerd-cni-enabled` flag has been removed from the `inject` command; CNI is configured at the cluster level with the `install` command and no longer applies to the `inject` command * **Breaking Change** Removed the `--disable-external-profiles` flag from the `install` command; external profiles are now disabled by default and can be enabled with the new `--enable-external-profiles` flag * **Breaking change** Removed the `--api-port` flag from the `inject` and `install` commands, since there's no benefit to running the control plane's destination API on a non-default port (thanks, @paranoidaditya) * **Breaking change** Removed the `--tls=optional` flag from the `linkerd install` command, since TLS is now enabled by default * Changed `install` to accept or generate an issuer Secret for the Identity controller * Changed `install` to fail in the case of a conflict with an existing installation; this can be disabled with the `--ignore-cluster` flag * Added the ability to adjust the Prometheus log level via `--controller-log-level` * Implemented `--proxy-cpu-limit` and `--proxy-memory-limit` for setting the proxy resources limits (`--proxy-cpu` and `--proxy-memory` were deprecated in favor of `proxy-cpu-request` and `proxy-memory-request`) (thanks @TwinProduction!) * Added a validator for the `--proxy-log-level` flag * Updated the `inject` and `uninject` subcommands to issue warnings when resources lack a `Kind` property (thanks @Pothulapati!) * The `inject` command proxy options are now converted into config annotations; the annotations ensure that these configs are persisted in subsequent resource updates * Changed `inject` to require fetching a configuration from the control plane; this can be disabled with the `--ignore-cluster` and `--disable-identity` flags, though this will prevent the injected pods from participating in mesh identity * Included kubectl version check as part of `linkerd check` (thanks @yb172!) 
* Updated `linkerd check` to ensure hint URLs are displayed for RPC checks * Fixed sporadic (and harmless) race condition error in `linkerd check` * Introduced a check for NET_ADMIN in `linkerd check` * Fixed permissions check for CRDs * Updated the `linkerd dashboard` command to serve the dashboard on a fixed port, allowing it to leverage browser local storage for user settings * Updated the `linkerd routes` command to display rows for routes that are not receiving any traffic * Added TCP stats to the stat command, under the `-o wide` and `-o json` flags * The `stat` command now always shows the number of open TCP connections * Removed TLS metrics from the `stat` command; this is in preparation for surfacing identity metrics in a clearer way * Exposed the `install-cni` command and its flags, and tweaked their descriptions * Eliminated false-positive vulnerability warnings related to go.uuid * Controller * Added a new public API endpoint for fetching control plane configuration * **Breaking change** Removed support for running the control plane in single-namespace mode, which was severely limited in the number of features it supported due to not having access to cluster-wide resources; the end goal being Linkerd degrading gracefully depending on its privileges * Updated automatic proxy injection and CLI injection to support overriding inject defaults via pod spec annotations * Added support for the `config.linkerd.io/proxy-version` annotation on pod specs; this will override the injected proxy version * The auto-inject admission controller webhook is updated to watch pods creation and update events; with this change, proxy auto-injection now works for all kinds of workloads, including StatefulSets, DaemonSets, Jobs, etc * Service profile validation is now performed via a webhook endpoint; this prevents Kubernetes from accepting invalid service profiles * Changed the default CPU request from `10m` to `100m` for HA deployments; this will help some intermittent liveness/readiness probes from failing due to tight resource constraints * Updated destination service to return TLS identities only when the destination pod is TLS-aware and is in the same controller namespace * Lessen klog level to improve security * Updated control plane components to query Kubernetes at startup to determine authorized namespaces and if ServiceProfile support is available * Modified the stats payload to include the following TCP stats: `tcp_open_connections`, `tcp_read_bytes_total`, `tcp_write_bytes_total` * Instrumented clients in the control plane connecting to Kubernetes, thus providing better visibility for diagnosing potential problems with those connections * Renamed the "linkerd-proxy-api" service to "linkerd-destination" * Bumped Prometheus to version 2.7.1 and Grafana to version 5.4.3 * Proxy * Introduced per-proxy private key generation and dynamic certificate renewal * **Fixed** a connection starvation issue where TLS discovery detection on slow or idle connections could block all other connections from being accepted on the inbound listener of the proxy * **Fixed** a stream leak between the proxy and the control plane that could cause the `linkerd-controller` pod to use an excessive amount of memory * Added a readiness check endpoint on `:4191/ready` so that Kubernetes doesn't consider pods ready until they have acquired a certificate from the Identity controller * Some `l5d-*` informational headers have been temporarily removed from requests and responses because they could leak information to external 
clients * The proxy's connect timeouts have been updated, especially to improve reconnect behavior between the proxy and the control plane * Increased the inbound/router cap on MAX_CONCURRENT_STREAMS * The `l5d-remote-ip` header is now set on inbound requests and outbound responses * Fixed issue with proxy falling back to filesystem polling due to improperly sized inotify buffer * Web UI * **New** Added a Community page to surface news and updates from linkerd.io * Added a Debug page to the web dashboard, allowing you to introspect service discovery state * The Overview page in the Linkerd dashboard now renders appropriately when viewed on mobile devices * Added filter functionality to the metrics tables * Added stable sorting for table rows * Added TCP stats to the Linkerd Pod Grafana dashboard * Added TCP stat tables on the namespace landing page and resource detail page * The topology graph now shows TCP stats if no HTTP stats are available * Improved table display on the resource detail page for resources with TCP-only traffic * Updated the resource detail page to start displaying a table with TCP stats * Modified the Grafana variable queries to use a TCP-based metric, so that if there is only TCP traffic then the dropdowns don't end up empty * Fixed sidebar not updating when resources were added/deleted (thanks @liquidslr!) * Added validation to the "new service profile" form (thanks @liquidslr!) * Added a Grafana dashboard and web tables for displaying Job stats (thanks, @Pothulapati!) * Removed TLS columns from the dashboard tables; this is in preparation for surfacing identity metrics in a clearer way * Fixed the behavior of the Top query 'Start' button if a user's query returns no data * Fixed an issue with the order of tables returned from a Top Routes query * Added text wrap for paths in the modal for expanded Tap query data * Fixed a quoting issue with service profile downloads (thanks, @liquidslr!) * Updated sorting of route table to move default routes to the bottom * Removed 'Help' hierarchy and surfaced links on navigation sidebar * Ensured that all the tooltips in Grafana displaying the series are shared across all the graphs * Internals * Improved the `bin/go-run` script for the build process so that on failure, all associated background processes are terminated * Added more log errors to the integration tests * Removed the GOPATH dependence from the CLI dev environment * Consolidated injection code from CLI and admission controller code paths * Enabled the following linters: `unparam`, `unconvert`, `goimports`, `goconst`, `scopelint`, `unused`, `gosimple` * Bumped base Docker images * Added the flags `-update` and `-pretty-diff` to tests to allow overwriting fixtures and to print the full text of the fixtures upon mismatches * Introduced golangci-lint tooling, using `.golangci.yml` to centralize the config * Added a `-cover` parameter to track code coverage in go tests (more info in TEST.md) * Renamed a function in a test that was shadowing a go built-in function (thanks @huynq0911!) 
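Two of the knobs called out in these stable-2.3.0 notes can be exercised roughly as follows; the namespace, workload, and pod names are placeholders, and this is a sketch rather than a documented procedure:

```bash
# Pin the proxy version injected into one workload via the
# config.linkerd.io/proxy-version annotation on its pod template
# ("emojivoto", "web", and the version tag are placeholders).
kubectl -n emojivoto patch deploy web --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"config.linkerd.io/proxy-version":"stable-2.3.0"}}}}}'

# The proxy only reports ready on :4191/ready once it has obtained a
# certificate from the Identity controller ("web-xyz" is a placeholder pod).
kubectl -n emojivoto port-forward pod/web-xyz 4191:4191 &
curl -s http://localhost:4191/ready
```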
## edge-19.4.4 * Proxy * **Fixed** a connection starvation issue where TLS discovery detection on slow or idle connections could block all other connections from being accepted on the inbound listener of the proxy * CLI * **Fixed** `inject` to allow the `--disable-identity` flag to be used without having to specify the `--ignore-cluster` flag * Web UI * The Overview page in the Linkerd dashboard now renders appropriately when viewed on mobile devices ## edge-19.4.3 * CLI * **Fixed** `linkerd upgrade` command not upgrading proxy containers (thanks @jon-walton for the issue report!) * **Fixed** `linkerd upgrade` command not installing the identity service when it was not already installed * Eliminate false-positive vulnerability warnings related to go.uuid Special thanks to @KatherineMelnyk for updating the web component to read the UUID from the `linkerd-config` ConfigMap! ## edge-19.4.2 * CLI * Removed TLS metrics from the `stat` command; this is in preparation for surfacing identity metrics in a clearer way * The `upgrade` command now outputs a URL that explains next steps for upgrading * **Breaking Change:** The `--linkerd-cni-enabled` flag has been removed from the `inject` command; CNI is configured at the cluster level with the `install` command and no longer applies to the `inject` command * Controller * Service profile validation is now performed via a webhook endpoint; this prevents Kubernetes from accepting invalid service profiles * Added support for the `config.linkerd.io/proxy-version` annotation on pod specs; this will override the injected proxy version * Changed the default CPU request from `10m` to `100m` for HA deployments; this will help some intermittent liveness/readiness probes from failing due to tight resource constraints * Proxy * The `CommonName` field on CSRs is now set to the proxy's identity name * Web UI * Removed TLS columns from the dashboard tables; this is in preparation for surfacing identity metrics in a clearer way ## edge-19.4.1 * CLI * Introduced an `upgrade` command! This allows an existing Linkerd control plane to be reinstalled or reconfigured; it is particularly useful for automatically reusing flags set in the previous `install` or `upgrade` * The `inject` command proxy options are now converted into config annotations; the annotations ensure that these configs are persisted in subsequent resource updates * The `stat` command now always shows the number of open TCP connections * **Breaking Change** Removed the `--disable-external-profiles` flag from the `install` command; external profiles are now disabled by default and can be enabled with the new `--enable-external-profiles` flag * Controller * The auto-inject admission controller webhook is updated to watch pods creation and update events; with this change, proxy auto-injection now works for all kinds of workloads, including StatefulSets, DaemonSets, Jobs, etc * Proxy * Some `l5d-*` informational headers have been temporarily removed from requests and responses because they could leak information to external clients * Web UI * The topology graph now shows TCP stats if no HTTP stats are available * Improved table display on the resource detail page for resources with TCP-only traffic * Added validation to the "new service profile" form (thanks @liquidslr!) ## edge-19.3.3 ### Significant Update This edge release introduces a new TLS Identity system into the default Linkerd installation, replacing `--tls=optional` and the `linkerd-ca` controller. 
Now, proxies generate ephemeral private keys into a tmpfs directory and dynamically refresh certificates, authenticated by Kubernetes ServiceAccount tokens, via the newly-introduced Identity controller. Now, all meshed HTTP communication is private and authenticated by default. * CLI * Changed `install` to accept or generate an issuer Secret for the Identity controller * Changed `install` to fail in the case of a conflict with an existing installation; this can be disabled with the `--ignore-cluster` flag * Changed `inject` to require fetching a configuration from the control plane; this can be disabled with the `--ignore-cluster` and `--disable-identity` flags, though this will prevent the injected pods from participating in mesh identity * **Breaking change** Removed the `--tls=optional` flag from the `linkerd install` command, since TLS is now enabled by default * Added the ability to adjust the Prometheus log level * Proxy * **Fixed** a stream leak between the proxy and the control plane that could cause the `linkerd-controller` pod to use an excessive amount of memory * Introduced per-proxy private key generation and dynamic certificate renewal * Added a readiness check endpoint on `:4191/ready` so that Kubernetes doesn't consider pods ready until they have acquired a certificate from the Identity controller * The proxy's connect timeouts have been updated, especially to improve reconnect behavior between the proxy and the control plane * Web UI * Added TCP stats to the Linkerd Pod Grafana dashboard * Fixed the behavior of the Top query 'Start' button if a user's query returns no data * Added stable sorting for table rows * Fixed an issue with the order of tables returned from a Top Routes query * Added text wrap for paths in the modal for expanded Tap query data * Internal * Improved the `bin/go-run` script for the build process so that on failure, all associated background processes are terminated Special thanks to @liquidslr for many useful UI and log changes, and to @mmalone and @sourishkrout at @smallstep for collaboration and advice on the Identity system! ## edge-19.3.2 * Controller * **Breaking change** Removed support for running the control plane in single-namespace mode, which was severely limited in the number of features it supported due to not having access to cluster-wide resources * Updated automatic proxy injection and CLI injection to support overriding inject defaults via pod spec annotations * Added a new public API endpoint for fetching control plane configuration * CLI * **Breaking change** Removed the `--api-port` flag from the `inject` and `install` commands, since there's no benefit to running the control plane's destination API on a non-default port (thanks, @paranoidaditya) * Introduced the `linkerd metrics` command for fetching proxy metrics * Updated the `linkerd routes` command to display rows for routes that are not receiving any traffic * Updated the `linkerd dashboard` command to serve the dashboard on a fixed port, allowing it to leverage browser local storage for user settings * Web UI * **New** Added a Community page to surface news and updates from linkerd.io * Fixed a quoting issue with service profile downloads (thanks, @liquidslr!) * Added a Grafana dashboard and web tables for displaying Job stats (thanks, @Pothulapati!) 
* Updated sorting of route table to move default routes to the bottom * Added TCP stat tables on the namespace landing page and resource detail page ## edge-19.3.1 * CLI * Introduced a check for NET_ADMIN in `linkerd check` * Fixed permissions check for CRDs * Included kubectl version check as part of `linkerd check` (thanks @yb172!) * Added TCP stats to the stat command, under the `-o wide` and `-o json` flags * Controller * Updated the `mutatingwebhookconfiguration` so that it is recreated when the proxy injector is restarted, ensuring the MWC always picks up the latest config template during version upgrades * Proxy * Increased the inbound/router cap on MAX_CONCURRENT_STREAMS * The `l5d-remote-ip` header is now set on inbound requests and outbound responses * Web UI * Fixed sidebar not updating when resources were added/deleted (thanks @liquidslr!) * Added filter functionality to the metrics tables * Internal * Added more log errors to the integration tests * Removed the GOPATH dependence from the CLI dev environment * Consolidated injection code from CLI and admission controller code paths ## edge-19.2.5 * CLI * Updated `linkerd check` to ensure hint URLs are displayed for RPC checks * Controller * Updated the auto-inject admission controller webhook to respond to UPDATE events for deployment workloads * Updated destination service to return TLS identities only when the destination pod is TLS-aware and is in the same controller namespace * Lessen klog level to improve security * Updated control plane components to query Kubernetes at startup to determine authorized namespaces and if ServiceProfile support is available * Modified the stats payload to include the following TCP stats: `tcp_open_connections`, `tcp_read_bytes_total`, `tcp_write_bytes_total` * Proxy * Fixed issue with proxy falling back to filesystem polling due to improperly sized inotify buffer * Web UI * Removed 'Help' hierarchy and surfaced links on navigation sidebar * Added a Debug page to the web dashboard, allowing you to introspect service discovery state * Updated the resource detail page to start displaying a table with TCP stats * Internal * Enabled the following linters: `unparam`, `unconvert`, `goimports`, `goconst`, `scopelint`, `unused`, `gosimple` * Bumped base Docker images ## stable-2.2.1 This stable release polishes some of the CLI help text and fixes two issues that came up since the stable-2.2.0 release. To install this release, run: `curl https://run.linkerd.io/install | sh` **Full release notes**: * CLI * Fixed handling of kubeconfig server urls that include paths * Updated the description of the `--proxy-auto-inject` flag to indicate that it is no longer experimental * Updated the `profile` help text to match the other commands * Added the "ep" alias for the `endpoints` command * Controller * Stopped logging an error when a route doesn't specify a timeout ## edge-19.2.4 * CLI * Implemented `--proxy-cpu-limit` and `--proxy-memory-limit` for setting the proxy resources limits (`--proxy-cpu` and `--proxy-memory` were deprecated in favor of `proxy-cpu-request` and `proxy-memory-request`) (thanks @TwinProduction!) * Updated the `inject` and `uninject` subcommands to issue warnings when resources lack a `Kind` property (thanks @Pothulapati!) 
* Exposed the `install-cni` command and its flags, and tweaked their descriptions * Fixed handling of kubeconfig server urls that include paths * Updated the description of the `--proxy-auto-inject` flag to indicate that it is no longer experimental * Updated the `profile` help text to match the other commands * Added the "ep" alias for the `endpoints` command (also @Pothulapati!) * Added a validator for the `--proxy-log-level` flag * Fixed sporadic (and harmless) race condition error in `linkerd check` * Controller * Instrumented clients in the control plane connecting to Kubernetes, thus providing better visibility for diagnosing potential problems with those connections * Stopped logging an error when a route doesn't specify a timeout * Renamed the "linkerd-proxy-api" service to "linkerd-destination" * Bumped Prometheus to version 2.7.1 and Grafana to version 5.4.3 * Web UI * Modified the Grafana variable queries to use a TCP-based metric, so that if there is only TCP traffic then the dropdowns don't end up empty * Ensured that all the tooltips in Grafana displaying the series are shared across all the graphs * Internals * Added the flags `-update` and `-pretty-diff` to tests to allow overwriting fixtures and to print the full text of the fixtures upon mismatches * Introduced golangci-lint tooling, using `.golangci.yml` to centralize the config * Added a `-cover` parameter to track code coverage in go tests (more info in TEST.md) * Added integration tests for `--single-namespace` * Renamed a function in a test that was shadowing a go built-in function (thanks @huynq0911!) ## stable-2.2.0 This stable release introduces automatic request retries and timeouts, and graduates auto-inject to be a fully-supported (non-experimental) feature. It adds several new CLI commands, including `logs` and `endpoints`, that provide diagnostic visibility into Linkerd's control plane. Finally, it introduces two exciting experimental features: a cryptographically-secured client identity header, and a CNI plugin that avoids the need for `NET_ADMIN` kernel capabilities at deploy time. For more details, see the announcement blog post: To install this release, run: `curl https://run.linkerd.io/install | sh` **Upgrade notes**: The default behavior for proxy auto injection and service profile ownership has changed as part of this release. Please see the [upgrade instructions](https://linkerd.io/2/tasks/upgrade/#upgrade-notice-stable-2-2-0) for more details. 
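The per-route retries and timeouts headlined above live in the `ServiceProfile` for a service; a minimal, hypothetical sketch follows (the service name, route, and values are illustrative, and the exact schema may differ between versions):

```bash
# Apply a ServiceProfile that marks one route as retryable and gives it a
# timeout; "webapp.default.svc.cluster.local" and the route are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: linkerd.io/v1alpha1
kind: ServiceProfile
metadata:
  name: webapp.default.svc.cluster.local
  namespace: default
spec:
  routes:
  - name: GET /books
    condition:
      method: GET
      pathRegex: /books
    isRetryable: true   # enable retries for this route
    timeout: 300ms      # per-route timeout
EOF
```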
**Special thanks to**: @alenkacz, @codeman9, @jonrichards, @radu-matei, @yeya24, and @zknill **Full release notes**: * CLI * Improved service profile validation when running `linkerd check` in order to validate service profiles in all namespaces * Added the `linkerd endpoints` command to introspect Linkerd's service discovery state * Added the `--tap` flag to `linkerd profile` to generate service profiles using the route results seen during the tap * Added support for the `linkerd.io/inject: disabled` annotation on pod specs to disable injection for specific pods when running `linkerd inject` * Added support for `basePath` in OpenAPI 2.0 files when running `linkerd profile --open-api` * Increased `linkerd check` client timeout from 5 seconds to 30 seconds to fix issues for clusters with slow API servers * Updated `linkerd routes` to no longer return rows for `ExternalName` services in the namespace * Broadened the set of valid URLs when connecting to the Kubernetes API * Added the `--proto` flag to `linkerd profile` to output a service profile based on a Protobuf spec file * Fixed CLI connection failures to clusters that use self-signed certificates * Simplified `linkerd install` so that setting up proxy auto-injection (flag `--proxy-auto-inject`) no longer requires enabling TLS (flag `--tls`) * Added links for each `linkerd check` failure, pointing to a relevant section in our new FAQ page with resolution steps for each case * Added optional `linkerd install-sp` command to generate service profiles for the control plane, providing per-route metrics for control plane components * Removed `--proxy-bind-timeout` flag from `linkerd install` and `linkerd inject`, as the proxy no longer accepts this environment variable * Improved CLI appearance on Windows systems * Improved `linkerd check` output, fixed bug with `--single-namespace` * Fixed panic when `linkerd routes` is called in single-namespace mode * Added `linkerd logs` command to surface logs from any container in the Linkerd control plane * Added `linkerd uninject` command to remove the Linkerd proxy from a Kubernetes config * Improved `linkerd inject` to re-inject a resource that already has a Linkerd proxy * Improved `linkerd routes` to list all routes, including those without traffic * Improved readability in `linkerd check` and `linkerd inject` outputs * Adjusted the set of checks that are run before executing CLI commands, which allows the CLI to be invoked even when the control plane is not fully ready * Fixed reporting of injected resources when the `linkerd inject` command is run on `List` type resources with multiple items * Updated the `linkerd dashboard` command to use port-forwarding instead of proxying when connecting to the web UI and Grafana * Added validation for the `ServiceProfile` CRD * Updated the `linkerd check` command to disallow setting both the `--pre` and `--proxy` flags simultaneously * Added `--routes` flag to the `linkerd top` command, for grouping table rows by route instead of by path * Updated Prometheus configuration to automatically load `*_rules.yml` files * Removed TLS column from the `linkerd routes` command output * Updated `linkerd install` output to use non-default service accounts, `emptyDir` volume mounts, and non-root users * Removed cluster-wide resources from single-namespace installs * Fixed resource requests for proxy-injector container in `--ha` installs * Controller * Fixed issue with auto-injector not setting the proxy ID, which is required to successfully locate client service profiles 
* Added full stat and tap support for DaemonSets and StatefulSets in the CLI, Grafana, and web UI * Updated auto-injector to use the proxy log level configured at install time * Fixed issue with auto-injector including TLS settings in injected pods even when TLS was not enabled * Changed automatic proxy injection to be opt-in via the `linkerd.io/inject` annotation on the pod or namespace * Move service profile definitions to client and server namespaces, rather than the control plane namespace * Added `linkerd.io/created-by` annotation to the linkerd-cni DaemonSet * Added a 10 second keepalive default to resolve dropped connections in Azure environments * Improved node selection for installing the linkerd-cni DaemonSet * Corrected the expected controller identity when configuring pods with TLS * Modified klog to be verbose when controller log-level is set to `debug` * Added support for retries and timeouts, configured directly in the service profile for each route * Added an experimental CNI plugin to avoid requiring the NET_ADMIN capability when injecting proxies * Improved the API for `ListPods` * Fixed `GetProfiles` API call not returning immediately when no profile exists (resulting in proxies logging warnings) * Blocked controller initialization until caches have synced with kube API * Fixed proxy-api handling of named target ports in service configs * Added parameter to stats API to skip retrieving prometheus stats * Web UI * Updated navigation to link the Linkerd logo back to the Overview page * Fixed console warnings on the Top page * Grayed-out the tap icon for requests from sources that are not meshed * Improved resource detail pages to show all resource types * Fixed stats not appearing for routes that have service profiles installed * Added "meshed" and "no traffic" badges on the resource detail pages * Fixed `linkerd dashboard` to maintain proxy connection when browser open fails * Fixed JavaScript bundling to avoid serving old versions after upgrade * Reduced the size of the webpack JavaScript bundle by nearly 50% * Fixed an indexing error on the top results page * Restored unmeshed resources in the network graph on the resource detail page * Adjusted label for unknown routes in route tables, added tooltip * Updated Top Routes page to persist form settings in URL * Added button to create new service profiles on Top Routes page * Fixed CLI commands displayed when linkerd is running in non-default namespace * Proxy * Modified the way in which canonicalization warnings are logged to reduce the overall volume of error logs and make it clearer when failures occur * Added TCP keepalive configuration to fix environments where peers may silently drop connections * Updated the `Get` and `GetProfiles` APIs to accept a `proxy_id` parameter in order to return more tailored results * Removed TLS fallback-to-plaintext if handshake fails * Added the ability to override a proxy's normal outbound routing by adding an `l5d-override-dst` header * Added `LINKERD2_PROXY_DNS_CANONICALIZE_TIMEOUT` environment variable to customize the timeout for DNS queries to canonicalize a name * Added support for route timeouts in service profiles * Improved logging for gRPC errors and for malformed HTTP/2 request headers * Improved log readability by moving some noisy log messages to more verbose log levels * Fixed a deadlock in HTTP/2 stream reference counts * Updated the proxy-init container to exit with a non-zero exit code if initialization fails, making initialization errors much more visible * Fixed a 
memory leak due to leaked UDP sockets for failed DNS queries * Improved configuration of the PeakEwma load balancer * Improved handling of ports configured to skip protocol detection when the proxy is running with TLS enabled ## edge-19.2.3 * Controller * Fixed issue with auto-injector not setting the proxy ID, which is required to successfully locate client service profiles * Web UI * Updated navigation to link the Linkerd logo back to the Overview page * Fixed console warnings on the Top page ## edge-19.2.2 * CLI * Improved service profile validation when running `linkerd check` in order to validate service profiles in all namespaces * Controller * Added stat and tap support for StatefulSets in the CLI, Grafana, and web UI * Updated auto-injector to use the proxy log level configured at install time * Fixed issue with auto-injector including TLS settings in injected pods even when TLS was not enabled * Proxy * Modified the way in which canonicalization warnings are logged to reduce the overall volume of error logs and make it clearer when failures occur ## edge-19.2.1 * Controller * **Breaking change** Changed automatic proxy injection to be opt-in via the `linkerd.io/inject` annotation on the pod or namespace. More info: * **Breaking change** `ServiceProfile`s are now defined in client and server namespaces, rather than the control plane namespace. `ServiceProfile`s defined in the client namespace take priority over ones defined in the server namespace * Added `linkerd.io/created-by` annotation to the linkerd-cni DaemonSet (thanks @codeman9!) * Added a 10 second keepalive default to resolve dropped connections in Azure environments * Improved node selection for installing the linkerd-cni DaemonSet (thanks @codeman9!) * Corrected the expected controller identity when configuring pods with TLS * Modified klog to be verbose when controller log-level is set to `Debug` * CLI * Added the `linkerd endpoints` command to introspect Linkerd's service discovery state * Added the `--tap` flag to `linkerd profile` to generate a `ServiceProfile` by using the route results seen during the tap * Added support for the `linkerd.io/inject: disabled` annotation on pod specs to disable injection for specific pods when running `linkerd inject` * Added support for `basePath` in OpenAPI 2.0 files when running `linkerd profile --open-api` * Increased `linkerd check` client timeout from 5 seconds to 30 seconds to fix issues for clusters with a slower API server * `linkerd routes` will no longer return rows for `ExternalName` services in the namespace * Broadened set of valid URLs when connecting to the Kubernetes API * Improved `ServiceProfile` field validation in `linkerd check` * Proxy * Added TCP keepalive configuration to fix environments where peers may silently drop connections * The `Get` and `GetProfiles` APIs now accept a `proxy_id` parameter in order to return more tailored results * Removed TLS fallback-to-plaintext if handshake fails ## edge-19.1.4 * Controller * Added support for timeouts! Configurable in the service profiles for each route * Added an experimental CNI plugin to avoid requiring the NET_ADMIN capability when injecting proxies (thanks @codeman9!) * Added more improvements to the API for `ListPods` (thanks @alenkacz!) 
* Web UI * Grayed-out the tap icon for requests from sources that are not meshed * CLI * Added the `--proto` flag to `linkerd profile` to output a service profile based on a Protobuf spec file * Fixed CLI connection failure to clusters that use self-signed certificates * Simplified `linkerd install` so that setting up proxy auto-injection (flag `--proxy-auto-inject`) no longer requires enabling TLS (flag `--tls`) * Added links for each `linkerd check` failure, pointing to a relevant section in our new FAQ page with resolution steps for each case ## edge-19.1.3 * Controller * Improved API for `ListPods` (thanks @alenkacz!) * Fixed `GetProfiles` API call not returning immediately when no profile exists (resulting in proxies logging warnings) * Web UI * Improved resource detail pages now show all resource types * Fixed stats not appearing for routes that have service profiles installed * CLI * Added optional `linkerd install-sp` command to generate service profiles for the control plane, providing per-route metrics for control plane components * Removed `--proxy-bind-timeout` flag from `linkerd install` and `linkerd inject` commands, as the proxy no longer accepts this environment variable * Improved CLI appearance on Windows systems * Improved `linkerd check` output, fixed check bug when using `--single-namespace` (thanks to @djeeg for the bug report!) * Improved `linkerd stat` now supports DaemonSets (thanks @zknill!) * Fixed panic when `linkerd routes` is called in single-namespace mode * Proxy * Added the ability to override a proxy's normal outbound routing by adding an `l5d-override-dst` header * Added `LINKERD2_PROXY_DNS_CANONICALIZE_TIMEOUT` environment variable to customize the timeout for DNS queries to canonicalize a name * Added support for route timeouts in service profiles * Improved logging for gRPC errors and for malformed HTTP/2 request headers * Improved log readability by moving some noisy log messages to more verbose log levels ## edge-19.1.2 * Controller * Retry support! Introduce an `isRetryable` property to service profiles to enable configuring retries on a per-route basis * Web UI * Add "meshed" and "no traffic" badges on the resource detail pages * Fix `linkerd dashboard` to maintain proxy connection when browser open fails * Fix JavaScript bundling to avoid serving old versions after upgrade * CLI * Add `linkerd logs` command to surface logs from any container in the Linkerd control plane (shout out to [Stern](https://github.com/wercker/stern)!) * Add `linkerd uninject` command to remove the Linkerd proxy from a Kubernetes config * Improve `linkerd inject` to re-inject a resource that already has a Linkerd proxy * Improve `linkerd routes` to list all routes, including those without traffic * Improve readability in `linkerd check` and `linkerd inject` outputs * Proxy * Fix a deadlock in HTTP/2 stream reference counts ## edge-19.1.1 * CLI * Adjust the set of checks that are run before executing CLI commands, which allows the CLI to be invoked even when the control plane is not fully ready * Fix reporting of injected resources when the `linkerd inject` command is run on `List` type resources with multiple items * Update the `linkerd dashboard` command to use port-forwarding instead of proxying when connecting to the web UI and Grafana * Add validation for the `ServiceProfile` CRD (thanks, @alenkacz!) * Update the `linkerd check` command to disallow setting both the `--pre` and `--proxy` flags simultaneously (thanks again, @alenkacz!) 
* Web UI * Reduce the size of the webpack JavaScript bundle by nearly 50%! * Fix an indexing error on the top results page * Proxy * **Fixed** The proxy-init container now exits with a non-zero exit code if initialization fails, making initialization errors much more visible * **Fixed** The proxy previously leaked UDP sockets for failed DNS queries, causing a memory leak; this has been fixed ## edge-18.12.4 Upgrade notes: The control plane components have been renamed as of the edge-18.12.1 release to reduce possible naming collisions. To upgrade an older installation, see the [Upgrade Guide](https://linkerd.io/2/upgrade/). * CLI * Add `--routes` flag to the `linkerd top` command, for grouping table rows by route instead of by path * Update Prometheus configuration to automatically load `*_rules.yml` files * Remove TLS column from the `linkerd routes` command output * Web UI * Restore unmeshed resources in the network graph on the resource detail page * Reduce the overall size of the asset bundle for the web frontend * Proxy * Improve configuration of the PeakEwma load balancer Special thanks to @radu-matei for cleaning up a whole slew of Go lint warnings, and to @jonrichards for improving the Rust build setup! ## edge-18.12.3 Upgrade notes: The control plane components have been renamed as of the edge-18.12.1 release to reduce possible naming collisions. To upgrade an older installation, see the [Upgrade Guide](https://linkerd.io/2/upgrade/). * CLI * Multiple improvements to the `linkerd install` config (thanks @codeman9!) * Use non-default service accounts for grafana and web deployments * Use `emptyDir` volume mount for prometheus and grafana pods * Set security context on control plane components to not run as root * Remove cluster-wide resources from single-namespace installs * Disable service profiles in single-namespace mode * Require that namespace already exist for single-namespace installs * Fix resource requests for proxy-injector container in `--ha` installs * Controller * Block controller initialization until caches have synced with kube API * Fix proxy-api handling of named target ports in service configs * Add parameter to stats API to skip retrieving prometheus stats (thanks, @alpeb!) * Web UI * Adjust label for unknown routes in route tables, add tooltip * Update Top Routes page to persist form settings in URL * Add button to create new service profiles on Top Routes page * Fix CLI commands displayed when linkerd is running in non-default namespace * Proxy * Proxies with TLS enabled now honor ports configured to skip protocol detection ## stable-2.1.0 This stable release introduces several major improvements, including per-route metrics, service profiles, and a vastly improved dashboard UI. It also adds several significant experimental features, including proxy auto-injection, single namespace installs, and a high-availability mode for the control plane. For more details, see the announcement blog post: To install this release, run: `curl https://run.linkerd.io/install | sh` **Upgrade notes**: The control plane components have been renamed in this release to reduce possible naming collisions. Please make sure to read the [upgrade instructions](https://linkerd.io/2/upgrade/#upgrade-notice-stable-2-1-0) if you are upgrading from the `stable-2.0.0` release. 
**Special thanks to**: @alenkacz, @alpeb, @benjdlambert, @fahrradflucht, @ffd2subroutine, @hypnoglow, @ihcsim, @lucab, and @rochacon **Full release notes**: * CLI * `linkerd routes` command displays per-route stats for _any resource_ * Service profiles are now supported for external authorities * `linkerd routes --open-api` flag generates a service profile based on an OpenAPI specification (swagger) file * `linkerd routes` command displays per-route stats for services with service profiles * Add `--ha` flag to `linkerd install` command, for HA deployment of the control plane * Update stat command to accept multiple stat targets * Fix authority stat filtering when the `--from` flag is present * Various improvements to check command, including: * Emit warnings instead of errors when not running the latest version * Add retries if control plane health check fails initially * Run all pre-install RBAC checks, instead of stopping at first failure * Fixed an issue with the `--registry` install flag not accepting hosts with ports * Added an `--output` stat flag, for printing stats as JSON * Updated the `top` table to set column widths dynamically * Added a `--single-namespace` install flag for installing the control plane with Role permissions instead of ClusterRole permissions * Added a `--proxy-auto-inject` flag to the `install` command, allowing for auto-injection of sidecar containers * Added `--proxy-cpu` and `--proxy-memory` flags to the `install` and `inject` commands, giving the ability to configure CPU + Memory requests * Added a `--context` flag to specify the context to use to talk to the Kubernetes apiserver * The namespace in which Linkerd is installed is configurable via the `LINKERD_NAMESPACE` env var, in addition to the `--linkerd-namespace` flag * The wait time for the `check` and `dashboard` commands is configurable via the `--wait` flag * The `top` command now aggregates by HTTP method as well * Controller * Rename snake case fields to camel case in service profile spec * Controller components are now prefixed with `linkerd-` to prevent name collisions with existing resources * `linkerd install --disable-h2-upgrade` flag has been added to control automatic HTTP/2 upgrading * Fix auto injection issue on Kubernetes `v1.9.11` that would merge, rather than append, the proxy container into the application * Fixed a few issues with auto injection via the proxy-injector webhook: * Injected pods now execute the linkerd-init container last, to avoid rerouting requests during pod init * Original pod labels and annotations are preserved when auto-injecting * CLI health check now uses unified endpoint for data plane checks * Include Licence files in all Docker images * Proxy * The proxy's `tap` subsystem has been reimplemented to be more efficient and reliable * The proxy now supports route metadata in tap queries and events * A potential HTTP/2 window starvation bug has been fixed * Prometheus counters now wrap properly for values greater than 2^53 * Add controller client metrics, scoped under `control_` * Canonicalize outbound names via DNS for inbound profiles * Fix routing issue when a pod makes a request to itself * Only include `classification` label on `response_total` metric * Remove panic when failing to get remote address * Better logging in TCP connect error messages * Web UI * Top routes page, served at `/routes` * Route metrics are now available in the resource detail pages for services with configured profiles * Service profiles can be created and downloaded from the Web 
UI * Top Routes page, served at `/routes` * Fixed a smattering of small UI issues * Added a new Grafana dashboard for authorities * Revamped look and feel of the Linkerd dashboard by switching component libraries from antd to material-ui * Added a Help section in the sidebar containing useful links * Tap and Top pages * Added clear button to query form * Resource Detail pages * Limit number of resources shown in the graph * Resource Detail page * Better rendering of the dependency graph at the top of the page * Unmeshed sources are now populated in the Inbound traffic table * Sources and destinations are aligned in the popover * Tap and Top pages * Additional validation and polish for the form controls * The top table clears older results when a new top call is started * The top table now aggregates by HTTP method as well ## edge-18.12.2 Upgrade notes: The control plane components have been renamed as of the edge-18.12.1 release to reduce possible naming collisions. To upgrade an older installation, see the [Upgrade Guide](https://linkerd.io/2/upgrade/). * Controller * Rename snake case fields to camel case in service profile spec ## edge-18.12.1 Upgrade notes: The control plane components have been renamed in this release to reduce possible naming collisions. To upgrade an existing installation: * Install new CLI: `curl https://run.linkerd.io/install-edge | sh` * Install new control plane: `linkerd install | kubectl apply -f -` * Remove old deploys/cms: `kubectl -n linkerd get deploy,cm -oname | grep -v linkerd | xargs kubectl -n linkerd delete` * Re-inject your applications: `linkerd inject my-app.yml | kubectl apply -f -` * Remove old services: `kubectl -n linkerd get svc -oname | grep -v linkerd | xargs kubectl -n linkerd delete` For more information, see the [Upgrade Guide](https://linkerd.io/2/upgrade/). * CLI * **Improved** `linkerd routes` command displays per-route stats for _any resource_! * **New** Service profiles are now supported for external authorities! * **New** `linkerd routes --open-api` flag generates a service profile based on an OpenAPI specification (swagger) file * Web UI * **New** Top routes page, served at `/routes` * **New** Route metrics are now available in the resource detail pages for services with configured profiles * **New** Service profiles can be created and downloaded from the Web UI * Controller * **Improved** Controller components are now prefixed with `linkerd-` to prevent name collisions with existing resources * **New** `linkerd install --disable-h2-upgrade` flag has been added to control automatic HTTP/2 upgrading * Proxy * **Improved** The proxy's `tap` subsystem has been reimplemented to be more efficient and reliable * The proxy now supports route metadata in tap queries and events * **Fixed** A potential HTTP/2 window starvation bug has been fixed * **Fixed** Prometheus counters now wrap properly for values greater than 2^53 (thanks, @lucab!) ## edge-18.11.3 * CLI * **New** `linkerd routes` command displays per-route stats for services with service profiles * **Experimental** Add `--ha` flag to `linkerd install` command, for HA deployment of the control plane (thanks @benjdlambert!) 
* Web UI * **Experimental** Top Routes page, served at `/routes` * Controller * **Fixed** Fix auto injection issue on Kubernetes `v1.9.11` that would merge, rather than append, the proxy container into the application * Proxy * **Improved** Add controller client metrics, scoped under `control_` * **Improved** Canonicalize outbound names via DNS for inbound profiles ## edge-18.11.2 * CLI * **Improved** Update stat command to accept multiple stat targets * **Fixed** Fix authority stat filtering when the `--from` flag is present * Various improvements to check command, including: * Emit warnings instead of errors when not running the latest version * Add retries if control plane health check fails initially * Run all pre-install RBAC checks, instead of stopping at first failure * Proxy / Proxy-Init * **Fixed** Fix routing issue when a pod makes a request to itself (#1585) * Only include `classification` label on `response_total` metric ## edge-18.11.1 * Proxy * **Fixed** Remove panic when failing to get remote address * **Improved** Better logging in TCP connect error messages * Web UI * **Improved** Fixed a smattering of small UI issues ## edge-18.10.4 This release includes a major redesign of the web frontend to make use of the Material design system. Additional features that leverage the new design are coming soon! This release also includes the following changes: * CLI * **Fixed** Fixed an issue with the `--registry` install flag not accepting hosts with ports (thanks, @alenkacz!) * Web UI * **New** Added a new Grafana dashboard for authorities (thanks, @alpeb!) * **New** Revamped look and feel of the Linkerd dashboard by switching component libraries from antd to material-ui ## edge-18.10.3 * CLI * **New** Added an `--output` stat flag, for printing stats as JSON * **Improved** Updated the `top` table to set column widths dynamically * **Experimental** Added a `--single-namespace` install flag for installing the control plane with Role permissions instead of ClusterRole permissions * Controller * Fixed a few issues with auto injection via the proxy-injector webhook: * Injected pods now execute the linkerd-init container last, to avoid rerouting requests during pod init * Original pod labels and annotations are preserved when auto-injecting * Web UI * **New** Added a Help section in the sidebar containing useful links ## edge-18.10.2 This release brings major improvements to the CLI as described below, including support for auto-injecting deployments via a Kubernetes Admission Controller. Proxy auto-injection is **experimental**, and the implementation may change going forward. * CLI * **New** Added a `--proxy-auto-inject` flag to the `install` command, allowing for auto-injection of sidecar containers (Thanks @ihcsim!) * **Improved** Added `--proxy-cpu` and `--proxy-memory` flags to the `install` and `inject` commands, giving the ability to configure CPU + Memory requests (Thanks @benjdlambert!) * **Improved** Added a `--context` flag to specify the context to use to talk to the Kubernetes apiserver (Thanks @ffd2subroutine!) ## edge-18.10.1 * Web UI * **Improved** Tap and Top pages * Added clear button to query form * **Improved** Resource Detail pages * Limit number of resources shown in the graph * Controller * CLI health check now uses unified endpoint for data plane checks * Include Licence files in all Docker images Special thanks to @alenkacz for contributing to this release! 
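The `--proxy-auto-inject`, `--proxy-cpu`, `--proxy-memory`, and `--context` flags introduced in edge-18.10.2 above might be combined roughly like this; the context name and resource values are placeholders, and this is a sketch rather than a documented invocation:

```bash
# Install the control plane with the experimental auto-injector enabled and
# explicit proxy CPU/memory requests, against a specific kubeconfig context.
linkerd install \
  --context my-cluster \
  --proxy-auto-inject \
  --proxy-cpu 100m \
  --proxy-memory 64Mi | kubectl apply -f -
```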
## edge-18.9.3

* Web UI
  * **Improved** Resource Detail page
    * Better rendering of the dependency graph at the top of the page
    * Unmeshed sources are now populated in the Inbound traffic table
    * Sources and destinations are aligned in the popover
  * **Improved** Tap and Top pages
    * Additional validation and polish for the form controls
    * The top table clears older results when a new top call is started
    * The top table now aggregates by HTTP method as well
* CLI
  * **New** The namespace in which Linkerd is installed is configurable via the `LINKERD_NAMESPACE` env var, in addition to the `--linkerd-namespace` flag
  * **New** The wait time for the `check` and `dashboard` commands is configurable via the `--wait` flag
  * **Improved** The `top` command now aggregates by HTTP method as well

Special thanks to @rochacon, @fahrradflucht and @alenkacz for contributing to
this release!

## stable-2.0.0

## edge-18.9.2

* **New** _edge_ and _stable_ release channels
* Web UI
  * **Improved** Tap & Top UIs with better layout and linking
* CLI
  * **Improved** `check --pre` command verifies the caller has sufficient permissions to install Linkerd
  * **Improved** `check` command verifies that Prometheus has data for proxied pods
* Proxy
  * **Fixed** Updated the `hyper` crate dependency to correct HTTP/1.0 Keep-Alive behavior

## v18.9.1

* Web UI
  * **New** Default landing page provides namespace overview with expandable sections
  * **New** Breadcrumb navigation at the top of the dashboard
  * **Improved** Tap and Top pages
    * Table rendering performance improvements via throttling
    * Tables now link to resource detail pages
    * Tap an entire namespace when no resource is specified
    * Tap websocket errors provide more descriptive text
    * Consolidated source and destination columns
  * Misc UI updates
    * Metrics tables now include a small success rate chart
    * Improved latency formatting for latencies in seconds
    * Renamed upstream/downstream to inbound/outbound
    * Sidebar scrolls independently from main panel, scrollbars hidden when not needed
    * Removed social links from sidebar
* CLI
  * **New** `linkerd check` now validates Linkerd proxy versions and readiness
  * **New** `linkerd inject` now provides an injection status report, and warns when resources are not injectable
  * **New** `linkerd top` now has a `--hide-sources` flag, to hide the source column and collapse top results accordingly
* Control Plane
  * Updated Prometheus to v2.4.0, Grafana to 5.2.4

## v18.8.4

* Web UI
  * **Improved** Tap and Top now have a better sampling rate
  * **Fixed** Missing sidebar headings now appear

## v18.8.3

* Web UI
  * **Improved** Kubernetes resource navigation in the sidebar
  * **Improved** resource detail pages:
    * **New** live request view
    * **New** success rate graphs
* CLI
  * `tap` and `top` have been improved to sample up to 100 RPS
* Control plane
  * Injected proxy containers now have readiness and liveness probes enabled

Special thanks to @sourishkrout for contributing a web readability fix!
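To make the v18.9.1 CLI entries above concrete, a minimal sketch; the `deploy` argument to `top` is an assumption based on the kubectl-style resource arguments used elsewhere in these notes:

```bash
# Minimal sketch for the v18.9.1 CLI notes above; the `deploy` argument to
# `top` is an assumption.
linkerd check                       # now also validates proxy versions and readiness
linkerd top deploy --hide-sources   # hide the source column and collapse results accordingly
```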
## v18.8.2

* CLI
  * **New** `linkerd top` command has been added, displays live traffic stats
  * `linkerd check` has been updated with additional checks, now supports a `--pre` flag for running pre-install checks
  * `linkerd check` and `linkerd dashboard` now support a `--wait` flag that tells the CLI to wait for the control plane to become ready
  * `linkerd tap` now supports a `--output` flag to display output in a wide format that includes src and dst resources and namespaces
  * `linkerd stat` includes additional validation for command line inputs
  * All commands that talk to the Linkerd API now show better error messages when the control plane is unavailable
* Web UI
  * **New** individual resources can now be viewed on a resource detail page, which includes stats for the resource itself and its nearest neighbors
  * **Experimental** web-based Top interface accessible at `/top`, aggregates tap data in real time to display live traffic stats
  * The `/tap` page has multiple improvements, including displaying additional src/dst metadata, improved form controls, and better latency formatting
  * All resource tables have been updated to display meshed pod counts, as well as an icon linking to the resource's Grafana dashboard if it is meshed
  * The UI now shows more useful information when server errors are encountered
* Proxy
  * The `h2` crate fixed an HTTP/2 window management bug
  * The `rustls` crate fixed a bug that could improperly fail TLS streams
* Control Plane
  * The tap server now hydrates metadata for both sources and destinations

## v18.8.1

* Web UI
  * **New** Tap UI makes it possible to query & inspect requests from the browser!
* Proxy
  * **New** Automatic, transparent HTTP/2 multiplexing of HTTP/1 traffic reduces the cost of short-lived HTTP/1 connections
* Control Plane
  * **Improved** `linkerd inject` now supports injecting all resources in a folder
  * **Fixed** `linkerd tap` no longer crashes when there are many pods
  * **New** Prometheus now only scrapes proxies belonging to its own linkerd install
  * **Fixed** Prometheus metrics collection for clusters with >100 pods

Special thanks to @ihcsim for contributing the `inject` improvement!

## v18.7.3

Linkerd2 v18.7.3 completes the rebranding from Conduit to Linkerd2, and
improves overall performance and stability.

* Proxy
  * **Improved** CPU utilization by ~20%
* Web UI
  * **Experimental** `/tap` page now supports additional filters
* Control Plane
  * Updated all k8s.io dependencies to 1.11.1

## v18.7.2

Linkerd2 v18.7.2 introduces new stability features as we work toward
production readiness.

* Control Plane
  * **Breaking change** Injected pod labels have been renamed to be more consistent with Kubernetes; previously injected pods must be re-injected with the new version of the linkerd CLI in order to work with the updated control plane
  * The "ca-bundle-distributor" deployment has been renamed to "ca"
* Proxy
  * **Fixed** HTTP/1.1 connections were not properly reused, leading to elevated latencies and CPU load
  * **Fixed** The `process_cpu_seconds_total` metric was calculated incorrectly
* Web UI
  * **New** per-namespace application topology graph
  * **Experimental** web-based Tap interface accessible at `/tap`
  * Updated favicon to the Linkerd logo

## v18.7.1

Linkerd2 v18.7.1 is the first release of the Linkerd2 project, which was
formerly hosted at github.com/runconduit/conduit.
* Packaging
  * Introduce new date-based versioning scheme, `vYY.M.n`
  * Move all Docker images to `gcr.io/linkerd-io` repo
* User Interface
  * Update branding to reference Linkerd throughout
  * The CLI is now called `linkerd`
* Production Readiness
  * Fix issue with destination service sending back incomplete pod metadata
  * Fix high CPU usage during proxy shutdown
  * ClusterRoles are now unique per Linkerd install, allowing multiple instances to be installed in the same Kubernetes cluster

## v0.5.0

Conduit v0.5.0 introduces a new, experimental feature that automatically
enables Transport Layer Security between Conduit proxies to secure application
traffic. It also adds support for HTTP protocol upgrades, so applications that
use WebSockets can now benefit from Conduit.

* Security
  * **New** `conduit install --tls=optional` enables automatic, opportunistic TLS. See [the docs][auto-tls] for more info.
* Production Readiness
  * The proxy now transparently supports HTTP protocol upgrades to support, for instance, WebSockets.
  * The proxy now seamlessly forwards HTTP `CONNECT` streams.
  * Controller services are now configured with liveness and readiness probes.
* User Interface
  * `conduit stat` now supports a virtual `authority` resource that aggregates traffic by the `:authority` (or `Host`) header of an HTTP request.
  * `dashboard`, `stat`, and `tap` have been updated to describe TLS state for traffic.
  * `conduit tap` now has more detailed information, including the direction of each message (outbound or inbound).
  * `conduit stat` now more accurately records histograms for low-latency services.
  * `conduit dashboard` now includes error messages when a Conduit-enabled pod fails.
* Internals
  * Prometheus has been upgraded to v2.3.1.
  * A potential live-lock has been fixed in HTTP/2 servers.
  * `conduit tap` could crash due to a null-pointer access. This has been fixed.

[auto-tls]: docs/automatic-tls.md

## v0.4.4

Conduit v0.4.4 continues to improve production suitability and sets up
internals for the upcoming v0.5.0 release.

* Production Readiness
  * The destination service has been mostly rewritten to improve safety and correctness, especially during controller initialization.
  * Readiness and Liveness checks have been added for some controller components.
  * RBAC settings have been expanded so that Prometheus can access node-level metrics.
* User Interface
  * Ad blockers like uBlock prevented the Conduit dashboard from fetching API data. This has been fixed.
  * The UI now highlights pods that have failed to start a proxy.
* Internals
  * Various dependency upgrades, including Rust 1.26.2.
  * TLS testing continues to bear fruit, precipitating stability improvements to dependencies like Rustls.

Special thanks to @alenkacz for improving docker build times!

## v0.4.3

Conduit v0.4.3 continues progress towards production readiness. It features a
new latency-aware load balancer.

* Production Readiness
  * The proxy now uses a latency-aware load balancer for outbound requests. This implementation is based on Finagle's Peak-EWMA balancer, which has been proven to significantly reduce tail latencies. This is the same load balancing strategy used by Linkerd.
* User Interface
  * `conduit stat` is now slightly more predictable in the way it outputs things, especially for commands like `watch conduit stat all --all-namespaces`.
  * Failed and completed pods are no longer shown in stat summary results.
* Internals
  * The proxy now supports some TLS configuration, though these features remain disabled and undocumented pending further testing and instrumentation.

Special thanks to @ihcsim for contributing his first PR to the project and to
@roanta for discussing the Peak-EWMA load balancing algorithm with us.

## v0.4.2

Conduit v0.4.2 is a major step towards production readiness. It features a
wide array of fixes and improvements for long-running proxies, and several new
telemetry features. It also lays the groundwork for upcoming releases that
introduce mutual TLS everywhere.

* Production Readiness
  * The proxy now drops metrics that do not update for 10 minutes, preventing unbounded memory growth for long-running processes.
  * The proxy now constrains the number of services that a node can route to simultaneously (default: 100). This protects long-running proxies from consuming unbounded resources by tearing down the longest-idle clients when the capacity is reached.
  * The proxy now properly honors HTTP/2 request cancellation.
  * The proxy could incorrectly handle requests in the face of some connection errors. This has been fixed.
  * The proxy now honors DNS TTLs.
  * `conduit inject` now works with `statefulset` resources.
* Telemetry
  * **New** `conduit stat` now supports the `all` Kubernetes resource, which shows traffic stats for all Kubernetes resources in a namespace.
  * **New** The Conduit web UI has been reorganized to provide namespace overviews.
  * **Fix** a bug in Tap that prevented the proxy from simultaneously satisfying more than one Tap request.
  * **Fix** a bug that could prevent stats from being reported for some TCP streams in failure conditions.
  * The proxy now measures response latency as time-to-first-byte.
* Internals
  * The proxy now supports user-friendly time values (e.g. `10s`) from environment configuration.
  * The control plane now uses the Kubernetes 1.10.2 client.
  * Much richer proxy debug logging, including socket and stream metadata.
  * The proxy internals have been changed substantially in preparation for TLS support.

Special thanks to @carllhw, @kichristensen, & @sfroment for contributing to
this release!

### Upgrading from v0.4.1

When upgrading from v0.4.1, we suggest that the control plane be upgraded to
v0.4.2 before injecting application pods to use v0.4.2 proxies.

## v0.4.1

Conduit 0.4.1 builds on the telemetry work from 0.4.0, providing rich,
Kubernetes-aware observability and debugging.

* Web UI
  * **New** Automatically-configured Grafana dashboards for Services, Pods, ReplicationControllers, and Conduit mesh health.
  * **New** `conduit dashboard` Pod and ReplicationController views.
* Command-line interface
  * **Breaking change** `conduit tap` now operates on most Kubernetes resources.
  * `conduit stat` and `conduit tap` now both support kubectl-style resource strings (`deploy`, `deploy/web`, and `deploy web`), specifically:
    * `namespaces`
    * `deployments`
    * `replicationcontrollers`
    * `services`
    * `pods`
* Telemetry
  * **New** Tap support for filtering by and exporting destination metadata. Now you can sample requests from A to B, where A and B are any resource or group of resources.
  * **New** TCP-level stats, including connection counts and durations, and throughput, wired through to Grafana dashboards.
* Service Discovery
  * The proxy now uses the [trust-dns] DNS resolver. This fixes a number of DNS correctness issues.
  * The destination service could sometimes return incorrect, stale labels for an endpoint. This has been fixed!
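As a brief illustration of the kubectl-style resource strings described in the v0.4.1 notes above, a minimal sketch; `web` is a placeholder deployment name:

```bash
# Minimal sketch of the equivalent resource-string forms listed in the v0.4.1
# notes; "web" is a placeholder deployment name.
conduit stat deployments    # a resource type
conduit stat deploy/web     # type/name shorthand
conduit tap deploy web      # type and name as separate arguments
```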
[trust-dns]: https://github.com/bluejekyll/trust-dns

## v0.4.0

Conduit 0.4.0 overhauls Conduit's telemetry system and improves service
discovery reliability.

* Web UI
  * **New** automatically-configured Grafana dashboards for all Deployments.
* Command-line interface
  * `conduit stat` has been completely rewritten to accept arguments like `kubectl get`. The `--to` and `--from` filters can be used to filter traffic by destination and source, respectively. `conduit stat` currently can operate on `Namespace` and `Deployment` Kubernetes resources. More resource types will be added in the next release!
* Proxy (data plane)
  * **New** Prometheus-formatted metrics are now exposed on `:4191/metrics`, including rich destination labeling for outbound HTTP requests. The proxy no longer pushes metrics to the control plane.
  * The proxy now handles `SIGINT` or `SIGTERM`, gracefully draining requests until all are complete or `SIGQUIT` is received.
  * SMTP and MySQL (ports 25 and 3306) are now treated as opaque TCP by default. You should no longer have to specify `--skip-outbound-ports` to communicate with such services.
  * When the proxy reconnected to the controller, it could continue to send requests to old endpoints. Now, when the proxy reconnects to the controller, it properly removes invalid endpoints.
  * A bug impacting some HTTP/2 reset scenarios has been fixed.
* Service Discovery
  * Previously, the proxy failed to resolve some domain names that could be misinterpreted as a Kubernetes Service name. This has been fixed by extending the _Destination_ API with a negative acknowledgement response.
* Control Plane
  * The _Telemetry_ service and associated APIs have been removed.
* Documentation
  * Updated [Roadmap](doc/roadmap.md)

Special thanks to @ahume, @alenkacz, & @xiaods for contributing to this
release!

### Upgrading from v0.3.1

When upgrading from v0.3.1, it's important to upgrade proxies before upgrading
the controller. As you upgrade proxies, the controller will lose visibility
into some data plane stats. Once all proxies are updated,
`conduit install | kubectl apply -f -` can be run to upgrade the controller
without causing any data plane disruptions. Once the controller has been
restarted, traffic stats should become available.

## v0.3.1

Conduit 0.3.1 improves Conduit's resilience and transparency.

* Proxy (data plane)
  * The proxy now makes fewer changes to requests and responses being proxied. In particular, requests and responses without bodies or with empty bodies are better supported.
  * HTTP/1 requests with different `Host` header fields are no longer sent on the same HTTP/1 connection even when those hostnames resolve to the same IP address.
  * A connection leak during proxying of non-HTTP TCP connections was fixed.
  * The proxy now handles unavailable services more gracefully by timing out while waiting for an endpoint to become available for the service.
* Command-line interface
  * `$KUBECONFIG` with multiple paths is now supported. (PR #482 by @hypnoglow)
  * `conduit check` now checks for the availability of a Conduit update. (PR #460 by @ahume)
* Service Discovery
  * Kubernetes services with type `ExternalName` are now supported.
* Control Plane
  * The proxy is injected into the control plane during installation to improve the control plane's resilience and to "dogfood" the proxy.
  * The control plane is now more resilient to networking failures.
* Documentation
  * The markdown source for the documentation published at conduit.io is now open source.

## v0.3.0

Conduit 0.3 focused heavily on production hardening of Conduit's telemetry
system. Conduit 0.3 should "just work" for most apps on Kubernetes 1.8 or 1.9
without configuration, and should support Kubernetes clusters with hundreds of
services, thousands of instances, and hundreds of RPS per instance.

With this release, Conduit also moves from _experimental_ to _alpha_, meaning
that we're ready for some serious testing and vetting from you. As part of
this, we've published the [Conduit roadmap](https://conduit.io/roadmap/), and
we've also launched some new mailing lists:
[conduit-users](https://groups.google.com/forum/#!forum/conduit-users),
[conduit-dev](https://groups.google.com/forum/#!forum/conduit-dev), and
[conduit-announce](https://groups.google.com/forum/#!forum/conduit-announce).

* CLI
  * CLI commands no longer depend on `kubectl`
  * `conduit dashboard` now runs on an ephemeral port, removing port 8001 conflicts
  * `conduit inject` now skips pods with `hostNetwork=true`
  * CLI commands now have friendlier error messages, and support a `--verbose` flag for debugging
* Web UI
  * All displayed metrics are now instantaneous snapshots rather than aggregated over 10 minutes
  * The sidebar can now be collapsed
  * UX refinements and bug fixes
* Conduit proxy (data plane)
  * Proxy does load-aware (P2C + least-loaded) L7 balancing for HTTP
  * Proxy can now route to external DNS names
  * Proxy now properly sheds load in some pathological cases when it cannot route
* Telemetry system
  * Many optimizations and refinements to support scale goals
  * Per-path and per-pod metrics have been removed temporarily to improve scalability and stability; they will be reintroduced in Conduit 0.4 (#405)
* Build improvements
  * The Conduit docker images are now much smaller.
  * Dockerfiles have been changed to leverage caching, improving build times substantially

Known Issues:

* Some DNS lookups to external domains fail (#62, #155, #392)
* Applications that use WebSockets, HTTP tunneling/proxying, or protocols such as MySQL and SMTP, require additional configuration (#339)

## v0.2.0

This is a big milestone! With this release, Conduit adds support for HTTP/1.x
and raw TCP traffic, meaning it should "just work" for most applications that
are running on Kubernetes without additional configuration.

* Data plane
  * Conduit now transparently proxies all TCP traffic, including HTTP/1.x and HTTP/2. (See caveats below.)
* Command-line interface
  * Improved error handling for the `tap` command
  * `tap` also now works with HTTP/1.x traffic
* Dashboard
  * Minor UI appearance tweaks
  * Deployments now searchable from the dashboard sidebar

Caveats:

* Conduit will automatically work for most protocols. However, applications that use WebSockets, HTTP tunneling/proxying, or protocols such as MySQL and SMTP will require some additional configuration. See the [documentation](https://conduit.io/adding-your-service/#protocol-support) for details.
* Conduit doesn't yet support external DNS lookups. These will be addressed in an upcoming release.
* There are known issues with Conduit's telemetry pipeline that prevent it from scaling beyond a few nodes. These will be addressed in an upcoming release.
* Conduit is still in alpha! Please help us by [filing issues and contributing pull requests](https://github.com/runconduit/conduit/issues/new).

## v0.1.3

* This is a minor bugfix release for some web dashboard UI elements that were not rendering correctly.
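To make the v0.3.0 CLI entries above concrete, a minimal sketch; the manifest name is a placeholder:

```bash
# Minimal sketch for the v0.3.0 CLI notes above; my-app.yml is a placeholder.
conduit dashboard --verbose                       # binds an ephemeral local port instead of 8001
conduit inject my-app.yml | kubectl apply -f -    # pods with hostNetwork=true are skipped
```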
## v0.1.2

Conduit 0.1.2 continues down the path of increasing usability and improving
debugging and introspection of the service mesh itself.

* Conduit CLI
  * New `conduit check` command reports on the health of your Conduit installation.
  * New `conduit completion` command provides shell completion.
* Dashboard
  * Added per-path metrics to the deployment detail pages.
  * Added animations to line graphs indicating server activity.
  * More descriptive CSS variable names. (Thanks @natemurthy!)
  * A variety of other minor UI bugfixes and improvements
* Fixes
  * Fixed Prometheus config when using RBAC. (Thanks @FaKod!)
  * Fixed `tap` failure when pods do not belong to a deployment. (Thanks @FaKod!)

## v0.1.1

Conduit 0.1.1 is focused on making it easier to get started with Conduit.

* Conduit can now be installed on Kubernetes clusters that use RBAC.
* The `conduit inject` command now supports a `--skip-outbound-ports` flag that directs Conduit to bypass proxying for specific outbound ports, making Conduit easier to use with protocols other than gRPC and HTTP/2.
* The `conduit tap` command output has been reformatted to be line-oriented, making it easier to parse with common UNIX command line utilities.
* Conduit now supports routing of non-fully qualified domain names.
* The web UI has improved support for large deployments and deployments that don't have any inbound/outbound traffic.

## v0.1.0

Conduit 0.1.0 is the first public release of Conduit.

* This release supports services that communicate via gRPC only. Non-gRPC HTTP/2 services should work. More complete HTTP support, including HTTP/1.0, HTTP/1.1, and non-gRPC HTTP/2, will be added in an upcoming release.
* Kubernetes 1.8.0 or later is required.
* kubectl 1.8.0 or later is required. `conduit dashboard` will not work with earlier versions of kubectl.
* When deploying to Minikube, Minikube 0.23 or 0.24.1 or later are required. Earlier versions will not work.
* This release has been tested using Google Kubernetes Engine and Minikube. Upcoming releases will be tested on additional providers too.
* Configuration settings and protocols are not stable yet.
* Services written in Go must use grpc-go 1.3 or later to avoid [grpc-go bug #1120](https://github.com/grpc/grpc-go/issues/1120).
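As a hedged illustration of the `--skip-outbound-ports` flag described in v0.1.1 above; the comma-separated port list and the manifest name are assumptions, and ports 25 and 3306 (SMTP and MySQL) are the examples used elsewhere in these notes:

```bash
# Hedged sketch of the v0.1.1 --skip-outbound-ports flag; the comma-separated
# port list and the manifest name are assumptions. Ports 25 and 3306 (SMTP and
# MySQL) are the example protocols called out elsewhere in these notes.
conduit inject --skip-outbound-ports 25,3306 my-app.yml | kubectl apply -f -
```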