
improve how CRs and k8s work with CNI plugins and cgroup drivers #15463

Merged

Conversation

prezha
Contributor

@prezha prezha commented Dec 5, 2022

<update>

initial goal

reduce flakiness of Docker_Linux tests

initial analysis & conclusion

tests are actually mostly ok; we have some real underlying issues

redefined goal

find & fix the issues

outcome

  • Docker_Linux tests: all pass/green
  • KVM_Linux tests: all pass/green
  • Docker_Cloud_Shell tests: all pass/green
  • Hyperkit_macOS tests: all pass/green
  • Docker_Linux_containerd tests: all but one pass (TestPreload)
  • KVM_Linux_containerd tests: all but two pass (TestPreload and TestStoppedBinaryUpgrade/Upgrade - fixed but not yet committed)

a bit more detail & context

i've spent some time trying to figure out why all the tests are consistently passing on my machine but then just failing when run on ci/jenkins

spoiler: as it turns out, it all mostly boils down to inconsistent cgroups across the "stack" and also how CNIs and CRs play (or not!) together

part of the conclusion from that "investigation" was the completely unexpected and surprising (to me, at least) discovery that we have ci/jenkins agents with different configurations! here the difference is not just in ServerVersion but also in the CgroupDriver used inside the os (ie, how the agent machine was booted) - examples:

Docker_Linux-26999:

I1215 02:07:39.609753   11500 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-12-15 02:07:39.535647944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a05d175400b1145e5e6a735a6710579d181e7fb0 Expected:a05d175400b1145e5e6a735a6710579d181e7fb0} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] 
ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}

Docker_Linux-27065:

I1219 20:54:45.962607   10285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-12-19 20:54:45.130424738 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:78f51771157abb6c9ed224c22013cdf09962315d Expected:78f51771157abb6c9ed224c22013cdf09962315d} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] 
ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}

while i couldn't change/affect the ServerVersion, i went on and made minikube auto-adaptable to the underlying cgroup driver(s), which helped eliminate some of the "flakiness" and hopefully also made minikube more "robust" across the "flavours" of os/settings our users have
this also sounds like good timing, since k8s now recommends systemd as the default cgroup driver, so we shouldn't have much more to change going forward
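the auto-detection idea can be sketched roughly like this - a minimal illustration of the behaviour described above, not minikube's actual implementation (function names are made up):

```python
def pick_cgroup_driver(goos: str, init_is_systemd: bool) -> str:
    """Illustrative only: choose the cgroup driver the way described above --
    auto-detect on linux, default to cgroupfs on non-linux."""
    if goos != "linux":
        return "cgroupfs"  # non-linux default, per the description above
    # on linux, follow the host's init system: k8s recommends the systemd
    # cgroup driver whenever systemd is the init/cgroup manager
    return "systemd" if init_is_systemd else "cgroupfs"


def host_init_is_systemd(proc_root: str = "/proc") -> bool:
    """Best-effort check of what PID 1 is (hypothetical heuristic)."""
    try:
        with open(f"{proc_root}/1/comm") as f:
            return f.read().strip() == "systemd"
    except OSError:
        return False


print(pick_cgroup_driver("linux", True))    # systemd
print(pick_cgroup_driver("darwin", False))  # cgroupfs
```

the real change also has to propagate the detected driver consistently to the container runtime and kubelet configs, so the whole "stack" agrees.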

code fixes and improvements (candidates for individual PRs - next steps)

  • cgroup(s) driver: auto-detect on linux (defaulting to cgroupfs/v1 for non-linux) and auto-configure all components accordingly
  • CNIs and CRs: refactored (again!) how those work together and also removed the need for a custom /etc/cni/net.mk dir for CRs and CNIs (note: that dir should be removed from our "distro", as it confuses containerd into thinking it's the default one to look into; until then, there's a "hack" that removes it)
  • container runtimes: ensured containerd is also properly configured when docker is selected as container runtime (and bound to it)
  • network - cni/localhost: automatically fix the missing name and update to the now-expected version (v1.0.0)
  • network - cni/bridge: automatically mask all default bridges when cni is used and auto-fix subnet to use DefaultPodCIDR and appropriate gateway (instead of default 10.8x.0.0) otherwise - when default bridge should be used
  • network: improved detection and reservation of free subnet, so it works across processes (and not just locally) - reduced possibility of collisions and thus the time needed to pick actually free subnet (also avoid giving up after 5 failed attempts)
  • network - cni/flannel: updated to the latest v1.1.2/v0.20.2 and moved manifest out from "code" to embedding separate yaml manifest file
  • network - cni/calico: updated to the latest v3.24.5
  • general: prevent race condition over multiple minikube instances rewriting shared "ca-certs" using locks, that was causing some auth issues before
  • network: made hair-pin work (now configurable in kubelet config) - also for tests
  • general: made kubelet respect a custom runtimeRequestTimeout, so, eg, it doesn't CrashLoopBackOff on pulling larger images after the "hardcoded" default of 5mins (now configurable in kubelet config; note: "runtime-request-timeout" is not supported as a kubelet param anymore, so i moved it to config)
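several of the kubelet-related items above (cgroup driver, hairpin mode, runtimeRequestTimeout) are fields of the kubelet config file rather than flags; an illustrative fragment (the exact values here are examples, not necessarily what this PR sets):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd        # match the auto-detected driver of the os/CR
hairpinMode: hairpin-veth    # make hairpin traffic work
runtimeRequestTimeout: 15m   # allow longer runtime requests (eg, larger image pulls)
```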

hacks (that should be replaced with proper "distribution" updates!)

used to eliminate some of the "known issues" with the upstreams:

  • cri-docker: use v0.2.6
  • containerd: use v1.6.14
  • containerd: change systemd service to use KillMode=process (in line with all other container runtimes) so it doesn't restart containers when the service itself is restarted
  • remove '/etc/cni/net.mk' dir
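the containerd KillMode hack above amounts to a systemd unit override along these lines (the drop-in path and file name are illustrative):

```ini
# /etc/systemd/system/containerd.service.d/10-killmode.conf
[Service]
# only kill the main containerd process on service stop/restart,
# leaving running containers alone (in line with the other CRs)
KillMode=process
```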

a number of other minor tweaks, additions and fixes

eg, timeouts, juju packages updates, comments, docs, spellings, etc.

tests fixes and improvements

  • TestStartStop: this one was a pain to discover - we used 192.168.111.111/16 as kubeadm.pod-network-cidr, which overlapped with all other "standard" network subnets we use; that is fixed now
  • fixed TestJSONOutput when unexpected but still valid cases of out-of-order events occur (due to goroutines finishing at different times)
  • updated "netcat" deployment to use the latest dnsutils/agnhost image v2.40
  • GCPAuth addon test skipped until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved;
    => here, the issue actually might be in os.Setenv("MOCK_GOOGLE_TOKEN", "true") and we just need to address that - https://github.com/GoogleContainerTools/gcp-auth-webhook#gcp-auth-webhook:

Setting the environment variable MOCK_GOOGLE_TOKEN to true will prevent using the google application credentials to fetch the token used for the image pull secret. Instead the token will be mocked.

  • ensure tests have unique but still short "random" names (previously, it was based on the second they've started, which could lead to potential conflicts)
  • added a number of additional debug logs collected for TestNetworkPlugins (can be removed/suspended now)
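the TestStartStop subnet clash above is easy to reproduce: with a /16 mask, 192.168.111.111/16 really denotes the whole 192.168.0.0/16 range, which swallows the 192.168.x.0/24 subnets handed out to other profiles (the exact /24 below is just an example in the style of the logs):

```python
import ipaddress

# what 192.168.111.111/16 actually denotes once the mask is applied
pod_cidr = ipaddress.ip_network("192.168.111.111/16", strict=False)
print(pod_cidr)  # 192.168.0.0/16

# example subnets in the ranges minikube commonly hands out
kvm_subnet = ipaddress.ip_network("192.168.76.0/24")
docker_subnet = ipaddress.ip_network("172.17.0.0/16")

print(pod_cidr.overlaps(kvm_subnet))     # True: collision
print(pod_cidr.overlaps(docker_subnet))  # False
```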

general notes

  • optimising the code flow resulted in an overall reduction of tests' duration but
  • in the current pr, in addition to the hacks above (which take extra time to download and replace binaries), there are also some additional "waits" i've implemented to force staging of phases and make debugging easier: eg, a new "Ready" method for CNIs that only continues once they are ready, and WaitForPods now waits for pods to become Ready, not just Running. that's the reason why we see "longer" times for 'minikube start' and 'enable ingress'; those extra "waits" should not be necessary in "production" after the other fixes i've made, though we can still improve how we check/wait for the health of different resources (but that's another topic)

</update>


the goal of this pr is to see if we can reduce the rate of errors and flakes with the TestNetworkPlugins test group with the linux+docker combo

locally, these tests all pass (TestNetworkPlugins-linux+docker.log), and if ci/cd tests show similar results, we might want to break these down into separate PRs

key points:

  • starting from scratch (ie, no images cached locally) - avoid race condition overwriting minikube's certs (should fix the relevant flake with other tests as well)
    • unauthorised (as a consequence of the previous point) is a non-retryable error - fail fast w/o retrying
      example:
...
I1204 12:52:01.424956 2504034 certs.go:54] Setting up /home/prezha/.minikube/profiles/custom-flannel-124934 for IP: 172.17.0.2
I1204 12:52:01.424999 2504034 certs.go:187] generating minikubeCA CA: /home/prezha/.minikube/ca.key
I1204 12:52:01.543493 2504034 crypto.go:156] Writing cert to /home/prezha/.minikube/ca.crt ...
...
I1204 12:52:01.435619 2504033 certs.go:54] Setting up /home/prezha/.minikube/profiles/bridge-124934 for IP: 172.17.0.5
I1204 12:52:01.435645 2504033 certs.go:187] generating minikubeCA CA: /home/prezha/.minikube/ca.key
I1204 12:52:01.647799 2504033 crypto.go:156] Writing cert to /home/prezha/.minikube/ca.crt ...
...
I1204 12:52:01.443825 2504037 certs.go:54] Setting up /home/prezha/.minikube/profiles/flannel-124934 for IP: 192.168.76.2
I1204 12:52:01.443859 2504037 certs.go:187] generating minikubeCA CA: /home/prezha/.minikube/ca.key
I1204 12:52:01.781017 2504037 crypto.go:156] Writing cert to /home/prezha/.minikube/ca.crt ...
...
I1204 12:52:01.444867 2504029 certs.go:54] Setting up /home/prezha/.minikube/profiles/auto-124934 for IP: 192.168.67.2
I1204 12:52:01.444891 2504029 certs.go:187] generating minikubeCA CA: /home/prezha/.minikube/ca.key
I1204 12:52:01.521760 2504029 crypto.go:156] Writing cert to /home/prezha/.minikube/ca.crt ...
...

just to fail afterwards:

...
W1204 12:54:30.228254 2504033 out.go:239] X Exiting due to GUEST_START: wait 10m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "bridge-124934": Unauthorized
...
W1204 12:54:33.471702 2504037 out.go:239] X Exiting due to GUEST_START: wait 10m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "flannel-124934": Unauthorized
...
W1204 12:54:33.496623 2504029 out.go:239] X Exiting due to GUEST_START: wait 10m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "auto-124934": Unauthorized
...
W1204 12:54:34.868234 2504030 out.go:239] X Exiting due to GUEST_START: wait 10m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "kindnet-124934": Unauthorized
...
Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "..." network for pod "...": networkPlugin cni failed to set up pod "..." network: missing network name:,

and

failed to clean up sandbox container "..." network for pod "...": networkPlugin cni failed to teardown pod "..." network: missing network name]
  • kubenet also needs the cni (eg, bridge) to support hairpin mode
  • updated calico to the latest version
  • updated flannel to the latest version and extracted k8s manifests from code to a separate file that's then embedded
    • solved: "iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory" => enabled TestNetworkPlugins for docker + flannel
  • increased download timeout for k8s binaries
  • increased tests' memory and wait timeout to avoid weird issues with constrained resources
  • updated juju packages to the latest version - ours are ~3 years old
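the ca-certs race fix from the first key point can be sketched like this - a minimal illustration with made-up names, not minikube's actual code: hold an exclusive file lock around the shared CA, so only the first of several concurrent `minikube start` processes generates it and the rest reuse it:

```python
import fcntl
import os


def ensure_shared_ca(ca_dir: str, generate) -> str:
    """Generate the shared CA exactly once across concurrent processes.

    `generate` is a callback that writes ca.key/ca.crt into ca_dir; it only
    runs for the process holding the lock while the files don't exist yet.
    """
    os.makedirs(ca_dir, exist_ok=True)
    lock_path = os.path.join(ca_dir, ".ca.lock")
    with open(lock_path, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # blocks until any other holder is done
        try:
            ca_key = os.path.join(ca_dir, "ca.key")
            if not os.path.exists(ca_key):
                generate(ca_dir)  # first process in wins; the rest reuse
            return ca_key
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)


# usage sketch
import tempfile

calls = []
def fake_generate(d):
    calls.append(d)
    open(os.path.join(d, "ca.key"), "w").close()

tmp = tempfile.mkdtemp()
ensure_shared_ca(tmp, fake_generate)
ensure_shared_ca(tmp, fake_generate)
print(len(calls))  # 1: the second call found the CA already present
```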

@k8s-ci-robot
Contributor

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Dec 5, 2022
@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Dec 5, 2022
@prezha
Contributor Author

prezha commented Dec 5, 2022

/ok-to-test

@k8s-ci-robot k8s-ci-robot added the ok-to-test Indicates a non-member PR verified by an org member that is safe to test. label Dec 5, 2022
@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 54.1s    | 54.4s               |
| enable ingress | 27.3s    | 26.4s               |
+----------------+----------+---------------------+

Times for minikube start: 54.5s 53.7s 54.7s 54.0s 53.5s
Times for minikube (PR 15463) start: 55.2s 54.8s 55.6s 53.2s 53.1s

Times for minikube ingress: 28.1s 27.2s 24.2s 28.2s 28.6s
Times for minikube (PR 15463) ingress: 24.2s 27.7s 24.1s 28.6s 27.6s

docker driver with docker runtime
error collecting results for docker driver: timing run 0 with minikube: timing cmd: [out/minikube addons enable ingress]: waiting for minikube: exit status 10
docker driver with containerd runtime
error downloading artifacts: artifact download start: exit status 90

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Docker_Linux_containerd TestNetworkPlugins/group/custom-flannel/Start (gopogh) n/a
Docker_Linux_containerd TestNetworkPlugins/group/flannel/Start (gopogh) n/a
Docker_Linux TestNetworkPlugins/group/custom-flannel/DNS (gopogh) n/a
Docker_Linux TestNetworkPlugins/group/flannel/Start (gopogh) n/a
Docker_Windows TestMultiNode/serial/StartAfterStop (gopogh) 0.00 (chart)
Docker_Windows TestPause/serial/Pause (gopogh) 0.00 (chart)
KVM_Linux_containerd TestFunctional/parallel/ConfigCmd (gopogh) 0.64 (chart)
Hyper-V_Windows TestErrorSpam/setup (gopogh) 0.68 (chart)
Hyperkit_macOS TestNoKubernetes/serial/StartWithK8s (gopogh) 1.34 (chart)
Docker_Windows TestNetworkPlugins/group/false/DNS (gopogh) 1.41 (chart)
Hyper-V_Windows TestNetworkPlugins/group/kubenet/NetCatPod (gopogh) 3.70 (chart)
Hyper-V_Windows TestPause/serial/SecondStartNoReconfiguration (gopogh) 19.18 (chart)
Docker_macOS TestStartStop/group/newest-cni/serial/Pause (gopogh) 28.89 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) 67.48 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/bridge/DNS (gopogh) 69.83 (chart)
Docker_Linux TestNetworkPlugins/group/calico/Start (gopogh) 71.11 (chart)
Docker_Linux TestNetworkPlugins/group/bridge/DNS (gopogh) 79.26 (chart)
Docker_Linux TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) 80.74 (chart)
Docker_Linux TestNetworkPlugins/group/false/DNS (gopogh) 80.74 (chart)
Docker_Linux TestNetworkPlugins/group/kubenet/DNS (gopogh) 83.70 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/calico/Start (gopogh) 96.43 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (gopogh) 97.93 (chart)
Docker_Windows TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) 98.59 (chart)
Docker_Windows TestNetworkPlugins/group/bridge/DNS (gopogh) 99.30 (chart)
Docker_Linux_containerd TestKubernetesUpgrade (gopogh) 100.00 (chart)
Docker_Linux_containerd TestPreload (gopogh) 100.00 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (gopogh) 100.00 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressAddons (gopogh) 100.00 (chart)
Docker_macOS TestIngressAddonLegacy/StartLegacyK8sCluster (gopogh) 100.00 (chart)
Docker_macOS TestKubernetesUpgrade (gopogh) 100.00 (chart)
More tests... Continued...

Too many tests failed - See test logs for more details.

To see the flake rates of all tests by environment, click here.

@medyagh
Member

medyagh commented Dec 7, 2022

/ok-to-test

I still see the failures... on Docker_linux

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 54.5s    | 54.3s               |
| enable ingress | 26.1s    | 26.3s               |
+----------------+----------+---------------------+

Times for minikube ingress: 25.6s 24.1s 27.1s 25.7s 28.1s
Times for minikube (PR 15463) ingress: 23.6s 25.2s 28.7s 28.7s 25.1s

Times for minikube start: 56.0s 55.5s 54.1s 53.4s 53.7s
Times for minikube (PR 15463) start: 55.4s 53.9s 54.3s 54.4s 53.6s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 25.9s    | 26.1s               |
| enable ingress | 20.6s    | 20.9s               |
+----------------+----------+---------------------+

Times for minikube start: 27.5s 25.8s 25.5s 25.2s 25.7s
Times for minikube (PR 15463) start: 26.1s 28.5s 24.9s 24.9s 26.0s

Times for minikube ingress: 21.0s 19.9s 21.5s 19.9s 20.9s
Times for minikube (PR 15463) ingress: 18.9s 22.9s 21.5s 19.4s 22.0s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 22.3s    | 24.1s               |
| enable ingress | 29.4s    | 26.4s               |
+----------------+----------+---------------------+

Times for minikube start: 21.7s 21.6s 24.2s 22.7s 21.1s
Times for minikube (PR 15463) start: 21.7s 22.5s 32.4s 22.3s 21.8s

Times for minikube ingress: 26.4s 43.4s 26.4s 24.4s 26.4s
Times for minikube (PR 15463) ingress: 26.4s 26.5s 26.4s 26.5s 26.4s

@prezha
Contributor Author

prezha commented Dec 7, 2022

/ok-to-test

I still see the failures... on Docker_linux

yep, i created a draft pr to test the initial fixes, and some other issues surfaced; should be further improved with the commit i just made - we'll see

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 55.6s    | 53.8s               |
| enable ingress | 27.6s    | 26.5s               |
+----------------+----------+---------------------+

Times for minikube ingress: 28.2s 27.7s 26.6s 23.7s 31.7s
Times for minikube (PR 15463) ingress: 28.7s 24.7s 24.2s 27.7s 27.2s

Times for minikube start: 54.0s 57.3s 54.9s 57.0s 54.7s
Times for minikube (PR 15463) start: 54.0s 53.9s 53.5s 53.5s 54.0s

docker driver with docker runtime

+-------------------+----------+---------------------+
|      COMMAND      | MINIKUBE | MINIKUBE (PR 15463) |
+-------------------+----------+---------------------+
| minikube start    | 26.0s    | 26.8s               |
| ⚠️  enable ingress | 19.7s    | 27.0s ⚠️             |
+-------------------+----------+---------------------+

Times for minikube start: 25.7s 26.6s 26.6s 24.7s 26.1s
Times for minikube (PR 15463) start: 25.8s 24.4s 28.9s 28.9s 25.8s

Times for minikube ingress: 19.4s 20.4s 18.4s 21.0s 19.0s
Times for minikube (PR 15463) ingress: 50.0s 21.5s 22.5s 20.4s 20.5s

docker driver with containerd runtime

+-------------------+----------+---------------------+
|      COMMAND      | MINIKUBE | MINIKUBE (PR 15463) |
+-------------------+----------+---------------------+
| minikube start    | 21.7s    | 22.8s               |
| ⚠️  enable ingress | 26.4s    | 40.1s ⚠️             |
+-------------------+----------+---------------------+

Times for minikube start: 22.3s 22.0s 21.5s 21.2s 21.2s
Times for minikube (PR 15463) start: 22.6s 24.8s 22.2s 22.2s 22.1s

Times for minikube ingress: 26.4s 26.4s 26.4s 26.4s 26.4s
Times for minikube (PR 15463) ingress: 32.4s 31.0s 26.9s 31.5s 79.0s

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Docker_Linux_containerd TestNetworkPlugins/group/custom-flannel/Start (gopogh) n/a
Docker_Linux_containerd TestNetworkPlugins/group/flannel/Start (gopogh) n/a
Docker_Linux TestNetworkPlugins/group/custom-flannel/Start (gopogh) n/a
Docker_Linux TestNetworkPlugins/group/flannel/DNS (gopogh) n/a
Docker_Windows TestFunctional/parallel/ServiceCmd (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/bridge/DNS (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/calico/Start (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/cilium/Start (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/kubenet/HairPin (gopogh) n/a
Hyperkit_macOS TestStartStop/group/no-preload/serial/DeployApp (gopogh) 0.00 (chart)
Hyperkit_macOS TestStartStop/group/no-preload/serial/EnableAddonWhileActive (gopogh) 0.00 (chart)
Hyperkit_macOS TestStartStop/group/no-preload/serial/FirstStart (gopogh) 0.00 (chart)
Docker_Linux_containerd TestStartStop/group/embed-certs/serial/DeployApp (gopogh) 33.93 (chart)
Docker_Linux_containerd TestStartStop/group/embed-certs/serial/FirstStart (gopogh) 33.93 (chart)
Docker_Linux_containerd TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (gopogh) 33.93 (chart)
Docker_Linux_containerd TestStartStop/group/old-k8s-version/serial/DeployApp (gopogh) 34.82 (chart)
Docker_Linux_containerd TestStartStop/group/old-k8s-version/serial/FirstStart (gopogh) 34.82 (chart)
Docker_Linux_containerd TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (gopogh) 34.82 (chart)
Docker_Linux_containerd TestStartStop/group/old-k8s-version/serial/SecondStart (gopogh) 35.96 (chart)
Docker_Linux_containerd TestStartStop/group/embed-certs/serial/SecondStart (gopogh) 36.21 (chart)
Docker_Linux_containerd TestStartStop/group/default-k8s-diff-port/serial/FirstStart (gopogh) 40.18 (chart)
Docker_Linux_containerd TestStartStop/group/default-k8s-diff-port/serial/SecondStart (gopogh) 40.18 (chart)
Docker_Linux_containerd TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (gopogh) 40.18 (chart)
Docker_Linux_containerd TestStartStop/group/default-k8s-diff-port/serial/DeployApp (gopogh) 42.24 (chart)
KVM_Linux TestMultiNode/serial/RestartMultiNode (gopogh) 42.48 (chart)
KVM_Linux TestPause/serial/SecondStartNoReconfiguration (gopogh) 42.48 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) 67.48 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/bridge/DNS (gopogh) 69.23 (chart)
Docker_Linux TestNetworkPlugins/group/false/DNS (gopogh) 80.58 (chart)
More tests... Continued...

Too many tests failed - See test logs for more details.

To see the flake rates of all tests by environment, click here.

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Docker_Linux_containerd TestNetworkPlugins/group/custom-flannel/Start (gopogh) n/a
Docker_Linux_containerd TestNetworkPlugins/group/flannel/DNS (gopogh) n/a
Docker_Windows TestFunctional/parallel/ServiceCmd (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/bridge/DNS (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/calico/Start (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/cilium/Start (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/kubenet/HairPin (gopogh) n/a
Hyperkit_macOS TestMultiNode/serial/DeployApp2Nodes (gopogh) 0.00 (chart)
Hyperkit_macOS TestMultiNode/serial/PingHostFrom2Pods (gopogh) 0.00 (chart)
KVM_Linux_containerd TestMultiNode/serial/DeployApp2Nodes (gopogh) 0.00 (chart)
KVM_Linux_containerd TestMultiNode/serial/PingHostFrom2Pods (gopogh) 0.00 (chart)
KVM_Linux TestAddons/Setup (gopogh) 0.00 (chart)
KVM_Linux TestErrorSpam/setup (gopogh) 0.00 (chart)
KVM_Linux TestMultiNode/serial/DeployApp2Nodes (gopogh) 0.00 (chart)
KVM_Linux TestMultiNode/serial/PingHostFrom2Pods (gopogh) 0.00 (chart)
none_Linux TestAddons/parallel/MetricsServer (gopogh) 0.00 (chart)
Hyperkit_macOS TestErrorSpam/setup (gopogh) 1.33 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/kindnet/DNS (gopogh) 4.46 (chart)
Docker_Cloud_Shell TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop (gopogh) 18.83 (chart)
Docker_Cloud_Shell TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop (gopogh) 19.35 (chart)
Docker_Cloud_Shell TestStartStop/group/cloud-shell/serial/SecondStart (gopogh) 19.35 (chart)
Docker_Cloud_Shell TestStartStop/group/cloud-shell/serial/Stop (gopogh) 19.35 (chart)
Docker_Cloud_Shell TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop (gopogh) 19.35 (chart)
Hyperkit_macOS TestPause/serial/SecondStartNoReconfiguration (gopogh) 22.15 (chart)
Docker_macOS TestMultiNode/serial/RestartKeepsNodes (gopogh) 60.69 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) 67.74 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/bridge/DNS (gopogh) 69.49 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/calico/Start (gopogh) 96.46 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (gopogh) 99.31 (chart)
More tests... Continued...

Too many tests failed - See test logs for more details.

To see the flake rates of all tests by environment, click here.

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 55.5s    | 54.3s               |
| enable ingress | 28.1s    | 26.5s               |
+----------------+----------+---------------------+

Times for minikube start: 54.9s 56.2s 53.7s 55.1s 57.4s
Times for minikube (PR 15463) start: 53.9s 55.2s 54.7s 54.3s 53.7s

Times for minikube ingress: 28.2s 27.1s 28.7s 28.6s 27.7s
Times for minikube (PR 15463) ingress: 28.2s 25.1s 28.2s 25.7s 25.2s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 26.9s    | 26.6s               |
| enable ingress | 32.5s    | 26.9s               |
+----------------+----------+---------------------+

Times for minikube ingress: 80.9s 20.0s 21.0s 19.5s 20.9s
Times for minikube (PR 15463) ingress: 20.5s 49.9s 19.5s 24.0s 20.5s

Times for minikube start: 28.3s 27.0s 25.7s 24.9s 28.8s
Times for minikube (PR 15463) start: 24.8s 27.9s 24.3s 27.8s 27.9s

docker driver with containerd runtime

+-------------------+----------+---------------------+
|      COMMAND      | MINIKUBE | MINIKUBE (PR 15463) |
+-------------------+----------+---------------------+
| minikube start    | 23.9s    | 22.4s               |
| ⚠️  enable ingress | 26.3s    | 53.9s ⚠️             |
+-------------------+----------+---------------------+

Times for minikube ingress: 26.5s 26.4s 25.9s 26.4s 26.4s
Times for minikube (PR 15463) ingress: 81.4s 46.4s 31.4s 30.9s 79.5s

Times for minikube start: 21.6s 20.9s 22.3s 32.2s 22.3s
Times for minikube (PR 15463) start: 24.4s 21.6s 21.8s 22.4s 22.0s

@prezha prezha force-pushed the fix-TestNetworkPlugins-Linux_Docker branch from ecd0bed to 394302c Compare December 7, 2022 12:36
@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 54.1s    | 54.6s               |
| enable ingress | 27.7s    | 26.8s               |
+----------------+----------+---------------------+

Times for minikube start: 53.4s 53.2s 54.4s 54.7s 54.8s
Times for minikube (PR 15463) start: 54.9s 54.2s 55.6s 54.0s 54.2s

Times for minikube ingress: 28.2s 28.2s 27.1s 27.2s 27.7s
Times for minikube (PR 15463) ingress: 25.1s 30.7s 25.2s 28.7s 24.2s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 25.6s    | 26.6s               |
| enable ingress | 33.0s    | 21.5s               |
+----------------+----------+---------------------+

Times for minikube (PR 15463) start: 27.6s 25.0s 28.3s 26.4s 25.7s
Times for minikube start: 25.4s 25.6s 25.7s 27.7s 23.8s

Times for minikube ingress: 21.9s 20.4s 22.0s 81.0s 19.5s
Times for minikube (PR 15463) ingress: 21.5s 20.4s 24.9s 20.4s 20.5s

docker driver with containerd runtime

+-------------------+----------+---------------------+
|      COMMAND      | MINIKUBE | MINIKUBE (PR 15463) |
+-------------------+----------+---------------------+
| minikube start    | 25.6s    | 25.3s               |
| ⚠️  enable ingress | 26.4s    | 38.9s ⚠️             |
+-------------------+----------+---------------------+

Times for minikube (PR 15463) ingress: 18.9s 32.5s 79.4s 31.4s 32.4s
Times for minikube ingress: 26.4s 26.4s 26.4s 26.5s 26.4s

Times for minikube start: 32.9s 23.8s 24.7s 22.4s 24.3s
Times for minikube (PR 15463) start: 21.1s 25.1s 22.9s 21.9s 35.4s

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Docker_Linux_containerd TestNetworkPlugins/group/custom-flannel/Start (gopogh) n/a
Docker_Linux_containerd TestNetworkPlugins/group/flannel/Start (gopogh) n/a
Docker_Linux TestNetworkPlugins/group/flannel/DNS (gopogh) n/a
Docker_Linux TestMissingContainerUpgrade (gopogh) 0.00 (chart)
Hyperkit_macOS TestMultiNode/serial/DeployApp2Nodes (gopogh) 0.00 (chart)
Hyperkit_macOS TestMultiNode/serial/PingHostFrom2Pods (gopogh) 0.00 (chart)
KVM_Linux_containerd TestMultiNode/serial/DeployApp2Nodes (gopogh) 0.00 (chart)
KVM_Linux_containerd TestMultiNode/serial/PingHostFrom2Pods (gopogh) 0.00 (chart)
KVM_Linux TestMultiNode/serial/DeployApp2Nodes (gopogh) 0.00 (chart)
KVM_Linux TestMultiNode/serial/PingHostFrom2Pods (gopogh) 0.00 (chart)
Docker_Linux_containerd TestStartStop/group/embed-certs/serial/DeployApp (gopogh) 34.78 (chart)
Docker_Linux_containerd TestStartStop/group/embed-certs/serial/FirstStart (gopogh) 34.78 (chart)
Docker_Linux_containerd TestStartStop/group/old-k8s-version/serial/DeployApp (gopogh) 35.65 (chart)
Docker_Linux_containerd TestStartStop/group/old-k8s-version/serial/FirstStart (gopogh) 35.65 (chart)
Docker_Linux_containerd TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (gopogh) 35.65 (chart)
Docker_Linux_containerd TestStartStop/group/old-k8s-version/serial/SecondStart (gopogh) 36.75 (chart)
Docker_Linux_containerd TestStartStop/group/embed-certs/serial/SecondStart (gopogh) 36.97 (chart)
Docker_Linux_containerd TestStartStop/group/default-k8s-diff-port/serial/FirstStart (gopogh) 40.87 (chart)
Docker_Linux_containerd TestStartStop/group/default-k8s-diff-port/serial/SecondStart (gopogh) 40.87 (chart)
Docker_Linux_containerd TestStartStop/group/default-k8s-diff-port/serial/DeployApp (gopogh) 42.86 (chart)
Docker_macOS TestMultiNode/serial/RestartKeepsNodes (gopogh) 61.49 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) 67.72 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/bridge/DNS (gopogh) 69.17 (chart)
Docker_Linux TestNetworkPlugins/group/false/DNS (gopogh) 80.42 (chart)
Docker_Linux TestNetworkPlugins/group/bridge/DNS (gopogh) 81.82 (chart)
Docker_Linux TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) 81.82 (chart)
Docker_Linux TestNetworkPlugins/group/kubenet/DNS (gopogh) 85.31 (chart)
Docker_Linux TestNetworkPlugins/group/calico/NetCatPod (gopogh) 92.86 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/calico/Start (gopogh) 96.52 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (gopogh) 99.32 (chart)
More tests... Continued...

Too many tests failed - See test logs for more details.

To see the flake rates of all tests by environment, click here.
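
The flake-rate percentages listed above can be read as the share of recent runs in which a test failed; the exact window gopogh uses is not shown here, so the formula below is an assumption for illustration only.

```python
# Hedged sketch: how a flake rate like those listed above could be computed.
# Assumption (not from the source): flake rate = percentage of recorded runs
# in which the test failed.
def flake_rate(results: list[bool]) -> float:
    """results: True for a failed run, False for a passing run."""
    if not results:
        return 0.0
    return 100.0 * sum(results) / len(results)

# e.g. 7 failures out of 20 recorded runs:
print(round(flake_rate([True] * 7 + [False] * 13), 2))  # 35.0
```

A rate of 100.00 therefore means the test failed in every recorded run, while n/a indicates no prior history to compute a rate from.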

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 54.3s    | 54.0s               |
| enable ingress | 26.6s    | 26.2s               |
+----------------+----------+---------------------+

Times for minikube start: 54.6s 54.5s 55.4s 52.9s 54.0s
Times for minikube (PR 15463) start: 53.6s 52.0s 53.5s 56.0s 54.8s

Times for minikube ingress: 28.1s 27.7s 27.6s 24.7s 25.1s
Times for minikube (PR 15463) ingress: 24.6s 24.1s 25.1s 29.7s 27.6s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 24.8s    | 25.9s               |
| enable ingress | 26.9s    | 26.4s               |
+----------------+----------+---------------------+

Times for minikube ingress: 48.9s 21.5s 21.4s 20.9s 21.9s
Times for minikube (PR 15463) ingress: 20.9s 20.0s 50.0s 19.9s 20.9s

Times for minikube start: 24.6s 25.2s 25.6s 24.1s 24.7s
Times for minikube (PR 15463) start: 25.2s 25.1s 25.7s 27.6s 25.9s

docker driver with containerd runtime

+-------------------+----------+---------------------+
|      COMMAND      | MINIKUBE | MINIKUBE (PR 15463) |
+-------------------+----------+---------------------+
| minikube start    | 24.4s    | 22.2s               |
| ⚠️  enable ingress | 26.3s    | 50.9s ⚠️             |
+-------------------+----------+---------------------+

Times for minikube start: 25.2s 32.7s 21.6s 21.4s 21.1s
Times for minikube (PR 15463) start: 24.4s 22.1s 22.5s 20.8s 21.5s

Times for minikube ingress: 26.4s 26.4s 26.4s 26.4s 25.9s
Times for minikube (PR 15463) ingress: 31.4s 79.4s 32.4s 79.4s 31.5s

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Docker_Linux_containerd TestNetworkPlugins/group/custom-flannel/Start (gopogh) n/a
Docker_Linux_containerd TestNetworkPlugins/group/flannel/Start (gopogh) n/a
Docker_Linux TestNetworkPlugins/group/custom-flannel/NetCatPod (gopogh) n/a
Docker_Linux TestNetworkPlugins/group/flannel/DNS (gopogh) n/a
Docker_Windows TestFunctional/parallel/ServiceCmd (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/bridge/DNS (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/calico/Start (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/cilium/Start (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/kubenet/HairPin (gopogh) n/a
Hyperkit_macOS TestMultiNode/serial/DeployApp2Nodes (gopogh) 0.00 (chart)
Hyperkit_macOS TestMultiNode/serial/PingHostFrom2Pods (gopogh) 0.00 (chart)
KVM_Linux_containerd TestMultiNode/serial/DeployApp2Nodes (gopogh) 0.00 (chart)
KVM_Linux_containerd TestMultiNode/serial/PingHostFrom2Pods (gopogh) 0.00 (chart)
KVM_Linux TestMultiNode/serial/DeployApp2Nodes (gopogh) 0.00 (chart)
KVM_Linux TestMultiNode/serial/PingHostFrom2Pods (gopogh) 0.00 (chart)
Docker_Linux_containerd TestStartStop/group/embed-certs/serial/DeployApp (gopogh) 35.34 (chart)
Docker_Linux_containerd TestStartStop/group/embed-certs/serial/FirstStart (gopogh) 35.34 (chart)
Docker_Linux_containerd TestStartStop/group/old-k8s-version/serial/DeployApp (gopogh) 36.21 (chart)
Docker_Linux_containerd TestStartStop/group/old-k8s-version/serial/FirstStart (gopogh) 36.21 (chart)
Docker_Linux_containerd TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (gopogh) 36.21 (chart)
Docker_Linux_containerd TestStartStop/group/old-k8s-version/serial/SecondStart (gopogh) 37.29 (chart)
Docker_Linux_containerd TestStartStop/group/embed-certs/serial/SecondStart (gopogh) 37.50 (chart)
Docker_Linux_containerd TestStartStop/group/default-k8s-diff-port/serial/FirstStart (gopogh) 41.38 (chart)
Docker_Linux_containerd TestStartStop/group/default-k8s-diff-port/serial/SecondStart (gopogh) 41.38 (chart)
KVM_Linux TestMultiNode/serial/RestartMultiNode (gopogh) 43.04 (chart)
Docker_Linux_containerd TestStartStop/group/default-k8s-diff-port/serial/DeployApp (gopogh) 43.33 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) 67.97 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/bridge/DNS (gopogh) 69.42 (chart)
Docker_Linux TestNetworkPlugins/group/calico/Start (gopogh) 70.14 (chart)
More tests... Continued...

Too many tests failed - See test logs for more details.

To see the flake rates of all tests by environment, click here.

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Docker_Linux_containerd TestNetworkPlugins/group/custom-flannel/Start (gopogh) n/a
Docker_Linux_containerd TestNetworkPlugins/group/flannel/Start (gopogh) n/a
Docker_Linux TestNetworkPlugins/group/flannel/DNS (gopogh) n/a
Docker_Windows TestFunctional/parallel/ServiceCmd (gopogh) n/a
Docker_Windows TestMultiNode/serial/StartAfterStop (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/bridge/DNS (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/calico/Start (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/cilium/Start (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/false/DNS (gopogh) n/a
Docker_Windows TestNetworkPlugins/group/kubenet/HairPin (gopogh) n/a
Docker_Windows TestPause/serial/Pause (gopogh) n/a
Hyperkit_macOS TestMultiNode/serial/DeployApp2Nodes (gopogh) 0.00 (chart)
Hyperkit_macOS TestMultiNode/serial/PingHostFrom2Pods (gopogh) 0.00 (chart)
KVM_Linux_containerd TestMultiNode/serial/DeployApp2Nodes (gopogh) 0.00 (chart)
KVM_Linux_containerd TestMultiNode/serial/PingHostFrom2Pods (gopogh) 0.00 (chart)
KVM_Linux TestMultiNode/serial/DeployApp2Nodes (gopogh) 0.00 (chart)
KVM_Linux TestMultiNode/serial/PingHostFrom2Pods (gopogh) 0.00 (chart)
Docker_macOS TestMultiNode/serial/RestartMultiNode (gopogh) 31.69 (chart)
KVM_Linux TestPause/serial/SecondStartNoReconfiguration (gopogh) 43.71 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) 68.85 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/bridge/DNS (gopogh) 70.18 (chart)
Docker_Linux TestNetworkPlugins/group/false/DNS (gopogh) 81.02 (chart)
Docker_Linux TestNetworkPlugins/group/bridge/DNS (gopogh) 82.48 (chart)
Docker_Linux TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) 82.48 (chart)
Docker_Linux TestNetworkPlugins/group/kubenet/DNS (gopogh) 86.13 (chart)
Docker_Linux TestNetworkPlugins/group/calico/NetCatPod (gopogh) 94.74 (chart)
Docker_Linux_containerd TestNetworkPlugins/group/calico/Start (gopogh) 96.33 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (gopogh) 99.30 (chart)
Docker_Linux_containerd TestKubernetesUpgrade (gopogh) 100.00 (chart)
More tests... Continued...

Too many tests failed - See test logs for more details.

To see the flake rates of all tests by environment, click here.

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 54.9s    | 58.2s               |
| enable ingress | 27.4s    | 25.7s               |
+----------------+----------+---------------------+

Times for minikube start: 53.4s 55.5s 53.9s 56.7s 54.7s
Times for minikube (PR 15463) start: 55.8s 61.2s 58.3s 58.8s 57.0s

Times for minikube ingress: 27.6s 26.6s 27.6s 25.7s 29.6s
Times for minikube (PR 15463) ingress: 25.7s 26.2s 24.6s 23.1s 28.7s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 27.6s    | 29.5s               |
| enable ingress | 20.5s    | 21.1s               |
+----------------+----------+---------------------+

Times for minikube start: 25.6s 24.2s 29.2s 28.9s 30.1s
Times for minikube (PR 15463) start: 30.9s 28.4s 29.5s 28.4s 30.4s

Times for minikube ingress: 19.9s 20.4s 20.9s 21.4s 20.0s
Times for minikube (PR 15463) ingress: 20.5s 19.9s 21.9s 21.9s 21.4s

docker driver with containerd runtime

+-------------------+----------+---------------------+
|      COMMAND      | MINIKUBE | MINIKUBE (PR 15463) |
+-------------------+----------+---------------------+
| minikube start    | 22.2s    | 26.4s               |
| ⚠️  enable ingress | 26.3s    | 44.7s ⚠️             |
+-------------------+----------+---------------------+

Times for minikube start: 21.6s 25.3s 21.4s 22.1s 20.8s
Times for minikube (PR 15463) start: 23.4s 25.1s 25.6s 23.7s 34.0s

Times for minikube (PR 15463) ingress: 31.9s 79.9s 30.4s 33.4s 48.0s
Times for minikube ingress: 26.4s 25.9s 26.4s 26.4s 26.4s

@prezha
Contributor Author

prezha commented Jan 13, 2023

hmmm, we hit the same issue again as seen earlier today - network issues?

...
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/i/icu/libicu66_66.1-2ubuntu2.1_arm64.deb  Connection failed [IP: 185.125.190.36 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Fetched 73.1 MB in 9min 51s (124 kB/s)
...
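
Transient fetch failures like the one quoted above are usually worked around by retrying the command, as apt's own "try with --fix-missing" hint suggests. A minimal sketch of such a retry wrapper follows; the helper name and the apt-get invocation in the usage comment are illustrative assumptions, not part of the CI configuration.

```python
# Hedged sketch: retry a flaky network command a few times before giving up.
# Nothing here mirrors the actual CI setup; it only illustrates the idea.
import subprocess
import time

def run_with_retries(cmd: list[str], attempts: int = 3, delay: float = 1.0) -> bool:
    """Run cmd, retrying on a non-zero exit code; True if any attempt succeeds."""
    for i in range(1, attempts + 1):
        if subprocess.run(cmd).returncode == 0:
            return True
        if i < attempts:
            time.sleep(delay)
    return False

# e.g.: run_with_retries(["sudo", "apt-get", "install", "-y", "--fix-missing", "libicu66"])
```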

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 55.1s    | 57.0s               |
| enable ingress | 27.3s    | 28.8s               |
+----------------+----------+---------------------+

Times for minikube start: 54.3s 54.4s 55.3s 55.3s 56.2s
Times for minikube (PR 15463) start: 56.6s 57.6s 57.5s 56.1s 57.1s

Times for minikube ingress: 29.2s 25.7s 29.6s 29.2s 22.7s
Times for minikube (PR 15463) ingress: 30.2s 29.2s 25.7s 30.2s 28.6s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 26.1s    | 28.0s               |
| enable ingress | 26.6s    | 20.7s               |
+----------------+----------+---------------------+

Times for minikube start: 24.6s 27.5s 27.9s 25.5s 25.0s
Times for minikube (PR 15463) start: 29.0s 27.2s 27.9s 27.9s 28.2s

Times for minikube ingress: 50.0s 19.0s 19.9s 20.9s 23.0s
Times for minikube (PR 15463) ingress: 21.9s 20.4s 20.4s 19.4s 21.5s

docker driver with containerd runtime

+-------------------+----------+---------------------+
|      COMMAND      | MINIKUBE | MINIKUBE (PR 15463) |
+-------------------+----------+---------------------+
| minikube start    | 24.3s    | 25.5s               |
| ⚠️  enable ingress | 26.5s    | 40.3s ⚠️             |
+-------------------+----------+---------------------+

Times for minikube start: 33.0s 22.3s 22.3s 22.2s 21.7s
Times for minikube (PR 15463) start: 23.3s 34.6s 23.2s 22.9s 23.7s

Times for minikube ingress: 25.9s 27.4s 26.4s 26.4s 26.4s
Times for minikube (PR 15463) ingress: 81.9s 33.0s 28.4s 26.5s 31.5s

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (gopogh) 97.99 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (gopogh) 100.00 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressAddons (gopogh) 100.00 (chart)
Docker_macOS TestIngressAddonLegacy/StartLegacyK8sCluster (gopogh) 100.00 (chart)
Docker_macOS TestKubernetesUpgrade (gopogh) 100.00 (chart)
Docker_macOS TestMissingContainerUpgrade (gopogh) 100.00 (chart)
Docker_macOS TestRunningBinaryUpgrade (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/DeployApp (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/FirstStart (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/SecondStart (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (gopogh) 100.00 (chart)
Docker_macOS TestStoppedBinaryUpgrade/Upgrade (gopogh) 100.00 (chart)

To see the flake rates of all tests by environment, click here.

@prezha
Contributor Author

prezha commented Jan 14, 2023

/retest-this-please

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 54.9s    | 58.2s               |
| enable ingress | 28.1s    | 27.6s               |
+----------------+----------+---------------------+

Times for minikube ingress: 28.7s 28.1s 27.7s 27.7s 28.1s
Times for minikube (PR 15463) ingress: 25.7s 28.3s 29.2s 25.7s 29.1s

Times for minikube start: 55.3s 55.5s 54.2s 54.9s 54.8s
Times for minikube (PR 15463) start: 57.9s 58.8s 60.4s 56.0s 57.7s

docker driver with docker runtime

+-------------------+----------+---------------------+
|      COMMAND      | MINIKUBE | MINIKUBE (PR 15463) |
+-------------------+----------+---------------------+
| minikube start    | 25.8s    | 28.0s               |
| ⚠️  enable ingress | 25.8s    | 33.7s ⚠️             |
+-------------------+----------+---------------------+

Times for minikube (PR 15463) ingress: 22.9s 23.0s 20.4s 81.5s 20.5s
Times for minikube ingress: 20.4s 19.4s 20.9s 49.0s 19.4s

Times for minikube start: 25.7s 25.3s 26.0s 26.1s 26.0s
Times for minikube (PR 15463) start: 27.4s 28.3s 28.2s 27.1s 29.3s

docker driver with containerd runtime

+-------------------+----------+---------------------+
|      COMMAND      | MINIKUBE | MINIKUBE (PR 15463) |
+-------------------+----------+---------------------+
| minikube start    | 21.7s    | 25.7s               |
| ⚠️  enable ingress | 26.2s    | 41.8s ⚠️             |
+-------------------+----------+---------------------+

Times for minikube start: 21.2s 21.4s 21.4s 21.9s 22.4s
Times for minikube (PR 15463) start: 22.7s 21.8s 22.9s 22.5s 38.6s

Times for minikube ingress: 25.9s 26.4s 25.9s 26.5s 26.4s
Times for minikube (PR 15463) ingress: 32.0s 32.5s 31.5s 32.9s 80.0s

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Docker_Windows TestNoKubernetes/serial/StartNoArgs (gopogh) 11.76 (chart)
Docker_Windows TestPause/serial/SecondStartNoReconfiguration (gopogh) 32.35 (chart)
Hyper-V_Windows TestNetworkPlugins/group/bridge/Start (gopogh) 70.59 (chart)
Hyper-V_Windows TestNetworkPlugins/group/flannel/Start (gopogh) 70.59 (chart)
Hyper-V_Windows TestNetworkPlugins/group/kubenet/Start (gopogh) 70.59 (chart)
Hyper-V_Windows TestPause/serial/SecondStartNoReconfiguration (gopogh) 70.59 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (gopogh) 97.93 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (gopogh) 100.00 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressAddons (gopogh) 100.00 (chart)
Docker_macOS TestIngressAddonLegacy/StartLegacyK8sCluster (gopogh) 100.00 (chart)
Docker_macOS TestKubernetesUpgrade (gopogh) 100.00 (chart)
Docker_macOS TestMissingContainerUpgrade (gopogh) 100.00 (chart)
Docker_macOS TestRunningBinaryUpgrade (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/DeployApp (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/FirstStart (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/SecondStart (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (gopogh) 100.00 (chart)
Docker_macOS TestStoppedBinaryUpgrade/Upgrade (gopogh) 100.00 (chart)
Docker_Windows TestFunctional/parallel/ServiceCmd (gopogh) 100.00 (chart)
Hyper-V_Windows TestMultiNode/serial/PingHostFrom2Pods (gopogh) 100.00 (chart)
Hyper-V_Windows TestMultiNode/serial/RestartKeepsNodes (gopogh) 100.00 (chart)
Hyper-V_Windows TestNoKubernetes/serial/StartWithStopK8s (gopogh) 100.00 (chart)
KVM_Linux_containerd TestPreload (gopogh) 100.00 (chart)

To see the flake rates of all tests by environment, click here.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jan 18, 2023
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jan 18, 2023
@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 55.6s    | 58.0s               |
| enable ingress | 27.5s    | 28.2s               |
+----------------+----------+---------------------+

Times for minikube start: 56.1s 55.9s 54.3s 56.5s 55.1s
Times for minikube (PR 15463) start: 57.2s 56.8s 59.2s 60.2s 56.7s

Times for minikube ingress: 27.7s 28.7s 27.7s 25.7s 27.7s
Times for minikube (PR 15463) ingress: 25.2s 28.7s 29.2s 28.7s 29.2s

docker driver with docker runtime

+-------------------+----------+---------------------+
|      COMMAND      | MINIKUBE | MINIKUBE (PR 15463) |
+-------------------+----------+---------------------+
| minikube start    | 26.5s    | 27.1s               |
| ⚠️  enable ingress | 39.2s    | 50.8s ⚠️             |
+-------------------+----------+---------------------+

Times for minikube start: 27.4s 26.4s 25.4s 25.6s 27.9s
Times for minikube (PR 15463) start: 27.5s 28.8s 26.5s 26.1s 26.5s

Times for minikube ingress: 20.4s 21.4s 50.0s 82.4s 21.5s
Times for minikube (PR 15463) ingress: 81.5s 21.0s 49.5s 19.9s 82.0s

docker driver with containerd runtime

+-------------------+----------+---------------------+
|      COMMAND      | MINIKUBE | MINIKUBE (PR 15463) |
+-------------------+----------+---------------------+
| minikube start    | 24.0s    | 25.9s               |
| ⚠️  enable ingress | 26.3s    | 39.4s ⚠️             |
+-------------------+----------+---------------------+

Times for minikube start: 32.3s 22.1s 22.4s 21.8s 21.3s
Times for minikube (PR 15463) start: 22.2s 24.2s 32.7s 24.8s 25.6s

Times for minikube ingress: 26.5s 26.0s 26.0s 26.4s 26.5s
Times for minikube (PR 15463) ingress: 31.5s 32.0s 80.4s 31.5s 21.5s

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Docker_Linux_containerd TestFunctional/serial/LogsFileCmd (gopogh) 7.19 (chart)
Hyperkit_macOS TestPause/serial/SecondStartNoReconfiguration (gopogh) 27.97 (chart)
Docker_Windows TestMultiNode/serial/StartAfterStop (gopogh) 44.44 (chart)
KVM_Linux TestPause/serial/SecondStartNoReconfiguration (gopogh) 46.23 (chart)
Docker_Windows TestStartStop/group/newest-cni/serial/Pause (gopogh) 61.90 (chart)
Docker_Windows TestNetworkPlugins/group/kubenet/DNS (gopogh) 63.49 (chart)
Docker_Windows TestPause/serial/SecondStartNoReconfiguration (gopogh) 63.49 (chart)
Docker_Windows TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) 64.52 (chart)
Hyper-V_Windows TestNetworkPlugins/group/bridge/Start (gopogh) 87.50 (chart)
Hyper-V_Windows TestNetworkPlugins/group/flannel/Start (gopogh) 87.50 (chart)
Hyper-V_Windows TestNetworkPlugins/group/kubenet/Start (gopogh) 87.50 (chart)
Hyper-V_Windows TestPause/serial/SecondStartNoReconfiguration (gopogh) 87.50 (chart)
Docker_Windows TestNetworkPlugins/group/calico/Start (gopogh) 96.83 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (gopogh) 97.93 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (gopogh) 100.00 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressAddons (gopogh) 100.00 (chart)
Docker_macOS TestIngressAddonLegacy/StartLegacyK8sCluster (gopogh) 100.00 (chart)
Docker_macOS TestKubernetesUpgrade (gopogh) 100.00 (chart)
Docker_macOS TestMissingContainerUpgrade (gopogh) 100.00 (chart)
Docker_macOS TestRunningBinaryUpgrade (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/DeployApp (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/FirstStart (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/SecondStart (gopogh) 100.00 (chart)
Docker_macOS TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (gopogh) 100.00 (chart)
Docker_macOS TestStoppedBinaryUpgrade/Upgrade (gopogh) 100.00 (chart)
Docker_Windows TestFunctional/parallel/ServiceCmd (gopogh) 100.00 (chart)
Docker_Windows TestNetworkPlugins/group/cilium/Start (gopogh) 100.00 (chart)
Hyper-V_Windows TestMultiNode/serial/PingHostFrom2Pods (gopogh) 100.00 (chart)
More tests... Continued...

Too many tests failed - See test logs for more details.

To see the flake rates of all tests by environment, click here.

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 55.6s    | 57.7s               |
| enable ingress | 26.3s    | 28.0s               |
+----------------+----------+---------------------+

Times for minikube start: 55.9s 53.9s 56.8s 59.3s 52.2s
Times for minikube (PR 15463) start: 55.1s 58.7s 56.4s 61.7s 56.5s

Times for minikube ingress: 27.2s 24.7s 24.6s 26.2s 28.6s
Times for minikube (PR 15463) ingress: 29.2s 25.2s 29.2s 28.3s 28.3s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 15463) |
+----------------+----------+---------------------+
| minikube start | 26.3s    | 27.1s               |
| enable ingress | 32.7s    | 33.6s               |
+----------------+----------+---------------------+

Times for minikube start: 24.6s 26.6s 25.6s 25.9s 28.8s
Times for minikube (PR 15463) start: 28.3s 26.3s 26.8s 27.0s 27.0s

Times for minikube ingress: 81.5s 20.0s 22.5s 19.0s 20.5s
Times for minikube (PR 15463) ingress: 21.0s 84.4s 19.5s 21.0s 22.0s

docker driver with containerd runtime

+-------------------+----------+---------------------+
|      COMMAND      | MINIKUBE | MINIKUBE (PR 15463) |
+-------------------+----------+---------------------+
| minikube start    | 22.8s    | 24.5s               |
| ⚠️  enable ingress | 26.3s    | 61.0s ⚠️             |
+-------------------+----------+---------------------+

Times for minikube ingress: 25.9s 26.0s 26.4s 26.5s 26.5s
Times for minikube (PR 15463) ingress: 31.5s 80.0s 33.0s 80.5s 79.9s

Times for minikube start: 21.6s 22.0s 24.3s 21.2s 25.2s
Times for minikube (PR 15463) start: 22.7s 24.4s 21.4s 32.4s 21.7s

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Hyper-V_Windows TestFunctional/parallel/ImageCommands/ImageListJson (gopogh) 0.00 (chart)
Hyper-V_Windows TestFunctional/parallel/ImageCommands/ImageListShort (gopogh) 0.00 (chart)
Hyper-V_Windows TestFunctional/parallel/ImageCommands/ImageListTable (gopogh) 0.00 (chart)
Hyper-V_Windows TestFunctional/parallel/ImageCommands/ImageListYaml (gopogh) 0.00 (chart)
Hyper-V_Windows TestFunctional/serial/CacheCmd/cache/cache_reload (gopogh) 0.00 (chart)
Hyper-V_Windows TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (gopogh) 0.00 (chart)
Hyper-V_Windows TestFunctional/serial/SoftStart (gopogh) 0.00 (chart)
Hyper-V_Windows TestJSONOutput/pause/Command (gopogh) 0.00 (chart)
Hyper-V_Windows TestJSONOutput/start/Command (gopogh) 0.00 (chart)
Hyper-V_Windows TestJSONOutput/unpause/Command (gopogh) 0.00 (chart)
Hyper-V_Windows TestMultiNode/serial/RestartMultiNode (gopogh) 0.00 (chart)
Hyper-V_Windows TestMultiNode/serial/StartAfterStop (gopogh) 0.00 (chart)
Hyper-V_Windows TestNetworkPlugins/group/enable-default-cni/Start (gopogh) 0.00 (chart)
Hyper-V_Windows TestRunningBinaryUpgrade (gopogh) 0.00 (chart)
Hyper-V_Windows TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (gopogh) 0.00 (chart)
Hyper-V_Windows TestStartStop/group/newest-cni/serial/FirstStart (gopogh) 0.00 (chart)
Hyper-V_Windows TestStartStop/group/newest-cni/serial/Pause (gopogh) 0.00 (chart)
Hyper-V_Windows TestStartStop/group/newest-cni/serial/SecondStart (gopogh) 0.00 (chart)
Hyper-V_Windows TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (gopogh) 0.00 (chart)
Hyper-V_Windows TestStartStop/group/no-preload/serial/AddonExistsAfterStop (gopogh) 0.00 (chart)
Hyper-V_Windows TestStartStop/group/no-preload/serial/Pause (gopogh) 0.00 (chart)
Hyper-V_Windows TestStartStop/group/no-preload/serial/SecondStart (gopogh) 0.00 (chart)
Hyper-V_Windows TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (gopogh) 0.00 (chart)
Hyper-V_Windows TestStartStop/group/no-preload/serial/VerifyKubernetesImages (gopogh) 0.00 (chart)
Hyper-V_Windows TestStoppedBinaryUpgrade/Upgrade (gopogh) 1.69 (chart)
Hyper-V_Windows TestNoKubernetes/serial/StartWithK8s (gopogh) 8.47 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (gopogh) 99.31 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (gopogh) 100.00 (chart)
Docker_macOS TestIngressAddonLegacy/serial/ValidateIngressAddons (gopogh) 100.00 (chart)
Docker_macOS TestIngressAddonLegacy/StartLegacyK8sCluster (gopogh) 100.00 (chart)
More tests... Continued...

Too many tests failed - See test logs for more details.

To see the flake rates of all tests by environment, click here.

pkg/drivers/kic/oci/network_create.go (review thread resolved)
pkg/minikube/cruntime/containerd.go (review thread resolved)
pkg/minikube/node/start.go (review thread resolved)
@spowelljr spowelljr marked this pull request as ready for review January 19, 2023 21:00
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jan 19, 2023
Member

@spowelljr spowelljr left a comment


Thank you very much for your large undertaking to improve minikube as a whole; it is very much appreciated!

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: prezha, spowelljr

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@prezha prezha changed the title fix TestNetworkPlugins tests for Linux with Docker driver improve how CRs and k8s work with CNI plugins and cgroup drivers Jan 19, 2023
@spowelljr spowelljr merged commit 0e7aefc into kubernetes:master Jan 19, 2023
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. ok-to-test Indicates a non-member PR verified by an org member that is safe to test. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files.