Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 6543985 2021-12-03 21:12:15 2021-12-04 09:09:48 2021-12-04 09:29:11 0:19:23 0:08:17 0:11:06 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi012 with status 5: 'sudo systemctl stop ceph-82b8238e-54e4-11ec-8c2e-001a4aab830c@mon.a'

fail 6543986 2021-12-03 21:12:16 2021-12-04 09:10:59 2021-12-04 09:37:06 0:26:07 0:15:06 0:11:01 smithi master rados/cephadm/workunits/{agent/on mon_election/classic task/test_orch_cli} 1
Failure Reason:

Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI)

fail 6543987 2021-12-03 21:12:17 2021-12-04 09:11:09 2021-12-04 10:12:28 1:01:19 0:51:15 0:10:04 smithi master centos 8.3 rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

"2021-12-04T10:01:05.670610+0000 osd.3 (osd.3) 4 : cluster [WRN] Error(s) ignored for 2:ad551702:::test:head enough copies available" in cluster log

fail 6543988 2021-12-03 21:12:18 2021-12-04 09:11:10 2021-12-04 09:30:40 0:19:30 0:08:40 0:10:50 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi019 with status 5: 'sudo systemctl stop ceph-795a5596-54e4-11ec-8c2e-001a4aab830c@mon.a'

fail 6543989 2021-12-03 21:12:19 2021-12-04 09:11:40 2021-12-04 09:28:42 0:17:02 0:06:07 0:10:55 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/host rook/master} 3
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi003.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

pass 6543990 2021-12-03 21:12:20 2021-12-04 09:13:01 2021-12-04 09:36:31 0:23:30 0:16:53 0:06:37 smithi master centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
fail 6543991 2021-12-03 21:12:21 2021-12-04 09:13:21 2021-12-04 09:33:40 0:20:19 0:08:39 0:11:40 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi027 with status 5: 'sudo systemctl stop ceph-e74af06a-54e4-11ec-8c2e-001a4aab830c@mon.a'

fail 6543992 2021-12-03 21:12:22 2021-12-04 09:14:01 2021-12-04 09:32:28 0:18:27 0:09:23 0:09:04 smithi master centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{centos_8.stream} tasks/prometheus} 2
Failure Reason:

Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus)

fail 6543993 2021-12-03 21:12:23 2021-12-04 09:15:02 2021-12-04 09:34:29 0:19:27 0:08:49 0:10:38 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi094 with status 5: 'sudo systemctl stop ceph-06133688-54e5-11ec-8c2e-001a4aab830c@mon.a'

fail 6543994 2021-12-03 21:12:24 2021-12-04 09:15:02 2021-12-04 09:34:05 0:19:03 0:08:46 0:10:17 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi060 with status 5: 'sudo systemctl stop ceph-f71c245a-54e4-11ec-8c2e-001a4aab830c@mon.a'

pass 6543995 2021-12-03 21:12:26 2021-12-04 09:15:03 2021-12-04 09:30:56 0:15:53 0:09:10 0:06:43 smithi master centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 6543996 2021-12-03 21:12:27 2021-12-04 09:15:33 2021-12-04 09:45:04 0:29:31 0:19:14 0:10:17 smithi master ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} supported-random-distro$/{ubuntu_latest} tasks/progress} 2
pass 6543997 2021-12-03 21:12:28 2021-12-04 09:15:33 2021-12-04 09:39:50 0:24:17 0:13:59 0:10:18 smithi master rados/cephadm/workunits/{agent/off mon_election/classic task/test_orch_cli} 1
fail 6543998 2021-12-03 21:12:29 2021-12-04 09:15:54 2021-12-04 09:35:56 0:20:02 0:08:30 0:11:32 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi081 with status 5: 'sudo systemctl stop ceph-32a5033e-54e5-11ec-8c2e-001a4aab830c@mon.a'

fail 6543999 2021-12-03 21:12:30 2021-12-04 09:16:44 2021-12-04 09:48:20 0:31:36 0:23:49 0:07:47 smithi master rhel 8.4 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{rhel_8} tasks/prometheus} 2
Failure Reason:

Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus)