Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6538243 2021-12-01 11:07:49 2021-12-01 15:26:51 2021-12-01 15:44:03 0:17:12 0:08:22 0:08:50 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi137 with status 5: 'sudo systemctl stop ceph-608a9c70-52bd-11ec-8c2d-001a4aab830c@mon.a'

fail 6538244 2021-12-01 11:07:50 2021-12-01 15:26:52 2021-12-01 15:52:34 0:25:42 0:14:41 0:11:01 smithi master rados/cephadm/workunits/{agent/on mon_election/classic task/test_orch_cli} 1
Failure Reason:

Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI)

fail 6538245 2021-12-01 11:07:51 2021-12-01 15:26:52 2021-12-01 17:35:09 2:08:17 1:56:40 0:11:37 smithi master centos 8.3 rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

"2021-12-01T16:42:01.429002+0000 osd.3 (osd.3) 4 : cluster [WRN] Error(s) ignored for 2:ad551702:::test:head enough copies available" in cluster log

fail 6538246 2021-12-01 11:07:52 2021-12-01 15:27:52 2021-12-01 15:45:23 0:17:31 0:08:08 0:09:23 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi066 with status 5: 'sudo systemctl stop ceph-8b93ac0e-52bd-11ec-8c2d-001a4aab830c@mon.a'

pass 6538247 2021-12-01 11:07:53 2021-12-01 15:27:53 2021-12-01 16:07:25 0:39:32 0:26:47 0:12:45 smithi master ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} supported-random-distro$/{ubuntu_latest} tasks/module_selftest} 2
fail 6538248 2021-12-01 11:07:54 2021-12-01 15:29:13 2021-12-01 15:46:15 0:17:02 0:06:09 0:10:53 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/host rook/master} 3
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi016.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

pass 6538249 2021-12-01 11:07:55 2021-12-01 15:30:05 2021-12-01 16:01:32 0:31:27 0:21:06 0:10:21 smithi master centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
fail 6538250 2021-12-01 11:07:56 2021-12-01 15:30:05 2021-12-01 15:50:26 0:20:21 0:08:35 0:11:46 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi129 with status 5: 'sudo systemctl stop ceph-06cdf80c-52be-11ec-8c2d-001a4aab830c@mon.a'

fail 6538251 2021-12-01 11:07:57 2021-12-01 15:30:55 2021-12-01 15:55:25 0:24:30 0:13:34 0:10:56 smithi master centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{centos_8.stream} tasks/prometheus} 2
Failure Reason:

Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus)

pass 6538252 2021-12-01 11:07:58 2021-12-01 15:31:46 2021-12-01 16:15:55 0:44:09 0:33:42 0:10:27 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6538253 2021-12-01 11:07:59 2021-12-01 15:32:26 2021-12-01 15:52:19 0:19:53 0:08:28 0:11:25 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi022 with status 5: 'sudo systemctl stop ceph-48477768-52be-11ec-8c2d-001a4aab830c@mon.a'

fail 6538254 2021-12-01 11:08:00 2021-12-01 15:33:07 2021-12-01 15:52:19 0:19:12 0:08:53 0:10:19 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi001 with status 5: 'sudo systemctl stop ceph-4908d3e0-52be-11ec-8c2d-001a4aab830c@mon.a'

pass 6538255 2021-12-01 11:08:01 2021-12-01 15:33:07 2021-12-01 15:57:24 0:24:17 0:13:55 0:10:22 smithi master centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
fail 6538256 2021-12-01 11:08:02 2021-12-01 15:33:58 2021-12-01 15:54:32 0:20:34 0:08:34 0:12:00 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi047 with status 5: 'sudo systemctl stop ceph-9883330c-52be-11ec-8c2d-001a4aab830c@mon.a'

pass 6538257 2021-12-01 11:08:03 2021-12-01 15:35:08 2021-12-01 16:17:02 0:41:54 0:31:43 0:10:11 smithi master centos 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{centos_8} tasks/module_selftest} 2
pass 6538258 2021-12-01 11:08:04 2021-12-01 15:35:49 2021-12-01 18:23:13 2:47:24 2:27:02 0:20:22 smithi master rhel 8.4 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{rhel_8}} 1
pass 6538259 2021-12-01 11:08:05 2021-12-01 15:35:49 2021-12-01 16:01:06 0:25:17 0:17:53 0:07:24 smithi master rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} 2
pass 6538260 2021-12-01 11:08:06 2021-12-01 15:36:19 2021-12-01 16:03:47 0:27:28 0:20:10 0:07:18 smithi master rhel 8.4 rados/cephadm/smoke/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 6538261 2021-12-01 11:08:07 2021-12-01 15:36:20 2021-12-01 16:04:03 0:27:43 0:18:20 0:09:23 smithi master ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} supported-random-distro$/{ubuntu_latest} tasks/progress} 2
pass 6538262 2021-12-01 11:08:08 2021-12-01 15:36:20 2021-12-01 16:00:03 0:23:43 0:13:51 0:09:52 smithi master rados/cephadm/workunits/{agent/off mon_election/classic task/test_orch_cli} 1
fail 6538263 2021-12-01 11:08:09 2021-12-01 15:36:20 2021-12-01 15:54:18 0:17:58 0:08:22 0:09:36 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi055 with status 5: 'sudo systemctl stop ceph-cfed4102-52be-11ec-8c2d-001a4aab830c@mon.a'

fail 6538264 2021-12-01 11:08:10 2021-12-01 15:36:51 2021-12-01 16:08:49 0:31:58 0:24:02 0:07:56 smithi master rhel 8.4 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{rhel_8} tasks/prometheus} 2
Failure Reason:

Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus)

dead 6538265 2021-12-01 11:08:11 2021-12-01 15:37:21 2021-12-01 15:52:22 0:15:01 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds