Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 6525052 2021-11-25 10:31:47 2021-11-25 10:32:34 2021-11-25 10:49:49 0:17:15 0:08:07 0:09:08 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi093 with status 5: 'sudo systemctl stop ceph-4204b3ec-4ddd-11ec-8c2d-001a4aab830c@mon.a'

fail 6525053 2021-11-25 10:31:48 2021-11-25 10:32:35 2021-11-25 11:07:48 0:35:13 0:23:29 0:11:44 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/dashboard} 2
Failure Reason:

Test failure: test_ganesha (unittest.loader._FailedTest)

dead 6525054 2021-11-25 10:31:49 2021-11-25 10:32:35 2021-11-25 22:41:01 12:08:26 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6525055 2021-11-25 10:31:50 2021-11-25 10:32:35 2021-11-25 10:52:06 0:19:31 0:08:34 0:10:57 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi079 with status 5: 'sudo systemctl stop ceph-5b1523c6-4ddd-11ec-8c2d-001a4aab830c@mon.a'

fail 6525056 2021-11-25 10:31:51 2021-11-25 10:32:36 2021-11-25 10:50:48 0:18:12 0:06:18 0:11:54 smithi master centos 8.3 rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls agent/off mon_election/connectivity} 2
Failure Reason:

Command failed on smithi027 with status 5: 'sudo systemctl stop ceph-2b14011a-4ddd-11ec-8c2d-001a4aab830c@mon.a'

dead 6525057 2021-11-25 10:31:52 2021-11-25 10:32:36 2021-11-25 22:40:46 12:08:10 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 6525058 2021-11-25 10:31:53 2021-11-25 10:32:37 2021-11-25 11:22:55 0:50:18 0:39:32 0:10:46 smithi master ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
pass 6525059 2021-11-25 10:31:54 2021-11-25 10:32:37 2021-11-25 11:00:37 0:28:00 0:17:14 0:10:46 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} tasks/rados_stress_watch} 2
pass 6525060 2021-11-25 10:31:55 2021-11-25 10:32:37 2021-11-25 11:18:57 0:46:20 0:34:42 0:11:38 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/host rook/master} 3
fail 6525061 2021-11-25 10:31:56 2021-11-25 10:32:37 2021-11-25 11:01:57 0:29:20 0:19:38 0:09:42 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5c1ba840272b9f9cc9ad704c802bf7daf2491ad5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

dead 6525062 2021-11-25 10:31:57 2021-11-25 10:32:38 2021-11-25 22:42:15 12:09:37 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6525063 2021-11-25 10:31:58 2021-11-25 10:32:38 2021-11-25 10:57:24 0:24:46 0:08:28 0:16:18 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi050 with status 5: 'sudo systemctl stop ceph-21054188-4dde-11ec-8c2d-001a4aab830c@mon.a'

pass 6525064 2021-11-25 10:31:59 2021-11-25 10:38:09 2021-11-25 11:16:21 0:38:12 0:26:51 0:11:21 smithi master centos 8.3 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/sync workloads/rados_api_tests} 2
fail 6525065 2021-11-25 10:32:00 2021-11-25 10:39:10 2021-11-25 10:57:51 0:18:41 0:08:13 0:10:28 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi073 with status 5: 'sudo systemctl stop ceph-611e4788-4dde-11ec-8c2d-001a4aab830c@mon.a'

dead 6525066 2021-11-25 10:32:01 2021-11-25 10:40:40 2021-11-25 22:49:07 12:08:27 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6525067 2021-11-25 10:32:02 2021-11-25 10:41:21 2021-11-25 10:56:52 0:15:31 0:05:57 0:09:34 smithi master centos 8.3 rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
Failure Reason:

Command failed on smithi149 with status 5: 'sudo systemctl stop ceph-4073b18a-4dde-11ec-8c2d-001a4aab830c@mon.a'

fail 6525068 2021-11-25 10:32:03 2021-11-25 10:41:41 2021-11-25 11:14:56 0:33:15 0:23:32 0:09:43 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} tasks/dashboard} 2
Failure Reason:

Test failure: test_ganesha (unittest.loader._FailedTest)

dead 6525069 2021-11-25 10:32:04 2021-11-25 10:41:41 2021-11-25 22:56:54 12:15:13 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 6525070 2021-11-25 10:32:05 2021-11-25 10:48:33 2021-11-25 11:12:40 0:24:07 0:13:49 0:10:18 smithi master centos 8.2 rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
fail 6525071 2021-11-25 10:32:06 2021-11-25 10:49:13 2021-11-25 11:07:12 0:17:59 0:08:19 0:09:40 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi093 with status 5: 'sudo systemctl stop ceph-b755f7b2-4ddf-11ec-8c2d-001a4aab830c@mon.a'

dead 6525072 2021-11-25 10:32:07 2021-11-25 10:49:53 2021-11-25 22:59:31 12:09:38 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6525073 2021-11-25 10:32:08 2021-11-25 10:49:54 2021-11-25 11:09:36 0:19:42 0:08:39 0:11:03 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi018 with status 5: 'sudo systemctl stop ceph-ce532692-4ddf-11ec-8c2d-001a4aab830c@mon.a'

fail 6525074 2021-11-25 10:32:09 2021-11-25 10:50:24 2021-11-25 11:18:07 0:27:43 0:16:39 0:11:04 smithi master centos 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} supported-random-distro$/{centos_8} tasks/module_selftest} 2
Failure Reason:

Test failure: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest)

fail 6525075 2021-11-25 10:32:10 2021-11-25 10:50:55 2021-11-25 11:20:07 0:29:12 0:19:54 0:09:18 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi017 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5c1ba840272b9f9cc9ad704c802bf7daf2491ad5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

dead 6525076 2021-11-25 10:32:11 2021-11-25 10:50:55 2021-11-25 22:59:32 12:08:37 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6525077 2021-11-25 10:32:12 2021-11-25 10:50:55 2021-11-25 11:08:33 0:17:38 0:08:24 0:09:14 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi169 with status 5: 'sudo systemctl stop ceph-e60aa2a6-4ddf-11ec-8c2d-001a4aab830c@mon.a'

fail 6525078 2021-11-25 10:32:13 2021-11-25 10:51:06 2021-11-25 11:06:21 0:15:15 0:05:58 0:09:17 smithi master centos 8.3 rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls agent/on mon_election/connectivity} 2
Failure Reason:

Command failed on smithi016 with status 5: 'sudo systemctl stop ceph-8f6f8e2a-4ddf-11ec-8c2d-001a4aab830c@mon.a'

dead 6525079 2021-11-25 10:32:14 2021-11-25 10:51:06 2021-11-25 22:59:44 12:08:38 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout