Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 6492738 2021-11-08 20:33:05 2021-11-08 20:48:31 2021-11-08 21:31:47 0:43:16 0:32:52 0:10:24 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6492739 2021-11-08 20:33:06 2021-11-08 20:48:41 2021-11-08 21:17:19 0:28:38 0:14:21 0:14:17 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
pass 6492740 2021-11-08 20:33:07 2021-11-08 20:50:02 2021-11-08 22:51:35 2:01:33 1:53:48 0:07:45 smithi master rhel 8.4 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/minsize_recovery thrashosds-health workloads/ec-radosbench} 2
fail 6492741 2021-11-08 20:33:08 2021-11-08 20:50:22 2021-11-08 21:24:56 0:34:34 0:23:26 0:11:08 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-low-osd-mem-target} tasks/dashboard} 2
Failure Reason:

Test failure: test_ganesha (unittest.loader._FailedTest)

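A failure reported as "unittest.loader._FailedTest" usually means the test module could not even be imported: unittest's discovery does not abort on an import error, it substitutes a synthetic _FailedTest that fails with the import traceback. A minimal sketch of that mechanism (the suite path is assumed from the Ceph qa tree, not taken from this log):

    # Minimal sketch: unittest discovery turns an import error into a
    # synthetic unittest.loader._FailedTest instead of aborting the run.
    import unittest

    # Path assumed from the Ceph qa tree; adjust to wherever the suite lives.
    suite = unittest.defaultTestLoader.discover("qa/tasks/mgr/dashboard")

    # Any module that raised on import shows up as a _FailedTest; running the
    # suite reports it as a failure whose traceback is the import error.
    unittest.TextTestRunner(verbosity=2).run(suite)
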
dead 6492742 2021-11-08 20:33:09 2021-11-08 20:51:52 2021-11-09 09:03:15 12:11:23 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

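The dead jobs in this run all stopped at roughly the 12-hour mark, which is the runner reaping jobs that exceed the maximum allowed runtime rather than the jobs failing on their own. A rough sketch of that kind of watchdog (hypothetical code, not teuthology's actual implementation; the 12-hour cap is inferred from the ~12:11 runtimes above):

    # Hypothetical watchdog sketch: kill a job that runs past a hard cap and
    # record it as "dead" rather than "fail". Not teuthology's real code.
    import subprocess

    MAX_JOB_SECONDS = 12 * 60 * 60  # cap inferred from the runtimes above

    def run_job(cmd):
        try:
            subprocess.run(cmd, timeout=MAX_JOB_SECONDS, check=True)
            return "pass"
        except subprocess.TimeoutExpired:
            return "dead"  # surfaces in this listing as "hit max job timeout"
        except subprocess.CalledProcessError:
            return "fail"
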
pass 6492743 2021-11-08 20:33:10 2021-11-08 20:52:43 2021-11-08 21:30:51 0:38:08 0:27:51 0:10:17 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
dead 6492744 2021-11-08 20:33:11 2021-11-08 20:53:23 2021-11-09 09:04:46 12:11:23 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 6492745 2021-11-08 20:33:12 2021-11-08 20:54:04 2021-11-08 21:33:43 0:39:39 0:28:02 0:11:37 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6492746 2021-11-08 20:33:13 2021-11-08 20:54:04 2021-11-08 21:22:40 0:28:36 0:18:01 0:10:35 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi145 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a1fc5c50f942497cfdd5964d2d4cdf839a489d78 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 6492747 2021-11-08 20:33:14 2021-11-08 20:55:14 2021-11-08 21:42:33 0:47:19 0:34:57 0:12:22 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
pass 6492748 2021-11-08 20:33:15 2021-11-08 20:57:05 2021-11-08 21:41:11 0:44:06 0:32:23 0:11:43 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6492749 2021-11-08 20:33:16 2021-11-08 20:57:55 2021-11-08 21:18:27 0:20:32 0:09:52 0:10:40 smithi master centos 8.3 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/one workloads/rados_5925} 2
pass 6492750 2021-11-08 20:33:17 2021-11-08 20:59:16 2021-11-08 21:24:27 0:25:11 0:14:12 0:10:59 smithi master centos 8.3 rados/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.3_container_tools_3.0} 2-node-mgr agent/on orchestrator_cli} 2
pass 6492751 2021-11-08 20:33:18 2021-11-08 20:59:16 2021-11-08 21:40:44 0:41:28 0:31:53 0:09:35 smithi master centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-zlib supported-random-distro$/{centos_8.stream} tasks/module_selftest} 2
pass 6492752 2021-11-08 20:33:19 2021-11-08 20:59:16 2021-11-08 21:20:12 0:20:56 0:12:43 0:08:13 smithi master centos 8.3 rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
dead 6492753 2021-11-08 20:33:20 2021-11-08 20:59:17 2021-11-09 09:14:06 12:14:49 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 6492754 2021-11-08 20:33:21 2021-11-08 21:03:38 2021-11-08 21:49:14 0:45:36 0:35:44 0:09:52 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
pass 6492755 2021-11-08 20:33:22 2021-11-08 21:03:58 2021-11-08 21:47:59 0:44:01 0:34:42 0:09:19 smithi master rhel 8.4 rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_python} 2
fail 6492756 2021-11-08 20:33:23 2021-11-08 21:06:49 2021-11-08 21:41:01 0:34:12 0:22:57 0:11:15 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} tasks/dashboard} 2
Failure Reason:

Test failure: test_ganesha (unittest.loader._FailedTest)

dead 6492757 2021-11-08 20:33:24 2021-11-08 21:07:49 2021-11-09 09:18:48 12:10:59 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 6492758 2021-11-08 20:33:25 2021-11-08 21:08:09 2021-11-08 21:45:45 0:37:36 0:27:09 0:10:27 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6492759 2021-11-08 20:33:26 2021-11-08 21:08:30 2021-11-08 22:23:36 1:15:06 1:05:22 0:09:44 smithi master centos 8.stream rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/mon} 1
dead 6492760 2021-11-08 20:33:27 2021-11-08 21:08:40 2021-11-09 09:20:08 12:11:28 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 6492761 2021-11-08 20:33:28 2021-11-08 21:09:51 2021-11-08 21:45:06 0:35:15 0:25:57 0:09:18 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6492762 2021-11-08 20:33:28 2021-11-08 21:09:51 2021-11-08 21:35:04 0:25:13 0:15:12 0:10:01 smithi master centos 8.3 rados/cephadm/smoke/{0-nvme-loop agent/on distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} 2
fail 6492763 2021-11-08 20:33:29 2021-11-08 21:09:51 2021-11-08 21:38:49 0:28:58 0:19:00 0:09:58 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi035 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a1fc5c50f942497cfdd5964d2d4cdf839a489d78 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

dead 6492764 2021-11-08 20:33:30 2021-11-08 21:10:02 2021-11-09 09:21:09 12:11:07 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout
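
For reference, the statuses above tally to 17 pass, 4 fail, and 6 dead across jobs 6492738 through 6492764. A quick way to recompute that from a saved plain-text copy of this listing (filename hypothetical):

    # Tally job statuses from a saved copy of this listing; only job rows
    # begin with a status keyword, so other lines are skipped.
    from collections import Counter

    counts = Counter()
    with open("teuthology_run.txt") as f:  # hypothetical saved copy
        for line in f:
            fields = line.split()
            if fields and fields[0] in {"pass", "fail", "dead"}:
                counts[fields[0]] += 1

    print(dict(counts))  # for this run: {'pass': 17, 'fail': 4, 'dead': 6}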