Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6501924 2021-11-13 17:24:21 2021-11-13 19:01:54 2021-11-13 19:35:56 0:34:02 0:23:11 0:10:51 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/dashboard} 2
Failure Reason:

Test failure: test_ganesha (unittest.loader._FailedTest)

dead 6501925 2021-11-13 17:24:23 2021-11-13 19:02:25 2021-11-14 07:12:51 12:10:26 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 6501926 2021-11-13 17:24:24 2021-11-13 19:03:55 2021-11-13 20:13:05 1:09:10 0:57:02 0:12:08 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
dead 6501927 2021-11-13 17:24:25 2021-11-13 19:05:46 2021-11-14 07:15:39 12:09:53 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6501928 2021-11-13 17:24:25 2021-11-13 19:07:16 2021-11-13 19:39:15 0:31:59 0:18:11 0:13:48 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi146 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8ade4cf5599c09abd17c05bca7e31dd62824e4aa TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

dead 6501929 2021-11-13 17:24:26 2021-11-13 19:09:47 2021-11-14 07:18:50 12:09:03 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 6501930 2021-11-13 17:24:27 2021-11-13 19:10:48 2021-11-13 19:45:50 0:35:02 0:24:07 0:10:55 smithi master centos 8.2 rados/cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} 2
fail 6501931 2021-11-13 17:24:28 2021-11-13 19:12:18 2021-11-13 19:35:30 0:23:12 0:12:39 0:10:33 smithi master centos 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{centos_8} tasks/prometheus} 2
Failure Reason:

Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus)

dead 6501932 2021-11-13 17:24:29 2021-11-13 19:12:19 2021-11-14 07:22:47 12:10:28 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

dead 6501933 2021-11-13 17:24:30 2021-11-13 19:12:49 2021-11-14 07:22:48 12:09:59 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6501934 2021-11-13 17:24:31 2021-11-13 19:14:10 2021-11-13 19:47:45 0:33:35 0:23:27 0:10:08 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} tasks/dashboard} 2
Failure Reason:

Test failure: test_ganesha (unittest.loader._FailedTest)

dead 6501935 2021-11-13 17:24:32 2021-11-13 19:14:30 2021-11-14 07:22:51 12:08:21 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 6501936 2021-11-13 17:24:33 2021-11-13 19:14:31 2021-11-13 19:56:57 0:42:26 0:29:47 0:12:39 smithi master centos 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 6501937 2021-11-13 17:24:34 2021-11-13 19:15:41 2021-11-13 19:43:43 0:28:02 0:16:24 0:11:38 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
dead 6501938 2021-11-13 17:24:35 2021-11-13 19:16:32 2021-11-14 07:26:59 12:10:27 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 6501939 2021-11-13 17:24:36 2021-11-13 20:32:23 3680 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
fail 6501940 2021-11-13 17:24:37 2021-11-13 19:18:53 2021-11-13 19:48:35 0:29:42 0:18:40 0:11:02 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{filestore-xfs} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi050 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8ade4cf5599c09abd17c05bca7e31dd62824e4aa TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

dead 6501941 2021-11-13 17:24:38 2021-11-13 19:19:23 2021-11-14 07:33:15 12:13:52 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6501942 2021-11-13 17:24:39 2021-11-13 19:24:04 2021-11-13 19:55:59 0:31:55 0:23:58 0:07:57 smithi master rhel 8.4 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{rhel_8} tasks/prometheus} 2
Failure Reason:

Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus)

pass 6501943 2021-11-13 17:24:40 2021-11-13 19:24:34 2021-11-13 20:11:52 0:47:18 0:34:19 0:12:59 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
dead 6501944 2021-11-13 17:24:41 2021-11-13 19:26:35 2021-11-14 07:35:05 12:08:30 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 6501945 2021-11-13 17:24:42 2021-11-13 19:26:35 2021-11-13 20:16:24 0:49:49 0:42:27 0:07:22 smithi master rhel 8.4 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1