Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 6369602 2021-09-01 10:06:28 2021-09-01 10:06:29 2021-09-01 12:46:17 2:39:48 2:10:38 0:29:10 smithi master centos 8.3 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} 1
pass 6369603 2021-09-01 10:06:28 2021-09-01 10:06:30 2021-09-01 12:45:25 2:38:55 2:11:32 0:27:23 smithi master centos 8.3 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} 1
pass 6369604 2021-09-01 10:06:28 2021-09-01 10:06:30 2021-09-01 12:45:07 2:38:37 2:10:50 0:27:47 smithi master centos 8.3 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} 1
pass 6369605 2021-09-01 10:06:28 2021-09-01 10:06:31 2021-09-01 12:43:34 2:37:03 2:09:32 0:27:31 smithi master centos 8.3 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} 1
pass 6369606 2021-09-01 10:06:28 2021-09-01 10:06:31 2021-09-01 12:35:43 2:29:12 2:04:37 0:24:35 smithi master centos 8.3 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} 1
pass 6369607 2021-09-01 10:06:28 2021-09-01 10:06:31 2021-09-01 12:47:58 2:41:27 2:09:20 0:32:07 smithi master centos 8.3 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} 1
pass 6369608 2021-09-01 10:06:28 2021-09-01 10:06:31 2021-09-01 12:35:35 2:29:04 2:04:09 0:24:55 smithi master centos 8.3 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} 1
pass 6369609 2021-09-01 10:06:28 2021-09-01 10:06:31 2021-09-01 12:45:32 2:39:01 2:11:28 0:27:33 smithi master centos 8.3 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} 1
pass 6369610 2021-09-01 10:06:28 2021-09-01 10:06:32 2021-09-01 12:43:33 2:37:01 2:10:03 0:26:58 smithi master centos 8.3 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} 1
pass 6369611 2021-09-01 10:06:28 2021-09-01 10:06:32 2021-09-01 12:43:07 2:36:35 2:10:14 0:26:21 smithi master centos 8.3 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} 1
fail 6369612 2021-09-01 10:06:29 2021-09-01 10:06:32 2021-09-01 10:33:20 0:26:48 0:15:42 0:11:06 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi131 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369613 2021-09-01 10:06:29 2021-09-01 10:06:33 2021-09-01 10:27:33 0:21:00 0:14:07 0:06:53 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi084 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369614 2021-09-01 10:06:29 2021-09-01 10:06:35 2021-09-01 10:30:59 0:24:24 0:14:56 0:09:28 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi025 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369615 2021-09-01 10:06:29 2021-09-01 10:06:35 2021-09-01 10:31:42 0:25:07 0:15:49 0:09:18 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi026 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369616 2021-09-01 10:06:29 2021-09-01 10:06:35 2021-09-01 10:29:43 0:23:08 0:14:55 0:08:13 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369617 2021-09-01 10:06:29 2021-09-01 10:06:35 2021-09-01 10:33:14 0:26:39 0:16:02 0:10:37 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi091 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369618 2021-09-01 10:06:29 2021-09-01 10:06:35 2021-09-01 10:27:20 0:20:45 0:14:41 0:06:04 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi060 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369619 2021-09-01 10:06:29 2021-09-01 10:06:36 2021-09-01 10:30:38 0:24:02 0:15:07 0:08:55 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369620 2021-09-01 10:06:29 2021-09-01 10:06:37 2021-09-01 10:30:04 0:23:27 0:15:02 0:08:25 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi142 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369621 2021-09-01 10:06:29 2021-09-01 10:06:37 2021-09-01 10:28:02 0:21:25 0:14:09 0:07:16 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi117 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 6369622 2021-09-01 10:06:32 2021-09-01 10:06:37 2021-09-01 13:03:54 2:57:17 2:17:09 0:40:08 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} 1
pass 6369623 2021-09-01 10:06:33 2021-09-01 10:06:38 2021-09-01 13:05:31 2:58:53 2:17:31 0:41:22 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} 1
pass 6369624 2021-09-01 10:06:33 2021-09-01 10:06:38 2021-09-01 13:05:41 2:59:03 2:18:10 0:40:53 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} 1
pass 6369625 2021-09-01 10:06:33 2021-09-01 10:06:39 2021-09-01 13:04:48 2:58:09 2:17:25 0:40:44 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} 1
pass 6369626 2021-09-01 10:06:33 2021-09-01 10:06:39 2021-09-01 13:06:07 2:59:28 2:21:34 0:37:54 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} 1
pass 6369627 2021-09-01 10:06:34 2021-09-01 10:06:39 2021-09-01 13:06:04 2:59:25 2:17:18 0:42:07 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} 1
pass 6369628 2021-09-01 10:06:34 2021-09-01 10:06:40 2021-09-01 13:00:42 2:54:02 2:13:48 0:40:14 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} 1
pass 6369629 2021-09-01 10:06:34 2021-09-01 10:06:40 2021-09-01 13:05:34 2:58:54 2:17:32 0:41:22 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} 1
pass 6369630 2021-09-01 10:06:34 2021-09-01 10:06:40 2021-09-01 13:04:52 2:58:12 2:15:48 0:42:24 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} 1
pass 6369631 2021-09-01 10:06:34 2021-09-01 10:06:41 2021-09-01 13:01:35 2:54:54 2:15:04 0:39:50 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} 1
pass 6369632 2021-09-01 10:06:36 2021-09-01 10:06:44 2021-09-01 10:43:17 0:36:33 0:23:00 0:13:33 smithi master centos 8.stream rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 2
pass 6369633 2021-09-01 10:06:36 2021-09-01 10:06:44 2021-09-01 10:52:36 0:45:52 0:36:05 0:09:47 smithi master centos 8.stream rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 2
pass 6369634 2021-09-01 10:06:38 2021-09-01 10:06:45 2021-09-01 10:40:53 0:34:08 0:24:08 0:10:00 smithi master centos 8.stream rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 2
pass 6369635 2021-09-01 10:06:38 2021-09-01 10:06:45 2021-09-01 10:52:49 0:46:04 0:32:24 0:13:40 smithi master centos 8.stream rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 2
pass 6369636 2021-09-01 10:06:38 2021-09-01 10:06:45 2021-09-01 10:49:44 0:42:59 0:31:49 0:11:10 smithi master centos 8.stream rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 2
pass 6369637 2021-09-01 10:06:38 2021-09-01 10:06:45 2021-09-01 10:42:22 0:35:37 0:26:03 0:09:34 smithi master centos 8.stream rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 2
pass 6369638 2021-09-01 10:06:38 2021-09-01 10:06:46 2021-09-01 10:43:36 0:36:50 0:24:40 0:12:10 smithi master centos 8.stream rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 2
pass 6369639 2021-09-01 10:06:38 2021-09-01 10:06:47 2021-09-01 10:53:30 0:46:43 0:34:56 0:11:47 smithi master centos 8.stream rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 2
pass 6369640 2021-09-01 10:06:38 2021-09-01 10:06:47 2021-09-01 10:58:19 0:51:32 0:39:55 0:11:37 smithi master centos 8.stream rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 2
pass 6369641 2021-09-01 10:06:38 2021-09-01 10:06:47 2021-09-01 10:46:44 0:39:57 0:27:39 0:12:18 smithi master centos 8.stream rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 2
pass 6369642 2021-09-01 10:06:39 2021-09-01 10:06:47 2021-09-01 10:38:38 0:31:51 0:18:43 0:13:08 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} tasks/rados_cls_all} 2
fail 6369643 2021-09-01 10:06:40 2021-09-01 10:06:49 2021-09-01 10:37:06 0:30:17 0:18:33 0:11:44 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} tasks/rados_cls_all} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi012 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

pass 6369644 2021-09-01 10:06:40 2021-09-01 10:06:49 2021-09-01 10:38:37 0:31:48 0:18:16 0:13:32 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} tasks/rados_cls_all} 2
pass 6369645 2021-09-01 10:06:40 2021-09-01 10:06:50 2021-09-01 10:38:26 0:31:36 0:18:32 0:13:04 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} tasks/rados_cls_all} 2
fail 6369646 2021-09-01 10:06:40 2021-09-01 10:06:50 2021-09-01 10:36:29 0:29:39 0:18:11 0:11:28 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} tasks/rados_cls_all} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi139 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

pass 6369647 2021-09-01 10:06:40 2021-09-01 10:06:52 2021-09-01 10:35:14 0:28:22 0:18:14 0:10:08 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} tasks/rados_cls_all} 2
fail 6369648 2021-09-01 10:06:41 2021-09-01 10:06:52 2021-09-01 10:38:13 0:31:21 0:18:10 0:13:11 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} tasks/rados_cls_all} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi129 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

pass 6369649 2021-09-01 10:06:41 2021-09-01 10:06:52 2021-09-01 10:36:21 0:29:29 0:18:57 0:10:32 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} tasks/rados_cls_all} 2
pass 6369650 2021-09-01 10:06:41 2021-09-01 10:06:52 2021-09-01 10:38:47 0:31:55 0:18:11 0:13:44 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} tasks/rados_cls_all} 2
fail 6369651 2021-09-01 10:06:41 2021-09-01 10:06:55 2021-09-01 10:39:02 0:32:07 0:18:25 0:13:42 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} tasks/rados_cls_all} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi071 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

fail 6369652 2021-09-01 10:06:42 2021-09-01 10:06:55 2021-09-01 11:00:57 0:54:02 0:40:48 0:13:14 smithi master ubuntu 20.04 rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi156 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

fail 6369653 2021-09-01 10:06:42 2021-09-01 10:06:55 2021-09-01 10:59:31 0:52:36 0:41:23 0:11:13 smithi master ubuntu 20.04 rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi086 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

fail 6369654 2021-09-01 10:06:42 2021-09-01 10:06:55 2021-09-01 11:00:54 0:53:59 0:41:02 0:12:57 smithi master ubuntu 20.04 rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi123 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

pass 6369655 2021-09-01 10:06:42 2021-09-01 10:06:56 2021-09-01 11:47:03 1:40:07 1:26:29 0:13:38 smithi master ubuntu 20.04 rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
pass 6369656 2021-09-01 10:06:42 2021-09-01 10:06:57 2021-09-01 11:45:52 1:38:55 1:26:54 0:12:01 smithi master ubuntu 20.04 rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
fail 6369657 2021-09-01 10:06:42 2021-09-01 10:06:58 2021-09-01 11:00:46 0:53:48 0:41:25 0:12:23 smithi master ubuntu 20.04 rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi035 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

fail 6369658 2021-09-01 10:06:43 2021-09-01 10:06:58 2021-09-01 11:02:51 0:55:53 0:41:55 0:13:58 smithi master ubuntu 20.04 rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi096 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

fail 6369659 2021-09-01 10:06:43 2021-09-01 10:06:59 2021-09-01 11:01:57 0:54:58 0:41:55 0:13:03 smithi master ubuntu 20.04 rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi013 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

pass 6369660 2021-09-01 10:06:43 2021-09-01 10:06:59 2021-09-01 11:46:50 1:39:51 1:25:54 0:13:57 smithi master ubuntu 20.04 rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
pass 6369661 2021-09-01 10:06:43 2021-09-01 10:07:01 2021-09-01 11:47:08 1:40:07 1:26:15 0:13:52 smithi master ubuntu 20.04 rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
pass 6369662 2021-09-01 10:06:43 2021-09-01 10:07:02 2021-09-01 11:02:49 0:55:47 0:45:12 0:10:35 smithi master ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 6369663 2021-09-01 10:06:43 2021-09-01 10:07:02 2021-09-01 10:54:11 0:47:09 0:38:54 0:08:15 smithi master ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 6369664 2021-09-01 10:06:44 2021-09-01 10:07:02 2021-09-01 10:55:49 0:48:47 0:38:29 0:10:18 smithi master ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 6369665 2021-09-01 10:06:44 2021-09-01 10:07:02 2021-09-01 10:36:40 0:29:38 0:20:30 0:09:08 smithi master ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 6369666 2021-09-01 10:06:44 2021-09-01 10:07:02 2021-09-01 11:23:17 1:16:15 1:03:31 0:12:44 smithi master ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 6369667 2021-09-01 10:06:44 2021-09-01 10:07:03 2021-09-01 10:51:17 0:44:14 0:33:45 0:10:29 smithi master ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 6369668 2021-09-01 10:06:44 2021-09-01 10:07:03 2021-09-01 11:14:54 1:07:51 0:56:37 0:11:14 smithi master ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 6369669 2021-09-01 10:06:44 2021-09-01 10:07:04 2021-09-01 11:17:16 1:10:12 0:45:11 0:25:01 smithi master ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 6369670 2021-09-01 10:06:44 2021-09-01 10:24:08 2021-09-01 11:13:23 0:49:15 0:40:17 0:08:58 smithi master ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
fail 6369671 2021-09-01 10:06:44 2021-09-01 10:24:08 2021-09-01 12:03:41 1:39:33 1:29:18 0:10:15 smithi master ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

wait_for_clean: failed before timeout expired

fail 6369672 2021-09-01 10:06:46 2021-09-01 10:24:19 2021-09-01 10:58:05 0:33:46 0:20:11 0:13:35 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed on smithi064 with status 1: 'sudo yum -y install ceph-radosgw'

fail 6369673 2021-09-01 10:06:46 2021-09-01 10:27:29 2021-09-01 10:58:34 0:31:05 0:22:04 0:09:01 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed on smithi164 with status 1: 'sudo yum -y install ceph-base-debuginfo'

fail 6369674 2021-09-01 10:06:46 2021-09-01 10:27:39 2021-09-01 10:58:51 0:31:12 0:22:15 0:08:57 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed on smithi117 with status 1: 'sudo yum -y install ceph-base-debuginfo'

fail 6369675 2021-09-01 10:06:46 2021-09-01 10:28:10 2021-09-01 10:53:47 0:25:37 0:14:57 0:10:40 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed on smithi186 with status 1: 'sudo yum -y install ceph-radosgw'

fail 6369676 2021-09-01 10:06:46 2021-09-01 10:28:30 2021-09-01 11:06:58 0:38:28 0:27:41 0:10:47 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed on smithi151 with status 1: 'sudo yum -y install ceph-base-debuginfo'

pass 6369677 2021-09-01 10:06:46 2021-09-01 10:29:51 2021-09-01 12:17:11 1:47:20 1:37:16 0:10:04 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
pass 6369678 2021-09-01 10:06:46 2021-09-01 10:30:11 2021-09-01 12:17:38 1:47:27 1:37:37 0:09:50 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
fail 6369679 2021-09-01 10:06:46 2021-09-01 10:30:41 2021-09-01 11:08:34 0:37:53 0:26:07 0:11:46 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed on smithi090 with status 1: 'sudo yum -y install ceph-test'

pass 6369680 2021-09-01 10:06:46 2021-09-01 10:30:51 2021-09-01 12:30:34 1:59:43 1:49:22 0:10:21 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
fail 6369681 2021-09-01 10:06:46 2021-09-01 10:31:02 2021-09-01 11:08:17 0:37:15 0:26:09 0:11:06 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed on smithi111 with status 1: 'sudo yum -y install ceph-test'

fail 6369682 2021-09-01 10:06:48 2021-09-01 10:31:02 2021-09-01 11:06:49 0:35:47 0:25:51 0:09:56 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

Command failed on smithi168 with status 1: 'sudo yum -y install ceph-test'

fail 6369683 2021-09-01 10:06:48 2021-09-01 10:31:52 2021-09-01 11:24:19 0:52:27 0:40:28 0:11:59 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi131 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

fail 6369684 2021-09-01 10:06:48 2021-09-01 10:33:23 2021-09-01 11:18:24 0:45:01 0:35:46 0:09:15 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi091 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

pass 6369685 2021-09-01 10:06:48 2021-09-01 10:33:23 2021-09-01 11:14:33 0:41:10 0:31:22 0:09:48 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
fail 6369686 2021-09-01 10:06:48 2021-09-01 10:33:53 2021-09-01 11:18:18 0:44:25 0:33:37 0:10:48 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi083 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

pass 6369687 2021-09-01 10:06:48 2021-09-01 10:35:24 2021-09-01 11:15:29 0:40:05 0:28:20 0:11:45 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
pass 6369688 2021-09-01 10:06:48 2021-09-01 10:36:24 2021-09-01 11:21:22 0:44:58 0:34:54 0:10:04 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
pass 6369689 2021-09-01 10:06:48 2021-09-01 10:36:34 2021-09-01 11:21:40 0:45:06 0:34:29 0:10:37 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
fail 6369690 2021-09-01 10:06:48 2021-09-01 10:36:44 2021-09-01 11:18:24 0:41:40 0:31:45 0:09:55 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi023 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

fail 6369691 2021-09-01 10:06:48 2021-09-01 10:37:15 2021-09-01 11:14:49 0:37:34 0:27:03 0:10:31 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi012 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

fail 6369692 2021-09-01 10:06:49 2021-09-01 10:37:15 2021-09-01 11:11:10 0:33:55 0:26:58 0:06:57 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi129 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369693 2021-09-01 10:06:51 2021-09-01 10:38:16 2021-09-01 11:12:09 0:33:53 0:26:13 0:07:40 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi017 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369694 2021-09-01 10:06:51 2021-09-01 10:38:27 2021-09-01 11:17:47 0:39:20 0:31:34 0:07:46 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi051 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369695 2021-09-01 10:06:51 2021-09-01 10:38:48 2021-09-01 11:12:07 0:33:19 0:25:53 0:07:26 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi119 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369696 2021-09-01 10:06:51 2021-09-01 10:38:48 2021-09-01 11:12:02 0:33:14 0:25:32 0:07:42 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi081 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369697 2021-09-01 10:06:51 2021-09-01 10:38:48 2021-09-01 11:11:55 0:33:07 0:26:47 0:06:20 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi071 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369698 2021-09-01 10:06:51 2021-09-01 10:39:09 2021-09-01 11:10:44 0:31:35 0:25:02 0:06:33 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi107 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369699 2021-09-01 10:06:51 2021-09-01 10:39:49 2021-09-01 11:16:03 0:36:14 0:28:23 0:07:51 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369700 2021-09-01 10:06:51 2021-09-01 10:40:59 2021-09-01 11:11:39 0:30:40 0:23:10 0:07:30 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi042 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369701 2021-09-01 10:06:51 2021-09-01 10:42:30 2021-09-01 11:12:30 0:30:00 0:22:14 0:07:46 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi087 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=976cdbf8a500952ede15b67361cc0d0d95517ae5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6369702 2021-09-01 10:06:52 2021-09-01 10:43:21 2021-09-01 15:47:35 5:04:14 4:52:37 0:11:37 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds

fail 6369703 2021-09-01 10:06:52 2021-09-01 10:43:42 2021-09-01 12:04:32 1:20:50 1:05:02 0:15:48 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds

dead 6369704 2021-09-01 10:06:52 2021-09-01 10:46:52 2021-09-01 23:01:31 12:14:39 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

hit max job timeout

fail 6369705 2021-09-01 10:06:52 2021-09-01 10:51:23 2021-09-01 11:26:18 0:34:55 0:21:00 0:13:55 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369706 2021-09-01 10:06:52 2021-09-01 10:52:53 2021-09-01 11:26:21 0:33:28 0:20:57 0:12:31 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369707 2021-09-01 10:06:52 2021-09-01 10:52:54 2021-09-01 11:27:25 0:34:31 0:21:16 0:13:15 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369708 2021-09-01 10:06:52 2021-09-01 10:53:54 2021-09-01 11:27:14 0:33:20 0:21:00 0:12:20 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369709 2021-09-01 10:06:53 2021-09-01 10:53:54 2021-09-01 11:31:45 0:37:51 0:21:12 0:16:39 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369710 2021-09-01 10:06:53 2021-09-01 10:58:15 2021-09-01 11:31:46 0:33:31 0:21:04 0:12:27 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369711 2021-09-01 10:06:53 2021-09-01 10:58:25 2021-09-01 11:32:02 0:33:37 0:21:00 0:12:37 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369712 2021-09-01 10:06:53 2021-09-01 10:58:55 2021-09-01 11:30:22 0:31:27 0:24:43 0:06:44 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Found coredumps on ubuntu@smithi086.front.sepia.ceph.com

pass 6369713 2021-09-01 10:06:54 2021-09-01 10:59:36 2021-09-01 11:33:39 0:34:03 0:25:56 0:08:07 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6369714 2021-09-01 10:06:54 2021-09-01 11:00:56 2021-09-01 11:33:52 0:32:56 0:26:23 0:06:33 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Found coredumps on ubuntu@smithi035.front.sepia.ceph.com

pass 6369715 2021-09-01 10:06:54 2021-09-01 11:00:56 2021-09-01 11:32:05 0:31:09 0:25:08 0:06:01 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6369716 2021-09-01 10:06:54 2021-09-01 11:01:07 2021-09-01 11:35:08 0:34:01 0:26:19 0:07:42 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Found coredumps on ubuntu@smithi013.front.sepia.ceph.com

pass 6369717 2021-09-01 10:06:54 2021-09-01 11:02:07 2021-09-01 11:35:55 0:33:48 0:25:18 0:08:30 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6369718 2021-09-01 10:06:54 2021-09-01 11:02:58 2021-09-01 11:36:05 0:33:07 0:25:19 0:07:48 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6369719 2021-09-01 10:06:54 2021-09-01 11:02:58 2021-09-01 11:40:28 0:37:30 0:26:23 0:11:07 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6369720 2021-09-01 10:06:54 2021-09-01 11:06:59 2021-09-01 11:37:59 0:31:00 0:25:16 0:05:44 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6369721 2021-09-01 10:06:54 2021-09-01 11:06:59 2021-09-01 11:23:37 0:16:38 0:07:25 0:09:13 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi152 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 62467c76-0b16-11ec-8c25-001a4aab830c -- ceph mon dump -f json'

pass 6369722 2021-09-01 10:06:55 2021-09-01 11:08:19 2021-09-01 11:36:09 0:27:50 0:16:09 0:11:41 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/rgw 3-final} 2
pass 6369723 2021-09-01 10:06:56 2021-09-01 11:08:39 2021-09-01 11:35:46 0:27:07 0:15:54 0:11:13 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/rgw 3-final} 2
pass 6369724 2021-09-01 10:06:56 2021-09-01 11:10:50 2021-09-01 11:36:17 0:25:27 0:16:11 0:09:16 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/rgw 3-final} 2
pass 6369725 2021-09-01 10:06:56 2021-09-01 11:11:20 2021-09-01 11:38:41 0:27:21 0:16:50 0:10:31 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/rgw 3-final} 2
pass 6369726 2021-09-01 10:06:56 2021-09-01 11:11:41 2021-09-01 11:38:55 0:27:14 0:16:20 0:10:54 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/rgw 3-final} 2
pass 6369727 2021-09-01 10:06:56 2021-09-01 11:12:01 2021-09-01 11:39:40 0:27:39 0:16:38 0:11:01 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/rgw 3-final} 2
pass 6369728 2021-09-01 10:06:56 2021-09-01 11:12:11 2021-09-01 11:39:30 0:27:19 0:16:37 0:10:42 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/rgw 3-final} 2
pass 6369729 2021-09-01 10:06:56 2021-09-01 11:12:11 2021-09-01 11:39:13 0:27:02 0:16:42 0:10:20 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/rgw 3-final} 2
pass 6369730 2021-09-01 10:06:56 2021-09-01 11:12:12 2021-09-01 11:39:28 0:27:16 0:16:28 0:10:48 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/rgw 3-final} 2
pass 6369731 2021-09-01 10:06:56 2021-09-01 11:12:32 2021-09-01 11:41:36 0:29:04 0:17:03 0:12:01 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/rgw 3-final} 2
fail 6369732 2021-09-01 10:06:57 2021-09-01 11:14:42 2021-09-01 11:49:03 0:34:21 0:21:52 0:12:29 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369733 2021-09-01 10:06:57 2021-09-01 11:14:53 2021-09-01 11:49:41 0:34:48 0:22:05 0:12:43 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369734 2021-09-01 10:06:57 2021-09-01 11:15:33 2021-09-01 11:50:49 0:35:16 0:21:18 0:13:58 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369735 2021-09-01 10:06:57 2021-09-01 11:17:23 2021-09-01 11:51:37 0:34:14 0:21:06 0:13:08 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369736 2021-09-01 10:06:57 2021-09-01 11:18:24 2021-09-01 11:52:01 0:33:37 0:20:54 0:12:43 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369737 2021-09-01 10:06:57 2021-09-01 11:18:34 2021-09-01 11:55:23 0:36:49 0:21:25 0:15:24 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369738 2021-09-01 10:06:57 2021-09-01 11:21:25 2021-09-01 11:56:59 0:35:34 0:21:14 0:14:20 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369739 2021-09-01 10:06:57 2021-09-01 11:21:45 2021-09-01 11:57:35 0:35:50 0:21:12 0:14:38 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369740 2021-09-01 10:06:57 2021-09-01 11:23:45 2021-09-01 11:59:41 0:35:56 0:21:18 0:14:38 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369741 2021-09-01 10:06:57 2021-09-01 11:26:27 2021-09-01 11:59:37 0:33:10 0:21:17 0:11:53 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} 3
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6369742 2021-09-01 10:06:59 2021-09-01 11:26:27 2021-09-01 12:10:13 0:43:46 0:33:01 0:10:45 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

No module named 'tasks.ceph'

fail 6369743 2021-09-01 10:06:59 2021-09-01 11:27:23 2021-09-01 12:10:25 0:43:02 0:31:57 0:11:05 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

No module named 'tasks.ceph'

fail 6369744 2021-09-01 10:06:59 2021-09-01 11:27:24 2021-09-01 12:12:55 0:45:31 0:35:31 0:10:00 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

No module named 'tasks.ceph'

fail 6369745 2021-09-01 10:06:59 2021-09-01 11:27:44 2021-09-01 12:17:12 0:49:28 0:37:18 0:12:10 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

No module named 'tasks.ceph'

fail 6369746 2021-09-01 10:06:59 2021-09-01 11:30:25 2021-09-01 12:19:06 0:48:41 0:37:07 0:11:34 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

No module named 'tasks.ceph'

fail 6369747 2021-09-01 10:06:59 2021-09-01 11:31:56 2021-09-01 12:16:25 0:44:29 0:34:38 0:09:51 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

No module named 'tasks.ceph'

fail 6369748 2021-09-01 10:06:59 2021-09-01 11:31:57 2021-09-01 12:18:56 0:46:59 0:36:49 0:10:10 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

No module named 'tasks.ceph'

fail 6369749 2021-09-01 10:06:59 2021-09-01 11:31:58 2021-09-01 12:19:13 0:47:15 0:36:10 0:11:05 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

No module named 'tasks.ceph'

fail 6369750 2021-09-01 10:06:59 2021-09-01 11:32:08 2021-09-01 12:16:57 0:44:49 0:34:22 0:10:27 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

No module named 'tasks.ceph'

fail 6369751 2021-09-01 10:06:59 2021-09-01 11:32:09 2021-09-01 12:16:45 0:44:36 0:33:18 0:11:18 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

No module named 'tasks.ceph'