Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 6564173 2021-12-15 14:58:17 2021-12-15 15:35:38 2021-12-15 16:25:03 0:49:25 0:31:17 0:18:08 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6564174 2021-12-15 14:58:18 2021-12-15 15:35:38 2021-12-15 16:22:52 0:47:14 0:29:43 0:17:31 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6564175 2021-12-15 14:58:19 2021-12-15 15:35:39 2021-12-15 16:23:29 0:47:50 0:30:30 0:17:20 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6564176 2021-12-15 14:58:20 2021-12-15 15:36:19 2021-12-15 16:21:10 0:44:51 0:39:41 0:05:10 smithi master rhel 8.4 rados/standalone/{supported-random-distro$/{rhel_8} workloads/crush} 1
pass 6564177 2021-12-15 14:58:21 2021-12-15 15:36:19 2021-12-15 16:23:57 0:47:38 0:29:08 0:18:30 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6564178 2021-12-15 14:58:22 2021-12-15 15:36:40 2021-12-15 17:30:47 1:54:07 1:47:01 0:07:06 smithi master rhel 8.4 rados/standalone/{supported-random-distro$/{rhel_8} workloads/erasure-code} 1
pass 6564179 2021-12-15 14:58:23 2021-12-15 15:37:10 2021-12-15 16:24:32 0:47:22 0:31:16 0:16:06 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6564180 2021-12-15 14:58:24 2021-12-15 15:37:20 2021-12-15 15:54:02 0:16:42 0:06:06 0:10:36 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/host rook/master} 3
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi109.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

dead 6564181 2021-12-15 14:58:25 2021-12-15 15:37:51 2021-12-15 15:53:19 0:15:28 smithi master centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 6564182 2021-12-15 14:58:26 2021-12-15 15:38:21 2021-12-15 16:28:32 0:50:11 0:33:53 0:16:18 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6564183 2021-12-15 14:58:27 2021-12-15 15:39:02 2021-12-15 16:24:29 0:45:27 0:31:20 0:14:07 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6564184 2021-12-15 14:58:28 2021-12-15 15:39:12 2021-12-15 16:23:44 0:44:32 0:28:35 0:15:57 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6564185 2021-12-15 14:58:29 2021-12-15 15:40:33 2021-12-15 16:51:48 1:11:15 0:58:45 0:12:30 smithi master centos 8.stream rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/misc} 1
pass 6564186 2021-12-15 14:58:30 2021-12-15 15:40:33 2021-12-15 16:24:07 0:43:34 0:30:36 0:12:58 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6564187 2021-12-15 14:58:31 2021-12-15 15:40:54 2021-12-15 16:24:31 0:43:37 0:29:36 0:14:01 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6564188 2021-12-15 14:58:32 2021-12-15 15:41:14 2021-12-15 16:25:05 0:43:51 0:31:06 0:12:45 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6564189 2021-12-15 14:58:33 2021-12-15 15:41:24 2021-12-15 16:22:53 0:41:29 0:29:39 0:11:50 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6564190 2021-12-15 14:58:34 2021-12-15 15:41:25 2021-12-15 17:17:22 1:35:57 1:23:12 0:12:45 smithi master centos 8.stream rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/osd-backfill} 1
Failure Reason:

Command failed (workunit test osd-backfill/osd-backfill-space.sh) on smithi125 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5be37ec38826711c81acd15d756786689e3a538e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd-backfill/osd-backfill-space.sh'

pass 6564191 2021-12-15 14:58:35 2021-12-15 15:42:05 2021-12-15 16:25:41 0:43:36 0:32:18 0:11:18 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6564192 2021-12-15 14:58:36 2021-12-15 15:42:15 2021-12-15 15:58:12 0:15:57 0:05:18 0:10:39 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/1-node k8s/1.21 net/flannel rook/master} 1
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi121.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

fail 6564193 2021-12-15 14:58:37 2021-12-15 15:42:16 2021-12-15 16:10:17 0:28:01 0:18:02 0:09:59 smithi master centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi099 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5be37ec38826711c81acd15d756786689e3a538e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6564194 2021-12-15 14:58:38 2021-12-15 15:42:36 2021-12-15 16:32:24 0:49:48 0:38:20 0:11:28 smithi master centos 8.stream rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/osd} 1
Failure Reason:

Command failed (workunit test osd/osd-bluefs-volume-ops.sh) on smithi079 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5be37ec38826711c81acd15d756786689e3a538e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-bluefs-volume-ops.sh'

pass 6564195 2021-12-15 14:58:39 2021-12-15 15:42:56 2021-12-15 16:23:12 0:40:16 0:29:49 0:10:27 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6564196 2021-12-15 14:58:40 2021-12-15 15:43:57 2021-12-15 16:25:24 0:41:27 0:30:57 0:10:30 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6564197 2021-12-15 14:58:41 2021-12-15 15:44:07 2021-12-15 17:15:58 1:31:51 1:22:00 0:09:51 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/radosbench} 2
fail 6564198 2021-12-15 14:58:42 2021-12-15 15:44:07 2021-12-15 16:36:36 0:52:29 0:41:09 0:11:20 smithi master centos 8.3 rados/standalone/{supported-random-distro$/{centos_8} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi072 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5be37ec38826711c81acd15d756786689e3a538e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'