Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 7306211 2023-06-16 14:22:55 2023-06-16 14:23:40 2023-06-16 15:41:51 1:18:11 1:06:28 0:11:43 smithi main rhel 8.4 upgrade:octopus-x/parallel/{0-distro/rhel_8.4_container_tools_3.0 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} 2
Failure Reason:

Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:octopus shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ac610b90-0c55-11ee-9b2b-001a4aab830c -e sha1=e13c4c071e015f302e7603f906a16712eb60cafb -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

pass 7306212 2023-06-16 14:22:56 2023-06-16 14:23:41 2023-06-16 17:00:54 2:37:13 2:18:10 0:19:03 smithi main ubuntu 20.04 upgrade:octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-octopus-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/bluestore-hybrid 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-quincy 7-final-workload mon_election/connectivity thrashosds-health ubuntu_20.04} 5
fail 7306213 2023-06-16 14:22:57 2023-06-16 14:23:41 2023-06-16 16:06:19 1:42:38 1:27:34 0:15:04 smithi main ubuntu 20.04 upgrade:octopus-x/stress-split-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{radosbench rbd-cls rbd-import-export rbd_api readwrite rgw_ragweed_prepare snaps-few-objects} 5-finish-upgrade 6-quincy 8-final-workload/{rbd-python snaps-many-objects} mon_election/classic objectstore/filestore-xfs thrashosds-health ubuntu_20.04} 5
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi190 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

pass 7306214 2023-06-16 14:22:57 2023-06-16 14:23:42 2023-06-16 16:48:02 2:24:20 2:10:51 0:13:29 smithi main rhel 8.4 upgrade:octopus-x/stress-split/{0-distro/rhel_8.4_container_tools_3.0 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} 2
fail 7306215 2023-06-16 14:22:58 2023-06-16 14:23:42 2023-06-16 15:14:30 0:50:48 0:36:36 0:14:12 smithi main rhel 8.4 upgrade:octopus-x/parallel/{0-distro/rhel_8.4_container_tools_rhel8 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi152 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail 7306216 2023-06-16 14:22:59 2023-06-16 14:23:43 2023-06-16 15:17:00 0:53:17 0:36:50 0:16:27 smithi main ubuntu 20.04 upgrade:octopus-x/parallel/{0-distro/ubuntu_20.04 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi116 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail 7306217 2023-06-16 14:23:00 2023-06-16 14:23:43 2023-06-16 16:36:48 2:13:05 1:57:46 0:15:19 smithi main rhel 8.4 upgrade:octopus-x/stress-split/{0-distro/rhel_8.4_container_tools_rhel8 0-roles 1-start 2-first-half-tasks/rbd_api 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi064 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail 7306218 2023-06-16 14:23:00 2023-06-16 14:23:43 2023-06-16 15:15:26 0:51:43 0:34:58 0:16:45 smithi main centos 8.stream upgrade:octopus-x/parallel/{0-distro/centos_8.stream_container_tools 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi046 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail 7306219 2023-06-16 14:23:01 2023-06-16 14:23:44 2023-06-16 14:49:15 0:25:31 0:08:20 0:17:11 smithi main upgrade:octopus-x/rgw-multisite/{clusters frontend overrides realm tasks upgrade/primary} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=octopus

pass 7306220 2023-06-16 14:23:02 2023-06-16 14:23:44 2023-06-16 17:15:09 2:51:25 2:32:37 0:18:48 smithi main ubuntu 20.04 upgrade:octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-octopus-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/bluestore-hybrid 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-quincy 7-final-workload mon_election/classic thrashosds-health ubuntu_20.04} 5
dead 7306221 2023-06-16 14:23:03 2023-06-16 14:23:45 2023-06-16 14:42:04 0:18:19 smithi main ubuntu 20.04 upgrade:octopus-x/stress-split-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{radosbench rbd-cls rbd-import-export rbd_api readwrite rgw_ragweed_prepare snaps-few-objects} 5-finish-upgrade 6-quincy 8-final-workload/{rbd-python snaps-many-objects} mon_election/classic objectstore/bluestore-hybrid thrashosds-health ubuntu_20.04} 5
Failure Reason:

SSH connection to smithi161 was lost: 'sudo apt-get update'

dead 7306222 2023-06-16 14:23:03 2023-06-16 14:23:45 2023-06-16 14:23:47 0:00:02 smithi main ubuntu 20.04 upgrade:octopus-x/stress-split/{0-distro/ubuntu_20.04 0-roles 1-start 2-first-half-tasks/readwrite 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} 2
Failure Reason:

Error reimaging machines: 500 Server Error: Internal Server Error for url: http://fog.front.sepia.ceph.com/fog/host/266/task

fail 7306223 2023-06-16 14:23:04 2023-06-16 14:23:46 2023-06-16 15:21:18 0:57:32 0:39:26 0:18:06 smithi main centos 8.stream upgrade:octopus-x/parallel/{0-distro/centos_8.stream_container_tools_crun 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi086 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail 7306224 2023-06-16 14:23:05 2023-06-16 14:23:46 2023-06-16 15:20:07 0:56:21 0:38:41 0:17:40 smithi main rhel 8.4 upgrade:octopus-x/parallel/{0-distro/rhel_8.4_container_tools_3.0 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi019 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail 7306225 2023-06-16 14:23:06 2023-06-16 14:23:46 2023-06-16 16:38:00 2:14:14 1:55:17 0:18:57 smithi main centos 8.stream upgrade:octopus-x/stress-split/{0-distro/centos_8.stream_container_tools 0-roles 1-start 2-first-half-tasks/snaps-few-objects 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi131 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

pass 7306226 2023-06-16 14:23:07 2023-06-16 14:23:47 2023-06-16 17:01:33 2:37:46 2:17:14 0:20:32 smithi main ubuntu 20.04 upgrade:octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-octopus-install/octopus 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/filestore-xfs 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-quincy 7-final-workload mon_election/connectivity thrashosds-health ubuntu_20.04} 5
fail 7306227 2023-06-16 14:23:07 2023-06-16 14:23:47 2023-06-16 14:42:19 0:18:32 smithi main ubuntu 20.04 upgrade:octopus-x/stress-split-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{radosbench rbd-cls rbd-import-export rbd_api readwrite rgw_ragweed_prepare snaps-few-objects} 5-finish-upgrade 6-quincy 8-final-workload/{rbd-python snaps-many-objects} mon_election/connectivity objectstore/filestore-xfs thrashosds-health ubuntu_20.04} 5
Failure Reason:

Command failed on smithi161 with status 100: 'sudo apt-get clean'

fail 7306228 2023-06-16 14:23:08 2023-06-16 14:33:59 2023-06-16 15:25:02 0:51:03 0:38:38 0:12:25 smithi main rhel 8.4 upgrade:octopus-x/parallel/{0-distro/rhel_8.4_container_tools_rhel8 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi060 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail 7306229 2023-06-16 14:23:09 2023-06-16 14:37:20 2023-06-16 16:42:37 2:05:17 1:55:16 0:10:01 smithi main centos 8.stream upgrade:octopus-x/stress-split/{0-distro/centos_8.stream_container_tools_crun 0-roles 1-start 2-first-half-tasks/radosbench 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi153 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail 7306230 2023-06-16 14:23:10 2023-06-16 14:37:21 2023-06-16 15:27:03 0:49:42 0:37:17 0:12:25 smithi main ubuntu 20.04 upgrade:octopus-x/parallel/{0-distro/ubuntu_20.04 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi018 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail 7306231 2023-06-16 14:23:10 2023-06-16 14:39:22 2023-06-16 16:36:29 1:57:07 1:49:40 0:07:27 smithi main rhel 8.4 upgrade:octopus-x/stress-split/{0-distro/rhel_8.4_container_tools_3.0 0-roles 1-start 2-first-half-tasks/rbd-cls 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi134 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail 7306232 2023-06-16 14:23:11 2023-06-16 14:39:22 2023-06-16 15:24:33 0:45:11 0:35:05 0:10:06 smithi main centos 8.stream upgrade:octopus-x/parallel/{0-distro/centos_8.stream_container_tools 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail 7306233 2023-06-16 14:23:12 2023-06-16 14:39:33 2023-06-16 14:57:03 0:17:30 0:07:48 0:09:42 smithi main upgrade:octopus-x/rgw-multisite/{clusters frontend overrides realm tasks upgrade/secondary} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=octopus

pass 7306234 2023-06-16 14:23:13 2023-06-16 14:39:33 2023-06-16 16:39:50 2:00:17 1:46:25 0:13:52 smithi main ubuntu 20.04 upgrade:octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-octopus-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/filestore-xfs 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-quincy 7-final-workload mon_election/classic thrashosds-health ubuntu_20.04} 5
dead 7306235 2023-06-16 14:23:13 2023-06-16 14:40:44 2023-06-16 14:47:50 0:07:06 smithi main ubuntu 20.04 upgrade:octopus-x/stress-split-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{radosbench rbd-cls rbd-import-export rbd_api readwrite rgw_ragweed_prepare snaps-few-objects} 5-finish-upgrade 6-quincy 8-final-workload/{rbd-python snaps-many-objects} mon_election/connectivity objectstore/bluestore-hybrid thrashosds-health ubuntu_20.04} 5
Failure Reason:

Error reimaging machines: Expected smithi161's OS to be ubuntu 20.04 but found centos 8

dead 7306236 2023-06-16 14:23:14 2023-06-16 14:42:15 2023-06-16 14:49:49 0:07:34 smithi main centos 8.stream upgrade:octopus-x/parallel/{0-distro/centos_8.stream_container_tools_crun 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} 2
Failure Reason:

SSH connection to smithi161 was lost: 'sudo yum install -y kernel'

fail 7306237 2023-06-16 14:23:15 2023-06-16 14:42:25 2023-06-16 16:28:43 1:46:18 1:35:30 0:10:48 smithi main rhel 8.4 upgrade:octopus-x/stress-split/{0-distro/rhel_8.4_container_tools_rhel8 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi057 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'