Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7371829 2023-08-17 14:38:32 2023-08-17 15:19:32 2023-08-17 18:47:40 3:28:08 3:19:06 0:09:02 smithi main ubuntu 20.04 rbd/cli_v1/{base/install clusters/{fixed-1 openstack} features/format-1 msgr-failures/few objectstore/bluestore-stupid pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_generic} 1
Failure Reason:

Command failed (workunit test rbd/cli_generic.sh) on smithi142 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e78f2dc97637da188e6292122efedae3d18948ca TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'

pass 7371830 2023-08-17 14:38:33 2023-08-17 15:19:32 2023-08-17 16:32:54 1:13:22 1:04:44 0:08:38 smithi main rhel 8.4 rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{rhel_8} 4-cache-path 5-cache-mode/ssd 6-cache-size/5G 7-workloads/qemu_xfstests} 2
fail 7371831 2023-08-17 14:38:34 2023-08-17 15:19:42 2023-08-17 18:53:29 3:33:47 3:24:27 0:09:20 smithi main rhel 8.4 rbd/cli/{base/install clusters/{fixed-1 openstack} features/layering msgr-failures/few objectstore/filestore-xfs pool/small-cache-pool supported-random-distro$/{rhel_8} workloads/rbd_cli_generic} 1
Failure Reason:

Command failed (workunit test rbd/cli_generic.sh) on smithi179 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e78f2dc97637da188e6292122efedae3d18948ca TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'

fail 7371832 2023-08-17 14:38:35 2023-08-17 15:20:03 2023-08-17 16:23:43 1:03:40 0:54:44 0:08:56 smithi main centos 8.stream rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-stupid policy/none rbd-mirror/four-per-cluster supported-random-distro$/{centos_8} workloads/rbd-mirror-snapshot-stress-workunit-fast-diff} 2
Failure Reason:

"2023-08-17T16:12:41.951791+0000 mon.a (mon.0) 1515 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log