Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
dead 7348020 2023-07-22 15:03:53 2023-07-22 15:04:36 2023-07-23 03:17:31 12:12:55 smithi main rhel 8.6 rbd/cli/{base/install clusters/{fixed-1 openstack} features/layering msgr-failures/few objectstore/bluestore-comp-zstd pool/small-cache-pool supported-random-distro$/{rhel_8} workloads/rbd_cli_migration} 1
Failure Reason:

hit max job timeout

pass 7348021 2023-07-22 15:03:53 2023-07-22 15:04:37 2023-07-22 15:49:26 0:44:49 0:34:20 0:10:29 smithi main centos 9.stream rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-bitmap policy/none rbd-mirror/four-per-cluster supported-random-distro$/{centos_latest} workloads/rbd-mirror-journal-workunit} 2
fail 7348022 2023-07-22 15:03:54 2023-07-22 15:04:37 2023-07-22 18:46:51 3:42:14 3:21:38 0:20:36 smithi main ubuntu 22.04 rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} thrashers/cache thrashosds-health workloads/rbd_nbd_diff_continuous} 3
Failure Reason:

Command failed (workunit test rbd/diff_continuous.sh) on smithi158 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1bf364b918a7ab4708130a64bf96639942959f6d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_DEVICE_TYPE=nbd adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/diff_continuous.sh'
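The exit status 124 above comes from the `timeout 3h` wrapper rather than from the workunit itself: GNU coreutils `timeout` exits with 124 when it kills the wrapped command for exceeding its time limit, so this job indicates the test ran past its 3-hour budget instead of failing on its own. A minimal sketch of that behavior:

```shell
# GNU coreutils `timeout` kills the wrapped command after the given limit
# and reports exit status 124 -- the same status seen in the failure above.
timeout 1 sleep 5
echo $?   # prints 124
```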

pass 7348023 2023-07-22 15:03:55 2023-07-22 15:04:37 2023-07-22 15:39:36 0:34:59 0:21:12 0:13:47 smithi main centos 9.stream rbd/thrash/{base/install clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-stupid supported-random-distro$/{centos_latest} thrashers/cache thrashosds-health workloads/rbd_fsx_deep_copy} 2
pass 7348024 2023-07-22 15:03:56 2023-07-22 15:04:38 2023-07-22 15:56:20 0:51:42 0:37:55 0:13:47 smithi main centos 8.stream rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zlib pool/ec-data-pool supported-random-distro$/{centos_8} workloads/rbd_fio} 3
pass 7348025 2023-07-22 15:03:57 2023-07-22 15:04:38 2023-07-22 15:41:57 0:37:19 0:27:24 0:09:55 smithi main rhel 8.6 rbd/singleton-bluestore/{all/issue-20295 objectstore/bluestore-bitmap openstack supported-random-distro$/{rhel_8}} 4
pass 7348026 2023-07-22 15:03:58 2023-07-22 15:04:39 2023-07-22 16:24:46 1:20:07 1:07:22 0:12:45 smithi main centos 9.stream rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{centos_latest} 4-cache-path 5-cache-mode/ssd 6-cache-size/5G 7-workloads/qemu_xfstests} 2
fail 7348027 2023-07-22 15:03:59 2023-07-22 15:04:39 2023-07-22 15:26:02 0:21:23 0:10:28 0:10:55 smithi main centos 9.stream rbd/singleton/{all/qemu-iotests-writeback objectstore/bluestore-comp-zstd openstack supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test rbd/qemu-iotests.sh) on smithi060 with status 13: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1bf364b918a7ab4708130a64bf96639942959f6d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/qemu-iotests.sh'

dead 7348028 2023-07-22 15:04:00 2023-07-22 15:04:40 2023-07-23 03:14:11 12:09:31 smithi main centos 8.stream rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-stupid policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{centos_8} workloads/rbd-mirror-snapshot-stress-workunit-exclusive-lock} 2
Failure Reason:

hit max job timeout