Teuthology run results (6 jobs; all on machine type smithi, teuthology branch main):

Job 7374027: dead (1 node)
  Posted: 2023-08-21 14:20:05   Started: 2023-08-21 14:20:49   Updated: 2023-08-22 02:34:20
  Runtime: 12:13:31
  OS: ubuntu 22.04
  Description: rbd/cli/{base/install clusters/{fixed-1 openstack} features/layering msgr-failures/few objectstore/bluestore-comp-zstd pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_migration}
  Failure Reason: hit max job timeout

Job 7374028: fail (3 nodes)
  Posted: 2023-08-21 14:20:06   Started: 2023-08-21 14:20:50   Updated: 2023-08-21 17:51:21
  Runtime: 3:30:31   Duration: 3:18:08   In Waiting: 0:12:23
  OS: centos 8.stream
  Description: rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-bitmap supported-random-distro$/{centos_8} thrashers/cache thrashosds-health workloads/rbd_nbd_diff_continuous}
  Failure Reason: Command failed (workunit test rbd/diff_continuous.sh) on smithi158 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=98da3bc80b7f94a387fd066563cc3beb2e965ebe TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_DEVICE_TYPE=nbd adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/diff_continuous.sh'

Job 7374029: pass (2 nodes)
  Posted: 2023-08-21 14:20:07   Started: 2023-08-21 14:20:50   Updated: 2023-08-21 14:55:49
  Runtime: 0:34:59   Duration: 0:24:48   In Waiting: 0:10:11
  OS: centos 9.stream
  Description: rbd/mirror/{base/install clients/{mirror-extra mirror} cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-stupid supported-random-distro$/{centos_latest} workloads/rbd-mirror-workunit-policy-none}

Job 7374030: pass (2 nodes)
  Posted: 2023-08-21 14:20:08   Started: 2023-08-21 14:20:50   Updated: 2023-08-21 15:56:40
  Runtime: 1:35:50   Duration: 1:26:22   In Waiting: 0:09:28
  OS: ubuntu 22.04
  Description: rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{ubuntu_latest} 4-cache-path 5-cache-mode/ssd 6-cache-size/5G 7-workloads/qemu_xfstests}

Job 7374031: fail (3 nodes)
  Posted: 2023-08-21 14:20:08   Started: 2023-08-21 14:20:51   Updated: 2023-08-21 15:29:02
  Runtime: 1:08:11   Duration: 0:54:55   In Waiting: 0:13:16
  OS: ubuntu 20.04
  Description: rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-low-osd-mem-target supported-random-distro$/{ubuntu_20.04} thrashers/cache thrashosds-health workloads/rbd_nbd_diff_continuous}
  Failure Reason: Command failed on smithi093 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

Job 7374032: fail (2 nodes)
  Posted: 2023-08-21 14:20:09   Started: 2023-08-21 14:20:51   Updated: 2023-08-22 00:40:57
  Runtime: 10:20:06   Duration: 10:11:58   In Waiting: 0:08:08
  OS: rhel 8.6
  Description: rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-stupid policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{rhel_8} workloads/rbd-mirror-snapshot-stress-workunit-exclusive-lock}
  Failure Reason: Command failed (workunit test rbd/rbd_mirror_stress.sh) on smithi119 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=98da3bc80b7f94a387fd066563cc3beb2e965ebe TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.cluster1.client.mirror CEPH_ROOT=/home/ubuntu/cephtest/clone.cluster1.client.mirror CEPH_MNT=/home/ubuntu/cephtest/mnt.cluster1.mirror CEPH_ARGS=\'\' MIRROR_IMAGE_MODE=snapshot MIRROR_POOL_MODE=image RBD_IMAGE_FEATURES=layering,exclusive-lock RBD_MIRROR_INSTANCES=4 RBD_MIRROR_USE_EXISTING_CLUSTER=1 RBD_MIRROR_USE_RBD_MIRROR=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.cluster1.client.mirror/qa/workunits/rbd/rbd_mirror_stress.sh'
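
Note on the failures above: each failing command is wrapped in GNU coreutils `timeout` (e.g. `timeout 3h ... diff_continuous.sh`, `timeout 120 ceph ... pg dump`), and exit status 124 is what `timeout` returns when the wrapped command is still running at the limit and gets killed, so these are workload timeouts rather than command errors. A minimal reproduction:

```shell
# GNU coreutils `timeout` runs a command with a time limit; if the
# command is still running when the limit expires, it is killed and
# `timeout` itself exits with status 124.
timeout 1 sleep 5
echo $?   # prints 124
```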