Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7328079 2023-07-06 13:20:02 2023-07-06 13:20:46 2023-07-06 15:09:14 1:48:28 1:37:22 0:11:06 smithi main ubuntu 20.04 rbd/maintenance/{base/install clusters/{fixed-3 openstack} objectstore/bluestore-bitmap qemu/xfstests supported-random-distro$/{ubuntu_20.04} workloads/rebuild_object_map} 3
dead 7328080 2023-07-06 13:20:03 2023-07-06 13:20:46 2023-07-07 01:35:18 12:14:32 smithi main centos 8.stream rbd/cli/{base/install clusters/{fixed-1 openstack} features/layering msgr-failures/few objectstore/bluestore-comp-zstd pool/small-cache-pool supported-random-distro$/{centos_8} workloads/rbd_cli_migration} 1
Failure Reason:

hit max job timeout

fail 7328081 2023-07-06 13:20:04 2023-07-06 13:20:46 2023-07-06 16:51:47 3:31:01 3:18:00 0:13:01 smithi main centos 8.stream rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-bitmap supported-random-distro$/{centos_8} thrashers/cache thrashosds-health workloads/rbd_nbd_diff_continuous} 3
Failure Reason:

Command failed (workunit test rbd/diff_continuous.sh) on smithi173 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b75c542210c04254e2c94e13c158a5b74292e5f0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_DEVICE_TYPE=nbd adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/diff_continuous.sh'

fail 7328082 2023-07-06 13:20:05 2023-07-06 13:20:47 2023-07-06 14:14:46 0:53:59 0:38:46 0:15:13 smithi main centos 8.stream rbd/iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} workloads/cephadm_iscsi} 3
Failure Reason:

Command failed on smithi008 with status 1: 'CEPH_REF=master CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram -v -- /home/ubuntu/cephtest/archive/cram.client.0/*.t'

pass 7328083 2023-07-06 13:20:05 2023-07-06 13:20:47 2023-07-06 15:16:49 1:56:02 1:47:53 0:08:09 smithi main rhel 8.6 rbd/maintenance/{base/install clusters/{fixed-3 openstack} objectstore/bluestore-bitmap qemu/xfstests supported-random-distro$/{rhel_8} workloads/dynamic_features_no_cache} 3
fail 7328084 2023-07-06 13:20:06 2023-07-06 13:20:48 2023-07-06 14:38:48 1:18:00 1:10:19 0:07:41 smithi main rhel 8.6 rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{rhel_8} 4-cache-path 5-cache-mode/ssd 6-cache-size/5G 7-workloads/qemu_xfstests} 2
Failure Reason:

Command failed on smithi111 with status 1: 'test -f /home/ubuntu/cephtest/archive/qemu/client.0/success'

fail 7328085 2023-07-06 13:20:07 2023-07-06 13:20:48 2023-07-06 14:41:00 1:20:12 1:08:24 0:11:48 smithi main centos 8.stream rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-stupid policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{centos_8} workloads/rbd-mirror-snapshot-stress-workunit-exclusive-lock} 2
Failure Reason:

Command failed on smithi192 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster cluster2 pg dump --format=json'