Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 6886808 2022-06-19 14:39:13 2022-06-19 14:39:24 2022-06-19 15:10:57 0:31:33 0:24:37 0:06:56 smithi main ubuntu 20.04 rbd/thrash/{base/install clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} thrashers/cache thrashosds-health workloads/rbd_api_tests_no_locking} 2
fail 6886810 2022-06-19 14:39:14 2022-06-19 14:39:25 2022-06-19 18:16:14 3:36:49 3:29:41 0:07:08 smithi main ubuntu 20.04 rbd/cli_v1/{base/install clusters/{fixed-1 openstack} features/format-1 msgr-failures/few objectstore/bluestore-comp-zlib pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_generic} 1
Failure Reason:

Command failed (workunit test rbd/cli_generic.sh) on smithi156 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb0eac1a195f1d8e9e3c472c7b1ca1e9add581c2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'
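Exit status 124 here comes from the timeout(1) wrapper visible in the command line, not from cli_generic.sh itself: the workunit ran past its 3-hour budget and was killed. A minimal sketch of that convention (plain GNU coreutils behavior, independent of this run):

    # timeout(1) exits with status 124 when the wrapped command exceeds
    # its limit, so 124 means "hit the 3h cap", not "crashed".
    timeout 1 sleep 5
    echo $?   # prints 124

The same status-124 signature applies to the other rbd_cli_generic and qemu_dynamic_features failures below.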

fail 6886812 2022-06-19 14:39:15 2022-06-19 14:39:26 2022-06-19 18:24:29 3:45:03 3:38:35 0:06:28 smithi main rhel 8.4 rbd/cli_v1/{base/install clusters/{fixed-1 openstack} features/format-1 msgr-failures/few objectstore/bluestore-hybrid pool/small-cache-pool supported-random-distro$/{rhel_8} workloads/rbd_cli_generic} 1
Failure Reason:

Command failed (workunit test rbd/cli_generic.sh) on smithi145 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb0eac1a195f1d8e9e3c472c7b1ca1e9add581c2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'

fail 6886813 2022-06-19 14:39:17 2022-06-19 14:39:26 2022-06-19 18:13:02 3:33:36 3:26:54 0:06:42 smithi main ubuntu 20.04 rbd/cli_v1/{base/install clusters/{fixed-1 openstack} features/format-1 msgr-failures/few objectstore/bluestore-stupid pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_generic} 1
Failure Reason:

Command failed (workunit test rbd/cli_generic.sh) on smithi191 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb0eac1a195f1d8e9e3c472c7b1ca1e9add581c2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'

pass 6886814 2022-06-19 14:39:18 2022-06-19 14:39:27 2022-06-19 16:23:32 1:44:05 1:37:14 0:06:51 smithi main centos 8.stream rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-stupid 4-supported-random-distro$/{centos_8} 5-pool/replicated-data-pool 6-prepare/qcow2-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} 3
fail 6886815 2022-06-19 14:39:19 2022-06-19 14:39:27 2022-06-19 18:00:35 3:21:08 3:13:43 0:07:25 smithi main ubuntu 20.04 rbd/maintenance/{base/install clusters/{fixed-3 openstack} objectstore/bluestore-hybrid qemu/xfstests supported-random-distro$/{ubuntu_latest} workloads/dynamic_features_no_cache} 3
Failure Reason:

Command failed (workunit test rbd/qemu_dynamic_features.sh) on smithi138 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb0eac1a195f1d8e9e3c472c7b1ca1e9add581c2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 IMAGE_NAME=client.0.1-clone adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/qemu_dynamic_features.sh'

fail 6886816 2022-06-19 14:39:20 2022-06-19 14:39:27 2022-06-19 15:11:28 0:32:01 0:24:30 0:07:31 smithi main ubuntu 20.04 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests_with_defaults} 3
Failure Reason:

"2022-06-19T15:01:40.863960+0000 mon.a (mon.0) 744 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log