Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6977150 2022-08-17 15:35:32 2022-08-17 15:36:47 2022-08-17 19:15:27 3:38:40 3:29:41 0:08:59 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 deploy/ceph tasks/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi096 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5d6f4627169c73a4bc30d06309669d6160cb0710 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 6977151 2022-08-17 15:35:33 2022-08-17 15:38:07 2022-08-17 15:55:22 0:17:15 0:08:11 0:09:04 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 deploy/ceph tasks/rbd_api_tests} 1
Failure Reason:

Command failed on smithi071 with status 1: 'yes | sudo mkfs.xfs -f -i size=2048 -f /dev/vg_nvme/lv_1'

pass 6977152 2022-08-17 15:35:35 2022-08-17 15:40:38 2022-08-17 16:01:04 0:20:26 0:14:36 0:05:50 smithi main centos 8.stream crimson-rados/seastore/basic/{centos_latest clusters/fixed-1 deploy/ceph objectstore/seastore tasks/readwrite} 1
fail 6977153 2022-08-17 15:35:36 2022-08-17 15:40:38 2022-08-17 16:14:22 0:33:44 0:24:48 0:08:56 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 10 --op write_excl 50 --pool unique_pool_0'

fail 6977154 2022-08-17 15:35:37 2022-08-17 15:41:18 2022-08-17 19:19:13 3:37:55 3:29:54 0:08:01 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 deploy/ceph tasks/rados_python} 2
Failure Reason:

Command failed (workunit test rados/test_python.sh) on smithi038 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5d6f4627169c73a4bc30d06309669d6160cb0710 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh --eval-attr \'not (tier or snap or ec or bench or stats)\''

fail 6977155 2022-08-17 15:35:38 2022-08-17 15:41:59 2022-08-17 15:58:59 0:17:00 0:10:09 0:06:51 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 deploy/ceph tasks/rbd_cls_tests} 1
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi196 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5d6f4627169c73a4bc30d06309669d6160cb0710 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

pass 6977156 2022-08-17 15:35:39 2022-08-17 15:42:09 2022-08-17 16:04:29 0:22:20 0:13:53 0:08:27 smithi main centos 8.stream crimson-rados/seastore/basic/{centos_latest clusters/fixed-2 deploy/ceph objectstore/seastore tasks/readwrite} 2
pass 6977157 2022-08-17 15:35:40 2022-08-17 15:43:00 2022-08-17 16:07:14 0:24:14 0:13:52 0:10:22 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 deploy/ceph tasks/readwrite} 2
fail 6977158 2022-08-17 15:35:41 2022-08-17 15:46:20 2022-08-17 19:03:31 3:17:11 3:10:55 0:06:16 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 deploy/ceph tasks/rbd_lock_and_fence} 1
Failure Reason:

Command failed (workunit test rbd/test_lock_fence.sh) on smithi125 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5d6f4627169c73a4bc30d06309669d6160cb0710 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_lock_fence.sh'