Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6987773 2022-08-23 12:24:32 2022-08-23 12:25:33 2022-08-23 16:02:55 3:37:22 3:30:23 0:06:59 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 deploy/ceph tasks/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi104 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=709e714fbee112d5f1c71cab40048ffbc7456916 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
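Status 124 here (and in the other 124 failures below) is not an assertion failure in the workunit itself: it is the exit code GNU coreutils `timeout` returns when it kills the wrapped command after the limit (`timeout 3h … rados/test.sh` in this job), i.e. the test hung or overran. A minimal sketch of that behavior:

```shell
# GNU coreutils `timeout` kills the command when the limit expires
# and then exits with status 124, which the harness reports verbatim.
timeout 1 sleep 2
echo "exit status: $?"   # prints "exit status: 124"
```

So a 124 from a teuthology workunit points at a hang or slowdown inside the test, as opposed to status 1, which is an ordinary command failure.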

fail 6987774 2022-08-23 12:24:33 2022-08-23 12:25:34 2022-08-23 12:40:42 0:15:08 0:08:33 0:06:35 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 deploy/ceph tasks/rbd_api_tests} 1
Failure Reason:

Command failed on smithi087 with status 1: 'yes | sudo mkfs.xfs -f -i size=2048 -f /dev/vg_nvme/lv_1'

pass 6987775 2022-08-23 12:24:35 2022-08-23 12:25:34 2022-08-23 12:46:29 0:20:55 0:14:44 0:06:11 smithi main centos 8.stream crimson-rados/seastore/basic/{centos_latest clusters/fixed-1 deploy/ceph objectstore/seastore tasks/readwrite} 1
fail 6987776 2022-08-23 12:24:36 2022-08-23 12:25:35 2022-08-23 12:54:55 0:29:20 0:22:28 0:06:52 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 10 --op write_excl 50 --pool unique_pool_0'

fail 6987777 2022-08-23 12:24:38 2022-08-23 12:25:35 2022-08-23 14:02:53 1:37:18 1:30:40 0:06:38 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 deploy/ceph tasks/rados_python} 2
Failure Reason:

Command failed (workunit test rados/test_python.sh) on smithi062 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=709e714fbee112d5f1c71cab40048ffbc7456916 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh --eval-attr \'not (wait or tier or snap or ec or bench or stats)\''

fail 6987778 2022-08-23 12:24:39 2022-08-23 12:25:36 2022-08-23 12:47:04 0:21:28 0:13:26 0:08:02 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 deploy/ceph tasks/rbd_cls_tests} 1
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi133 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=709e714fbee112d5f1c71cab40048ffbc7456916 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

pass 6987779 2022-08-23 12:24:41 2022-08-23 12:26:06 2022-08-23 12:47:01 0:20:55 0:14:53 0:06:02 smithi main centos 8.stream crimson-rados/seastore/basic/{centos_latest clusters/fixed-2 deploy/ceph objectstore/seastore tasks/readwrite} 2
pass 6987780 2022-08-23 12:24:43 2022-08-23 12:26:06 2022-08-23 12:47:20 0:21:14 0:13:05 0:08:09 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 deploy/ceph tasks/readwrite} 2
fail 6987781 2022-08-23 12:24:45 2022-08-23 12:27:47 2022-08-23 15:44:32 3:16:45 3:10:22 0:06:23 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 deploy/ceph tasks/rbd_lock_and_fence} 1
Failure Reason:

Command failed (workunit test rbd/test_lock_fence.sh) on smithi050 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=709e714fbee112d5f1c71cab40048ffbc7456916 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_lock_fence.sh'