Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7078455 2022-10-23 09:53:08 2022-10-23 09:59:39 2022-10-23 13:41:43 3:42:04 3:34:29 0:07:35 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 deploy/ceph tasks/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi120 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6fb859258928a5d76fd5f6b6802b10593037f421 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
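
Note: the status-124 failures in this run (jobs 7078455, 7078456, 7078461 and 7078464) are timeouts rather than test assertions. Each workunit is wrapped in `timeout 3h`, and GNU coreutils `timeout` exits with status 124 when it kills a command for exceeding its limit. A minimal sketch of the mechanism (the script path here is illustrative, not taken from the run logs):

# timeout(1) returns 124 when it has to kill the wrapped command,
# so "status 124" above means the workunit ran past the 3-hour cap.
timeout 3h ./qa/workunits/rados/test.sh
echo "exit status: $?"   # prints 124 after a timeout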

fail 7078456 2022-10-23 09:53:09 2022-10-23 10:00:19 2022-10-23 13:20:59 3:20:40 3:14:48 0:05:52 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 deploy/ceph tasks/rbd_api_tests_old_format} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi036 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6fb859258928a5d76fd5f6b6802b10593037f421 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

pass 7078457 2022-10-23 09:53:11 2022-10-23 10:00:20 2022-10-23 10:25:46 0:25:26 0:17:35 0:07:51 smithi main centos 8.stream crimson-rados/seastore/basic/{centos_latest clusters/fixed-1 deploy/ceph objectstore/seastore tasks/readwrite} 1
fail 7078458 2022-10-23 09:53:13 2022-10-23 10:01:00 2022-10-23 10:22:36 0:21:36 0:13:49 0:07:47 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 10 --op write_excl 50 --pool unique_pool_0'
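
Unlike the status-124 timeouts, job 7078458 reports "Command crashed", i.e. the ceph_test_rados process terminated abnormally during the thrashing workload instead of exiting with a failure status. The op mix it was running is visible in the command line above; a hedged sketch of an equivalent standalone invocation (assumes a running cluster and a pre-created pool named unique_pool_0, and drops the teuthology coverage wrapper):

# Sketch: replay the same small-objects workload by hand against a scratch pool.
CEPH_CLIENT_ID=0 ceph_test_rados \
    --max-ops 400000 --objects 1024 --max-in-flight 64 \
    --size 4000000 --min-stride-size 400000 --max-stride-size 800000 \
    --max-seconds 600 \
    --op read 100 --op write 50 --op delete 10 --op write_excl 50 \
    --pool unique_pool_0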

pass 7078459 2022-10-23 09:53:14 2022-10-23 10:01:01 2022-10-23 10:26:31 0:25:30 0:17:54 0:07:36 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 deploy/ceph tasks/rbd_cls_tests} 1
pass 7078460 2022-10-23 09:53:16 2022-10-23 10:01:11 2022-10-23 10:26:50 0:25:39 0:17:42 0:07:57 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 deploy/ceph tasks/rados_python} 2
fail 7078461 2022-10-23 09:53:17 2022-10-23 10:01:42 2022-10-23 13:21:51 3:20:09 3:12:42 0:07:27 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 deploy/ceph tasks/rbd_lock_and_fence} 1
Failure Reason:

Command failed (workunit test rbd/test_lock_fence.sh) on smithi088 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6fb859258928a5d76fd5f6b6802b10593037f421 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_lock_fence.sh'

pass 7078462 2022-10-23 09:53:18 2022-10-23 10:02:02 2022-10-23 10:27:02 0:25:00 0:16:58 0:08:02 smithi main centos 8.stream crimson-rados/seastore/basic/{centos_latest clusters/fixed-2 deploy/ceph objectstore/seastore tasks/readwrite} 2
pass 7078463 2022-10-23 09:53:20 2022-10-23 10:03:03 2022-10-23 10:26:39 0:23:36 0:16:32 0:07:04 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 deploy/ceph tasks/readwrite} 2
fail 7078464 2022-10-23 09:53:22 2022-10-23 10:03:33 2022-10-23 13:25:44 3:22:11 3:13:39 0:08:32 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 deploy/ceph tasks/rbd_python_api_tests_old_format} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi133 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6fb859258928a5d76fd5f6b6802b10593037f421 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'