Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7092654 2022-11-27 15:08:09 2022-11-27 15:20:36 2022-11-27 15:49:31 0:28:55 0:21:57 0:06:58 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/rados_api_tests} 2
fail 7092655 2022-11-27 15:08:11 2022-11-27 15:20:36 2022-11-27 18:43:48 3:23:12 3:15:14 0:07:58 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_api_tests_old_format} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi039 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2d33a8c43477fc105a04fd2008dd063ba929ed14 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'
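(Exit status 124 in failures like this one is produced by the GNU coreutils `timeout` wrapper visible in the command (`timeout 3h …`): it means the workunit was killed for exceeding the 3-hour cap rather than failing on its own. A quick demonstration of that convention:)

```shell
# GNU coreutils `timeout` exits with status 124 when it kills
# the wrapped command for running past the time limit.
timeout 1 sleep 5
echo $?   # prints 124
```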

pass 7092656 2022-11-27 15:08:12 2022-11-27 15:20:36 2022-11-27 16:14:50 0:54:14 0:47:09 0:07:05 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} 2
pass 7092657 2022-11-27 15:08:13 2022-11-27 15:20:37 2022-11-27 17:54:30 2:33:53 2:24:54 0:08:59 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} 2
pass 7092658 2022-11-27 15:08:14 2022-11-27 15:21:37 2022-11-27 15:48:49 0:27:12 0:18:25 0:08:47 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_cls_tests} 1
pass 7092659 2022-11-27 15:08:15 2022-11-27 15:21:37 2022-11-27 15:50:37 0:29:00 0:18:09 0:10:51 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/rados_python} 2
pass 7092660 2022-11-27 15:08:16 2022-11-27 15:23:28 2022-11-27 17:03:33 1:40:05 1:26:27 0:13:38 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench} 2
fail 7092661 2022-11-27 15:08:17 2022-11-27 15:24:08 2022-11-27 18:44:42 3:20:34 3:14:09 0:06:25 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_lock_and_fence} 1
Failure Reason:

Command failed (workunit test rbd/test_lock_fence.sh) on smithi003 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2d33a8c43477fc105a04fd2008dd063ba929ed14 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_lock_fence.sh'

pass 7092662 2022-11-27 15:08:18 2022-11-27 15:24:08 2022-11-27 16:03:56 0:39:48 0:24:09 0:15:39 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} 2
pass 7092663 2022-11-27 15:08:19 2022-11-27 15:25:08 2022-11-27 15:54:54 0:29:46 0:17:09 0:12:37 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/readwrite} 2
fail 7092664 2022-11-27 15:08:20 2022-11-27 15:30:09 2022-11-27 16:13:53 0:43:44 0:36:01 0:07:43 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests_old_format} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi018 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2d33a8c43477fc105a04fd2008dd063ba929ed14 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 7092665 2022-11-27 15:08:21 2022-11-27 15:31:29 2022-11-27 16:27:32 0:56:03 0:43:25 0:12:38 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/write_fadvise_dontneed} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --write-fadvise-dontneed --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 10 --op write_excl 50 --pool unique_pool_0'