Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7092626 2022-11-27 07:53:21 2022-11-27 07:54:48 2022-11-27 08:49:47 0:54:59 0:48:41 0:06:18 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/rados_api_tests} 2
fail 7092627 2022-11-27 07:53:22 2022-11-27 07:54:48 2022-11-27 11:18:13 3:23:25 3:14:35 0:08:50 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_api_tests_old_format} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi189 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=734b41620da4332cbe52b532996cfbdce2946e6b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'
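A note on the exit codes above: status 124 (seen here and in the rbd_lock_and_fence job below) is the exit code GNU coreutils `timeout` returns when the wrapped command hits its time limit — in these jobs, the `timeout 3h` wrapper around the workunit script — so these are hangs/timeouts rather than assertion failures. (By contrast, the test_librbd_python.sh job further down exited with status 1, an ordinary test failure.) A minimal sketch demonstrating the convention, using a short timeout in place of the 3-hour cap:

```shell
# `timeout` kills the command after the limit and exits with 124,
# distinguishing a timed-out run from the command's own failure codes.
timeout 0.1 sleep 1
echo $?   # prints 124
```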

pass 7092628 2022-11-27 07:53:23 2022-11-27 07:56:39 2022-11-27 08:23:44 0:27:05 0:19:45 0:07:20 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} 2
pass 7092629 2022-11-27 07:53:24 2022-11-27 07:56:49 2022-11-27 10:01:43 2:04:54 1:56:17 0:08:37 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} 2
pass 7092630 2022-11-27 07:53:25 2022-11-27 07:58:20 2022-11-27 08:23:31 0:25:11 0:18:31 0:06:40 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_cls_tests} 1
pass 7092631 2022-11-27 07:53:26 2022-11-27 07:58:40 2022-11-27 08:24:17 0:25:37 0:18:12 0:07:25 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/rados_python} 2
pass 7092632 2022-11-27 07:53:27 2022-11-27 07:59:11 2022-11-27 08:38:11 0:39:00 0:31:08 0:07:52 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench} 2
fail 7092633 2022-11-27 07:53:28 2022-11-27 08:01:11 2022-11-27 11:21:48 3:20:37 3:13:13 0:07:24 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_lock_and_fence} 1
Failure Reason:

Command failed (workunit test rbd/test_lock_fence.sh) on smithi174 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=734b41620da4332cbe52b532996cfbdce2946e6b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_lock_fence.sh'

pass 7092634 2022-11-27 07:53:29 2022-11-27 08:02:52 2022-11-27 08:38:43 0:35:51 0:24:12 0:11:39 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} 2
pass 7092635 2022-11-27 07:53:30 2022-11-27 08:07:23 2022-11-27 08:31:12 0:23:49 0:16:57 0:06:52 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/readwrite} 2
fail 7092636 2022-11-27 07:53:31 2022-11-27 08:07:23 2022-11-27 08:51:47 0:44:24 0:35:25 0:08:59 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests_old_format} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi105 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=734b41620da4332cbe52b532996cfbdce2946e6b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 7092637 2022-11-27 07:53:32 2022-11-27 08:08:54 2022-11-27 09:00:10 0:51:16 0:43:06 0:08:10 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/write_fadvise_dontneed} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --write-fadvise-dontneed --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 10 --op write_excl 50 --pool unique_pool_0'