Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7130242 2023-01-19 17:42:21 2023-01-19 19:10:33 2023-01-19 22:42:07 3:31:34 3:15:09 0:16:25 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi112 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4d6e0cf122cdf54f4483e232ac10b981e3b26d18 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7130243 2023-01-19 17:42:22 2023-01-19 19:16:54 2023-01-19 22:44:47 3:27:53 3:14:10 0:13:43 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_api_tests_old_format} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi170 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4d6e0cf122cdf54f4483e232ac10b981e3b26d18 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

pass 7130244 2023-01-19 17:42:23 2023-01-19 19:21:45 2023-01-19 19:50:34 0:28:49 0:18:39 0:10:10 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} 2
fail 7130245 2023-01-19 17:42:24 2023-01-19 19:23:05 2023-01-19 21:17:06 1:54:01 1:43:19 0:10:42 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} 2
Failure Reason:

reached maximum tries (500) after waiting for 3000 seconds

pass 7130246 2023-01-19 17:42:25 2023-01-19 19:23:46 2023-01-19 19:53:01 0:29:15 0:19:03 0:10:12 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_cls_tests} 1
pass 7130247 2023-01-19 17:42:26 2023-01-19 19:23:46 2023-01-19 20:03:01 0:39:15 0:26:59 0:12:16 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/rados_python} 2
fail 7130248 2023-01-19 17:42:27 2023-01-19 19:25:46 2023-01-19 21:17:18 1:51:32 1:40:34 0:10:58 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench} 2
Failure Reason:

reached maximum tries (500) after waiting for 3000 seconds

fail 7130249 2023-01-19 17:42:28 2023-01-19 19:27:47 2023-01-19 22:50:11 3:22:24 3:13:41 0:08:43 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_lock_and_fence} 1
Failure Reason:

Command failed (workunit test rbd/test_lock_fence.sh) on smithi053 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4d6e0cf122cdf54f4483e232ac10b981e3b26d18 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_lock_fence.sh'

fail 7130250 2023-01-19 17:42:29 2023-01-19 19:27:47 2023-01-19 20:21:16 0:53:29 0:43:29 0:10:00 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op snap_create 0 --op snap_remove 0 --op rollback 0 --op setattr 25 --op rmattr 25 --op copy_from 0 --op write_excl 50 --pool unique_pool_0'

pass 7130251 2023-01-19 17:42:30 2023-01-19 19:27:47 2023-01-19 19:55:06 0:27:19 0:17:03 0:10:16 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/readwrite} 2
fail 7130252 2023-01-19 17:42:31 2023-01-19 19:28:38 2023-01-19 22:52:00 3:23:22 3:13:18 0:10:04 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests_old_format} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi182 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4d6e0cf122cdf54f4483e232ac10b981e3b26d18 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

pass 7130253 2023-01-19 17:42:32 2023-01-19 19:28:48 2023-01-19 19:59:20 0:30:32 0:18:12 0:12:20 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/write_fadvise_dontneed} 2