Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7283246 2023-05-23 06:48:50 2023-05-23 06:49:58 2023-05-23 10:23:10 3:33:12 3:21:55 0:11:17 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi002 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=031fa64016bfbb21467aeca1fc33541e74a27742 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
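
Note: exit status 124 is the code GNU coreutils `timeout` returns when the command it supervises outlives its limit (here, `timeout 3h`), so this run timed out rather than failing an assertion. A minimal reproduction of the status code:

    # GNU timeout kills the supervised command at the limit and exits 124
    timeout 2 sleep 5
    echo $?    # prints 124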

fail 7283247 2023-05-23 06:48:51 2023-05-23 06:49:58 2023-05-23 10:30:26 3:40:28 3:21:42 0:18:46 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_api_tests_old_format} 1
Failure Reason:

Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi183 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=031fa64016bfbb21467aeca1fc33541e74a27742 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh'

fail 7283248 2023-05-23 06:48:51 2023-05-23 06:49:59 2023-05-23 09:38:49 2:48:50 2:29:07 0:19:43 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} 2
Failure Reason:

reached maximum tries (800) after waiting for 4800 seconds
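
Note: teuthology polls cluster state at a fixed interval and gives up after a bounded number of attempts; 800 tries over 4800 seconds implies a 6-second interval. A minimal sketch of an equivalent bounded poll (illustrative only, not teuthology's actual code; assumes a reachable cluster):

    tries=0
    until ceph --cluster ceph health | grep -q HEALTH_OK; do
        tries=$((tries + 1))
        if [ "$tries" -ge 800 ]; then
            echo "reached maximum tries (800) after waiting for 4800 seconds" >&2
            exit 1
        fi
        sleep 6    # 800 tries * 6 s = 4800 s
    done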

fail 7283249 2023-05-23 06:48:52 2023-05-23 06:49:59 2023-05-23 09:13:02 2:23:03 2:03:14 0:19:49 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} 2
Failure Reason:

reached maximum tries (500) after waiting for 3000 seconds

fail 7283250 2023-05-23 06:48:52 2023-05-23 06:50:00 2023-05-23 07:24:52 0:34:52 0:14:07 0:20:45 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_cls_tests} 1
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi155 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=031fa64016bfbb21467aeca1fc33541e74a27742 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'
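
Note: unlike the status-124 timeouts above, status 139 means the test binary was killed by a signal: shells report a signal death as 128 + signal number, and 139 - 128 = 11, i.e. SIGSEGV, so test_cls_rbd.sh segfaulted. A quick demonstration of the encoding:

    # a child killed by SIGSEGV (signal 11) is reported as 128 + 11 = 139
    sh -c 'kill -SEGV $$'
    echo $?    # prints 139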

fail 7283251 2023-05-23 06:48:53 2023-05-23 06:50:00 2023-05-23 08:20:57 1:30:57 1:13:29 0:17:28 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench} 2
Failure Reason:

Command failed on smithi026 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
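
Note: the harness wraps each ceph CLI call in a 120-second `timeout` watchdog, so status 124 here means the `ceph pg dump` query never returned within the limit (typically because the mon/mgr stopped answering), not that the command itself errored. The same guarded call can be reproduced against a live cluster:

    # status 124 => the query hung for the full 120 s; any other nonzero
    # status would be an actual error from the ceph CLI itself
    timeout 120 ceph --cluster ceph pg dump --format=json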

fail 7283252 2023-05-23 06:48:54 2023-05-23 06:50:00 2023-05-23 08:33:57 1:43:57 1:24:07 0:19:50 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/rados_python} 2
Failure Reason:

Command failed (workunit test rados/test_python.sh) on smithi060 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=031fa64016bfbb21467aeca1fc33541e74a27742 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh --eval-attr \'not (wait or tier or ec or bench or stats)\''

fail 7283253 2023-05-23 06:48:54 2023-05-23 06:50:01 2023-05-23 08:21:01 1:31:00 1:10:55 0:20:05 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-balanced} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --balance-reads --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 0 --op rollback 0 --op setattr 25 --op rmattr 25 --op copy_from 0 --op write_excl 50 --pool unique_pool_0'
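
Note: "Command crashed" is teuthology's label for a process that died without reporting an exit status, typically because it was killed by a signal rather than exiting with an error code. ceph_test_rados drives a randomized object workload in which each `--op <name> <weight>` flag weights that operation in the mix; this job layers snapshot and attribute ops on top of reads and writes. A trimmed-down hedged re-run, reusing flags from the log (assumes ceph_test_rados on PATH and a reachable cluster; the pool name is illustrative):

    ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 \
        --size 4000000 --op read 100 --op write 50 --op delete 10 \
        --op write_excl 50 --pool unique_pool_0
    rc=$?
    # an exit status above 128 means death by signal (rc - 128),
    # which teuthology surfaces as "Command crashed"
    [ "$rc" -gt 128 ] && echo "killed by signal $((rc - 128))" >&2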

fail 7283254 2023-05-23 06:48:55 2023-05-23 06:50:01 2023-05-23 07:30:45 0:40:44 0:21:11 0:19:33 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_lock_and_fence} 1
Failure Reason:

Command failed on smithi189 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

fail 7283255 2023-05-23 06:48:55 2023-05-23 06:50:02 2023-05-23 08:20:07 1:30:05 1:09:37 0:20:28 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-localized} 2
Failure Reason:

Command failed on smithi038 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph status --format=json'

fail 7283256 2023-05-23 06:48:56 2023-05-23 06:50:02 2023-05-23 07:26:09 0:36:07 0:15:53 0:20:14 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/readwrite} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 45 --op write 22 --op delete 10 --op write_excl 22 --pool unique_pool_0'

fail 7283257 2023-05-23 06:48:57 2023-05-23 06:50:02 2023-05-23 08:17:56 1:27:54 1:09:12 0:18:42 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} 2
Failure Reason:

Command failed on smithi071 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph status --format=json'

fail 7283258 2023-05-23 06:48:57 2023-05-23 06:50:03 2023-05-23 10:24:26 3:34:23 3:14:36 0:19:47 smithi main centos 8.stream crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests_old_format} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi192 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=031fa64016bfbb21467aeca1fc33541e74a27742 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 7283259 2023-05-23 06:48:58 2023-05-23 06:50:03 2023-05-23 07:23:39 0:33:36 0:13:25 0:20:11 smithi main centos 8.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/write_fadvise_dontneed} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --write-fadvise-dontneed --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 10 --op write_excl 50 --pool unique_pool_0'