User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
sjust | 2022-12-01 22:28:44 | 2022-12-03 13:10:06 | 2022-12-03 16:33:40 | 3:23:34 | crimson-rados | wip-sjust-testing-2022-11-30-141821 | smithi | 4a79121 | 5 | 7 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7100474 | 2022-12-01 22:28:57 | 2022-12-03 13:10:06 | 2022-12-03 14:03:14 | 0:53:08 | 0:45:43 | 0:07:25 | smithi | main | centos | 8.stream | crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/rados_api_tests} | 2 |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi159 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4a79121cfd1ee1750ffed71ede253747b30e6436 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 7100475 | 2022-12-01 22:28:58 | 2022-12-03 13:10:07 | 2022-12-03 16:31:56 | 3:21:49 | 3:15:09 | 0:06:40 | smithi | main | centos | 8.stream | crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_api_tests_old_format} | 1 |
Failure Reason: Command failed (workunit test rbd/test_librbd.sh) on smithi072 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4a79121cfd1ee1750ffed71ede253747b30e6436 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'
pass | 7100476 | 2022-12-01 22:28:59 | 2022-12-03 13:10:07 | 2022-12-03 14:03:55 | 0:53:48 | 0:46:55 | 0:06:53 | smithi | main | centos | 8.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} | 2 |
pass | 7100477 | 2022-12-01 22:29:00 | 2022-12-03 13:10:37 | 2022-12-03 15:42:39 | 2:32:02 | 2:23:59 | 0:08:03 | smithi | main | centos | 8.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} | 2 |
pass | 7100478 | 2022-12-01 22:29:02 | 2022-12-03 13:11:38 | 2022-12-03 13:36:28 | 0:24:50 | 0:18:25 | 0:06:25 | smithi | main | centos | 8.stream | crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_cls_tests} | 1 |
pass | 7100479 | 2022-12-01 22:29:03 | 2022-12-03 13:11:38 | 2022-12-03 13:37:32 | 0:25:54 | 0:18:24 | 0:07:30 | smithi | main | centos | 8.stream | crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/rados_python} | 2 |
fail | 7100480 | 2022-12-01 22:29:04 | 2022-12-03 13:11:49 | 2022-12-03 14:24:53 | 1:13:04 | 1:04:35 | 0:08:29 | smithi | main | centos | 8.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench} | 2 |
Failure Reason: reached maximum tries (500) after waiting for 3000 seconds
fail | 7100481 | 2022-12-01 22:29:05 | 2022-12-03 13:13:59 | 2022-12-03 16:33:40 | 3:19:41 | 3:13:40 | 0:06:01 | smithi | main | centos | 8.stream | crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_lock_and_fence} | 1 |
Failure Reason: Command failed (workunit test rbd/test_lock_fence.sh) on smithi132 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4a79121cfd1ee1750ffed71ede253747b30e6436 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_lock_fence.sh'
fail | 7100482 | 2022-12-01 22:29:07 | 2022-12-03 13:14:00 | 2022-12-03 14:09:18 | 0:55:18 | 0:43:51 | 0:11:27 | smithi | main | centos | 8.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} | 2 |
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op snap_create 0 --op snap_remove 0 --op rollback 0 --op setattr 25 --op rmattr 25 --op copy_from 0 --op write_excl 50 --pool unique_pool_0'
pass | 7100483 | 2022-12-01 22:29:08 | 2022-12-03 13:18:00 | 2022-12-03 13:42:00 | 0:24:00 | 0:16:54 | 0:07:06 | smithi | main | centos | 8.stream | crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/readwrite} | 2 |
fail | 7100484 | 2022-12-01 22:29:09 | 2022-12-03 13:18:31 | 2022-12-03 14:01:38 | 0:43:07 | 0:35:24 | 0:07:43 | smithi | main | centos | 8.stream | crimson-rados/rbd/{centos_latest clusters/fixed-1 crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests_old_format} | 1 |
Failure Reason: Command failed (workunit test rbd/test_librbd_python.sh) on smithi138 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4a79121cfd1ee1750ffed71ede253747b30e6436 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'
fail | 7100485 | 2022-12-01 22:29:10 | 2022-12-03 13:18:31 | 2022-12-03 14:10:48 | 0:52:17 | 0:43:45 | 0:08:32 | smithi | main | centos | 8.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/write_fadvise_dontneed} | 2 |
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --write-fadvise-dontneed --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 10 --op write_excl 50 --pool unique_pool_0'