User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-07-23 14:01:03 | 2023-07-23 14:02:03 | 2023-07-24 02:27:14 | 12:25:11 | powercycle | reef | smithi | c2f1083 | 11 | 4 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 7348177 | 2023-07-23 14:01:09 | 2023-07-23 14:01:57 | 2023-07-24 02:27:14 | 12:25:17 | | | smithi | main | centos | 8.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/centos_8 tasks/snaps-many-objects thrashosds-health} | 4 |
Failure Reason: hit max job timeout
fail | 7348179 | 2023-07-23 14:01:10 | 2023-07-23 14:01:58 | 2023-07-23 15:25:45 | 1:23:47 | 1:04:05 | 0:19:42 | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-all-distro/rhel_8 tasks/admin_socket_objecter_requests thrashosds-health} | 4 |
Failure Reason: reached maximum tries (351) after waiting for 2100 seconds
fail | 7348181 | 2023-07-23 14:01:10 | 2023-07-23 14:01:59 | 2023-07-23 14:48:57 | 0:46:58 | 0:25:38 | 0:21:20 | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_kernel_untar_build thrashosds-health} | 4 |
Failure Reason: Command failed (workunit test kernel_untar_build.sh) on smithi117 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c2f10834dee12ea9c20f072e7ad4b3d7cb4d5f63 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'
fail | 7348183 | 2023-07-23 14:01:11 | 2023-07-23 14:02:00 | 2023-07-23 14:53:09 | 0:51:09 | 0:29:07 | 0:22:02 | smithi | main | centos | 8.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_misc thrashosds-health} | 4 |
Failure Reason: Command failed (workunit test fs/misc/dirfrag.sh) on smithi052 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c2f10834dee12ea9c20f072e7ad4b3d7cb4d5f63 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/dirfrag.sh'
fail | 7348185 | 2023-07-23 14:01:12 | 2023-07-23 14:02:01 | 2023-07-23 14:48:20 | 0:46:19 | 0:26:30 | 0:19:49 | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_ffsb thrashosds-health} | 4 |
Failure Reason: Command failed (workunit test suites/ffsb.sh) on smithi002 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c2f10834dee12ea9c20f072e7ad4b3d7cb4d5f63 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'
fail | 7348187 | 2023-07-23 14:01:13 | 2023-07-23 14:02:02 | 2023-07-23 14:50:40 | 0:48:38 | 0:26:58 | 0:21:40 | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_fsstress thrashosds-health} | 4 |
Failure Reason: Command failed (workunit test suites/fsstress.sh) on smithi037 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c2f10834dee12ea9c20f072e7ad4b3d7cb4d5f63 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'
fail | 7348189 | 2023-07-23 14:01:14 | 2023-07-23 14:02:03 | 2023-07-23 14:53:01 | 0:50:58 | 0:25:08 | 0:25:50 | smithi | main | centos | 8.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_suites_fsx thrashosds-health} | 4 |
Failure Reason: Command failed (workunit test suites/fsx.sh) on smithi092 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c2f10834dee12ea9c20f072e7ad4b3d7cb4d5f63 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'
fail | 7348191 | 2023-07-23 14:01:14 | 2023-07-23 14:02:04 | 2023-07-23 14:52:52 | 0:50:48 | 0:31:15 | 0:19:33 | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_fsync thrashosds-health} | 4 |
Failure Reason: Command failed (workunit test suites/fsync-tester.sh) on smithi033 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c2f10834dee12ea9c20f072e7ad4b3d7cb4d5f63 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsync-tester.sh'
fail | 7348193 | 2023-07-23 14:01:15 | 2023-07-23 14:02:05 | 2023-07-23 14:57:10 | 0:55:05 | 0:30:01 | 0:25:04 | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_pjd thrashosds-health} | 4 |
Failure Reason: Command failed (workunit test suites/pjd.sh) on smithi070 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c2f10834dee12ea9c20f072e7ad4b3d7cb4d5f63 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
fail | 7348195 | 2023-07-23 14:01:16 | 2023-07-23 14:02:06 | 2023-07-23 14:49:38 | 0:47:32 | 0:22:50 | 0:24:42 | smithi | main | centos | 8.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_suites_truncate_delay thrashosds-health} | 4 |
Failure Reason: Command failed on smithi003 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
fail | 7348197 | 2023-07-23 14:01:17 | 2023-07-23 14:02:06 | 2023-07-23 17:57:00 | 3:54:54 | 3:32:08 | 0:22:46 | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/rhel_8 tasks/rados_api_tests thrashosds-health} | 4 |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi006 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c2f10834dee12ea9c20f072e7ad4b3d7cb4d5f63 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 7348199 | 2023-07-23 14:01:18 | 2023-07-23 14:02:07 | 2023-07-23 15:40:44 | 1:38:37 | 1:13:32 | 0:25:05 | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/ubuntu_latest tasks/radosbench thrashosds-health} | 4 |
Failure Reason: reached maximum tries (501) after waiting for 3000 seconds
dead | 7348201 | 2023-07-23 14:01:19 | 2023-07-23 14:02:08 | 2023-07-24 02:23:35 | 12:21:27 | | | smithi | main | centos | 8.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/centos_8 tasks/readwrite thrashosds-health} | 4 |
Failure Reason: hit max job timeout
dead | 7348203 | 2023-07-23 14:01:19 | 2023-07-23 14:02:09 | 2023-07-24 02:24:39 | 12:22:30 | | | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/rhel_8 tasks/snaps-few-objects thrashosds-health} | 4 |
Failure Reason: hit max job timeout
dead | 7348205 | 2023-07-23 14:01:20 | 2023-07-23 14:02:10 | 2023-07-24 02:25:16 | 12:23:06 | | | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/ubuntu_latest tasks/snaps-many-objects thrashosds-health} | 4 |
Failure Reason: hit max job timeout