User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-08-01 19:17:42 | 2023-08-01 19:18:36 | 2023-08-02 07:40:04 | 12:21:28 | powercycle | reef | smithi | d6d42aa | 2 | 9 | 4 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 7358018 | | 2023-08-01 19:17:50 | 2023-08-01 19:18:35 | 2023-08-02 07:34:27 | 12:15:52 | | | smithi | main | centos | 8.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/centos_8 tasks/snaps-many-objects thrashosds-health} | 4 |
Failure Reason: hit max job timeout
pass | 7358019 | | 2023-08-01 19:17:50 | 2023-08-01 19:18:35 | 2023-08-01 20:02:02 | 0:43:27 | 0:25:50 | 0:17:37 | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-all-distro/rhel_8 tasks/admin_socket_objecter_requests thrashosds-health} | 4 |
fail | 7358020 | | 2023-08-01 19:17:51 | 2023-08-01 19:18:35 | 2023-08-01 20:01:04 | 0:42:29 | 0:22:25 | 0:20:04 | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_kernel_untar_build thrashosds-health} | 4 |
Failure Reason: Command failed (workunit test kernel_untar_build.sh) on smithi022 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=40f7fe608979750e0b6fbfbcba004b5c5f7d4522 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'
fail | 7358021 | | 2023-08-01 19:17:52 | 2023-08-01 19:18:36 | 2023-08-01 19:59:59 | 0:41:23 | 0:22:11 | 0:19:12 | smithi | main | centos | 8.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_misc thrashosds-health} | 4 |
Failure Reason: Command failed (workunit test fs/misc/dirfrag.sh) on smithi092 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=40f7fe608979750e0b6fbfbcba004b5c5f7d4522 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/dirfrag.sh'
fail | 7358022 | | 2023-08-01 19:17:53 | 2023-08-01 19:18:36 | 2023-08-01 19:53:14 | 0:34:38 | 0:24:58 | 0:09:40 | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_ffsb thrashosds-health} | 4 |
Failure Reason: Command failed (workunit test suites/ffsb.sh) on smithi103 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=40f7fe608979750e0b6fbfbcba004b5c5f7d4522 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'
fail | 7358023 | | 2023-08-01 19:17:54 | 2023-08-01 19:18:36 | 2023-08-01 19:51:18 | 0:32:42 | 0:17:23 | 0:15:19 | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_fsstress thrashosds-health} | 4 |
Failure Reason: Command failed on smithi043 with status 1: 'sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp'
fail | 7358024 | | 2023-08-01 19:17:54 | 2023-08-01 19:18:37 | 2023-08-01 20:05:45 | 0:47:08 | 0:22:13 | 0:24:55 | smithi | main | centos | 8.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_suites_fsx thrashosds-health} | 4 |
Failure Reason: Command failed (workunit test suites/fsx.sh) on smithi019 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=40f7fe608979750e0b6fbfbcba004b5c5f7d4522 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'
fail | 7358025 | | 2023-08-01 19:17:55 | 2023-08-01 19:18:37 | 2023-08-01 20:08:03 | 0:49:26 | 0:35:00 | 0:14:26 | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_fsync thrashosds-health} | 4 |
Failure Reason: reached maximum tries (51) after waiting for 300 seconds
fail | 7358026 | | 2023-08-01 19:17:56 | 2023-08-01 19:18:38 | 2023-08-01 20:03:00 | 0:44:22 | 0:23:02 | 0:21:20 | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_pjd thrashosds-health} | 4 |
Failure Reason: Command failed (workunit test suites/pjd.sh) on smithi083 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=40f7fe608979750e0b6fbfbcba004b5c5f7d4522 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
pass | 7358027 | | 2023-08-01 19:17:57 | 2023-08-01 19:18:38 | 2023-08-01 20:03:01 | 0:44:23 | 0:20:54 | 0:23:29 | smithi | main | centos | 8.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_suites_truncate_delay thrashosds-health} | 4 |
fail | 7358028 | | 2023-08-01 19:17:57 | 2023-08-01 19:18:38 | 2023-08-01 22:58:44 | 3:40:06 | 3:20:13 | 0:19:53 | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/rhel_8 tasks/rados_api_tests thrashosds-health} | 4 |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi012 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=40f7fe608979750e0b6fbfbcba004b5c5f7d4522 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 7358029 | | 2023-08-01 19:17:58 | 2023-08-01 19:18:39 | 2023-08-01 20:49:59 | 1:31:20 | 1:10:21 | 0:20:59 | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/ubuntu_latest tasks/radosbench thrashosds-health} | 4 |
Failure Reason: reached maximum tries (501) after waiting for 3000 seconds
dead | 7358030 | | 2023-08-01 19:17:59 | 2023-08-01 19:18:39 | 2023-08-02 07:36:14 | 12:17:35 | | | smithi | main | centos | 8.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/centos_8 tasks/readwrite thrashosds-health} | 4 |
Failure Reason: hit max job timeout
dead | 7358031 | | 2023-08-01 19:18:00 | 2023-08-01 19:18:39 | 2023-08-01 19:42:40 | 0:24:01 | | | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/rhel_8 tasks/snaps-few-objects thrashosds-health} | 4 |
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds
dead | 7358032 | | 2023-08-01 19:18:00 | 2023-08-01 19:18:40 | 2023-08-02 07:40:04 | 12:21:24 | | | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/ubuntu_latest tasks/snaps-many-objects thrashosds-health} | 4 |
Failure Reason: hit max job timeout