| User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail | Dead |
|---|---|---|---|---|---|---|---|---|---|---|
| yuriw | 2023-07-29 14:04:17 | 2023-07-29 16:14:28 | 2023-07-30 04:58:32 | 12:44:04 | powercycle | reef-release | smithi | 0484097 | 11 | 4 |
| Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| dead | 7356885 | | 2023-07-29 14:04:23 | 2023-07-29 16:14:28 | 2023-07-30 04:28:30 | 12:14:02 | | | smithi | main | centos | 8.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/centos_8 tasks/snaps-many-objects thrashosds-health} | 4 |
| fail | 7356886 | | 2023-07-29 14:04:24 | 2023-07-29 16:19:39 | 2023-07-29 16:47:43 | 0:28:04 | 0:21:07 | 0:06:57 | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-all-distro/rhel_8 tasks/admin_socket_objecter_requests thrashosds-health} | 4 |
| fail | 7356887 | | 2023-07-29 14:04:25 | 2023-07-29 16:20:30 | 2023-07-29 16:49:07 | 0:28:37 | 0:17:34 | 0:11:03 | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_kernel_untar_build thrashosds-health} | 4 |
| fail | 7356889 | | 2023-07-29 14:04:26 | 2023-07-29 16:21:12 | 2023-07-29 16:52:54 | 0:31:42 | 0:20:44 | 0:10:58 | smithi | main | centos | 8.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_misc thrashosds-health} | 4 |
| fail | 7356891 | | 2023-07-29 14:04:27 | 2023-07-29 16:21:53 | 2023-07-29 16:56:17 | 0:34:24 | 0:26:39 | 0:07:45 | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_ffsb thrashosds-health} | 4 |
| fail | 7356893 | | 2023-07-29 14:04:28 | 2023-07-29 16:25:04 | 2023-07-29 16:55:26 | 0:30:22 | 0:16:54 | 0:13:28 | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_fsstress thrashosds-health} | 4 |
| fail | 7356895 | | 2023-07-29 14:04:29 | 2023-07-29 16:27:17 | 2023-07-29 17:01:21 | 0:34:04 | 0:18:55 | 0:15:09 | smithi | main | centos | 8.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_suites_fsx thrashosds-health} | 4 |
| fail | 7356897 | | 2023-07-29 14:04:30 | 2023-07-29 16:34:29 | 2023-07-29 17:02:36 | 0:28:07 | 0:20:44 | 0:07:23 | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_fsync thrashosds-health} | 4 |
| fail | 7356899 | | 2023-07-29 14:04:30 | 2023-07-29 16:35:20 | 2023-07-29 17:07:09 | 0:31:49 | 0:18:19 | 0:13:30 | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_pjd thrashosds-health} | 4 |
| fail | 7356901 | | 2023-07-29 14:04:31 | 2023-07-29 16:38:51 | 2023-07-29 17:17:12 | 0:38:21 | 0:18:12 | 0:20:09 | smithi | main | centos | 8.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_suites_truncate_delay thrashosds-health} | 4 |
| fail | 7356903 | | 2023-07-29 14:04:32 | 2023-07-29 16:42:02 | 2023-07-29 20:19:33 | 3:37:31 | 3:28:34 | 0:08:57 | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/rhel_8 tasks/rados_api_tests thrashosds-health} | 4 |
| fail | 7356905 | | 2023-07-29 14:04:33 | 2023-07-29 16:44:34 | 2023-07-29 18:03:24 | 1:18:50 | 1:07:03 | 0:11:47 | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/ubuntu_latest tasks/radosbench thrashosds-health} | 4 |
| dead | 7356907 | | 2023-07-29 14:04:34 | 2023-07-29 16:45:35 | 2023-07-30 04:55:39 | 12:10:04 | | | smithi | main | centos | 8.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/centos_8 tasks/readwrite thrashosds-health} | 4 |
| dead | 7356909 | | 2023-07-29 14:04:35 | 2023-07-29 16:46:36 | 2023-07-30 04:56:19 | 12:09:43 | | | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/rhel_8 tasks/snaps-few-objects thrashosds-health} | 4 |
| dead | 7356911 | | 2023-07-29 14:04:36 | 2023-07-29 16:47:57 | 2023-07-30 04:58:32 | 12:10:35 | | | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/ubuntu_latest tasks/snaps-many-objects thrashosds-health} | 4 |

Failure Reasons:

7356885 (dead): hit max job timeout

7356886 (fail): Command failed on smithi070 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

7356887 (fail): Command failed (workunit test kernel_untar_build.sh) on smithi069 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0484097b0572e96f42cf6402cbe0ac8dcb046577 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

7356889 (fail): Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi037 with status 23: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0484097b0572e96f42cf6402cbe0ac8dcb046577 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/multiple_rsync.sh'

7356891 (fail): Command failed (workunit test suites/ffsb.sh) on smithi087 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0484097b0572e96f42cf6402cbe0ac8dcb046577 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

7356893 (fail): Command failed (workunit test suites/fsstress.sh) on smithi099 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0484097b0572e96f42cf6402cbe0ac8dcb046577 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

7356895 (fail): Command failed (workunit test suites/fsx.sh) on smithi057 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0484097b0572e96f42cf6402cbe0ac8dcb046577 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

7356897 (fail): Command failed (workunit test suites/fsync-tester.sh) on smithi006 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0484097b0572e96f42cf6402cbe0ac8dcb046577 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsync-tester.sh'

7356899 (fail): Command failed (workunit test suites/pjd.sh) on smithi055 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0484097b0572e96f42cf6402cbe0ac8dcb046577 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

7356901 (fail): Command failed on smithi073 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

7356903 (fail): Command failed (workunit test rados/test.sh) on smithi018 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0484097b0572e96f42cf6402cbe0ac8dcb046577 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

7356905 (fail): reached maximum tries (501) after waiting for 3000 seconds

7356907 (dead): hit max job timeout

7356909 (dead): hit max job timeout

7356911 (dead): hit max job timeout