Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
(Runtime = Duration + In Waiting; dead and reimage-failed rows record fewer fields.)
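As a worked check of those three columns: job 7354541 below waited 0:15:09 and ran for 0:20:59, for a total Runtime of 0:36:08.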
dead 7354539 2023-07-27 14:01:07 2023-07-27 14:18:37 2023-07-28 02:32:27 12:13:50 smithi main centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/centos_8 tasks/snaps-many-objects thrashosds-health} 4
Failure Reason:

hit max job timeout

dead 7354540 2023-07-27 14:01:11 2023-07-27 14:22:24 2023-07-27 14:48:09 0:25:45 smithi main rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-all-distro/rhel_8 tasks/admin_socket_objecter_requests thrashosds-health} 4
Failure Reason:

Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

fail 7354541 2023-07-27 14:01:17 2023-07-27 14:22:40 2023-07-27 14:58:48 0:36:08 0:20:59 0:15:09 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_kernel_untar_build thrashosds-health} 4
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi016 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c1444501ab7918ce42bdc26b9d860ad26e34dd69 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

fail 7354542 2023-07-27 14:01:22 2023-07-27 15:11:01 0:29:19 smithi main centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_misc thrashosds-health} 4
Failure Reason:

Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi061 with status 23: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c1444501ab7918ce42bdc26b9d860ad26e34dd69 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/multiple_rsync.sh'

fail 7354543 2023-07-27 14:01:23 2023-07-27 14:25:10 2023-07-27 15:06:35 0:41:25 0:24:48 0:16:37 smithi main rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_ffsb thrashosds-health} 4
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on smithi053 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c1444501ab7918ce42bdc26b9d860ad26e34dd69 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

fail 7354544 2023-07-27 14:01:29 2023-07-27 15:07:00 0:20:54 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_fsstress thrashosds-health} 4
Failure Reason:

Command failed on smithi049 with status 1: 'sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp'

fail 7354545 2023-07-27 14:01:35 2023-07-27 14:32:23 2023-07-27 15:08:20 0:35:57 0:20:55 0:15:02 smithi main centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_suites_fsx thrashosds-health} 4
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi064 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c1444501ab7918ce42bdc26b9d860ad26e34dd69 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

fail 7354546 2023-07-27 14:01:46 2023-07-27 14:33:25 2023-07-27 15:25:20 0:51:55 0:38:10 0:13:45 smithi main rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_fsync thrashosds-health} 4
Failure Reason:

Command failed on smithi029 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 7354547 2023-07-27 14:02:02 2023-07-27 14:37:08 2023-07-27 15:25:20 0:48:12 0:27:55 0:20:17 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_pjd thrashosds-health} 4
Failure Reason:

Command failed on smithi018 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
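
Both "ceph osd dump" failures above (jobs 7354546 and 7354547) exited with status 124, which is the code GNU timeout returns when it kills a command that exceeds its time limit. A minimal shell sketch of the same check, with the 120-second limit copied from the failure lines and the teuthology wrappers (adjust-ulimits, ceph-coverage) omitted; it assumes a reachable cluster named "ceph":

    timeout 120 ceph --cluster ceph osd dump --format=json
    echo $?   # 124 if ceph was killed at the 120 s limit; otherwise ceph's own exit status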

pass 7354548 2023-07-27 14:02:13 2023-07-27 14:41:39 2023-07-27 15:17:17 0:35:38 0:21:41 0:13:57 smithi main centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_suites_truncate_delay thrashosds-health} 4
dead 7354549 2023-07-27 14:02:23 2023-07-27 14:41:50 2023-07-27 15:58:10 1:16:20 smithi main rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/rhel_8 tasks/rados_api_tests thrashosds-health} 4
Failure Reason:

Error reimaging machines: Expected smithi191's OS to be rhel 8.6 but found centos 9

dead 7354550 2023-07-27 14:02:38 2023-07-27 14:42:55 2023-07-27 14:48:14 0:05:19 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/ubuntu_latest tasks/radosbench thrashosds-health} 4
Failure Reason:

Error reimaging machines: Expected smithi146's OS to be ubuntu 20.04 but found centos 8

fail 7354551 2023-07-27 14:02:39 2023-07-27 14:43:01 2023-07-27 15:54:58 1:11:57 smithi main centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/centos_8 tasks/readwrite thrashosds-health} 4
Failure Reason:

machine smithi146.front.sepia.ceph.com is locked by scheduled_rfriedma@teuthology, not scheduled_yuriw@teuthology

dead 7354552 2023-07-27 14:02:40 2023-07-27 14:43:01 2023-07-27 15:53:14 1:10:13 smithi main rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/rhel_8 tasks/snaps-few-objects thrashosds-health} 4
Failure Reason:

Error reimaging machines: Expected smithi042's OS to be rhel 8.6 but found ubuntu 22.04

dead 7354553 2023-07-27 14:02:41 2023-07-27 14:43:02 2023-07-27 14:57:13 0:14:11 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/ubuntu_latest tasks/snaps-many-objects thrashosds-health} 4
Failure Reason:

Error reimaging machines: Expected smithi204's OS to be ubuntu 20.04 but found centos 8