Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
dead 7356409 2023-07-28 23:32:03 2023-07-29 08:51:49 2023-07-29 21:02:42 12:10:53 smithi main centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/centos_8 tasks/snaps-many-objects thrashosds-health} 4
Failure Reason:
hit max job timeout

fail 7356411 2023-07-28 23:32:14 2023-07-29 08:54:30 2023-07-29 10:06:18 1:11:48 0:59:38 0:12:10 smithi main rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-all-distro/rhel_8 tasks/admin_socket_objecter_requests thrashosds-health} 4
Failure Reason:
reached maximum tries (351) after waiting for 2100 seconds

fail 7356415 2023-07-28 23:32:23 2023-07-29 09:00:12 2023-07-29 09:33:10 0:32:58 0:18:16 0:14:42 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_kernel_untar_build thrashosds-health} 4
Failure Reason:
Command failed (workunit test kernel_untar_build.sh) on smithi029 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0484097b0572e96f42cf6402cbe0ac8dcb046577 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

fail 7356417 2023-07-28 23:32:28 2023-07-29 09:03:24 2023-07-29 09:41:55 0:38:31 0:26:14 0:12:17 smithi main centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_misc thrashosds-health} 4
Failure Reason:
Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi044 with status 11: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0484097b0572e96f42cf6402cbe0ac8dcb046577 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/multiple_rsync.sh'

fail 7356419 2023-07-28 23:32:29 2023-07-29 09:06:35 2023-07-29 09:36:45 0:30:10 0:21:46 0:08:24 smithi main rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_ffsb thrashosds-health} 4
Failure Reason:
Command failed (workunit test suites/ffsb.sh) on smithi002 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0484097b0572e96f42cf6402cbe0ac8dcb046577 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

fail 7356422 2023-07-28 23:32:35 2023-07-29 09:09:17 2023-07-29 09:39:33 0:30:16 0:16:26 0:13:50 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_fsstress thrashosds-health} 4
Failure Reason:
Command failed on smithi139 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 7356425 2023-07-28 23:32:40 2023-07-29 09:13:19 2023-07-29 09:42:54 0:29:35 0:18:10 0:11:25 smithi main centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_suites_fsx thrashosds-health} 4
Failure Reason:
Command failed (workunit test suites/fsx.sh) on smithi007 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0484097b0572e96f42cf6402cbe0ac8dcb046577 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

fail 7356427 2023-07-28 23:32:56 2023-07-29 09:18:10 2023-07-29 09:47:17 0:29:07 0:23:13 0:05:54 smithi main rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_fsync thrashosds-health} 4
Failure Reason:
Command failed (workunit test suites/fsync-tester.sh) on smithi032 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0484097b0572e96f42cf6402cbe0ac8dcb046577 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsync-tester.sh'

fail 7356431 2023-07-28 23:33:02 2023-07-29 09:21:22 2023-07-29 09:49:29 0:28:07 0:17:47 0:10:20 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_pjd thrashosds-health} 4
Failure Reason:
Command failed on smithi012 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 7356434 2023-07-28 23:33:12 2023-07-29 09:28:45 2023-07-29 10:00:55 0:32:10 0:20:44 0:11:26 smithi main centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_suites_truncate_delay thrashosds-health} 4
Failure Reason:
Command failed on smithi018 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 7356437 2023-07-28 23:33:18 2023-07-29 09:30:46 2023-07-29 12:56:54 3:26:08 3:18:06 0:08:02 smithi main rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/rhel_8 tasks/rados_api_tests thrashosds-health} 4
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi005 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0484097b0572e96f42cf6402cbe0ac8dcb046577 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7356440 2023-07-28 23:33:24 2023-07-29 09:33:18 2023-07-29 10:51:52 1:18:34 1:05:57 0:12:37 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/ubuntu_latest tasks/radosbench thrashosds-health} 4
Failure Reason:
reached maximum tries (501) after waiting for 3000 seconds

fail 7356443 2023-07-28 23:33:30 2023-07-29 09:34:59 2023-07-29 10:12:22 0:37:23 0:20:36 0:16:47 smithi main centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/centos_8 tasks/readwrite thrashosds-health} 4
Failure Reason:
Command failed on smithi002 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

dead 7356446 2023-07-28 23:33:46 2023-07-29 09:39:41 2023-07-29 21:51:19 12:11:38 smithi main rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/rhel_8 tasks/snaps-few-objects thrashosds-health} 4
Failure Reason:
hit max job timeout

dead 7356448 2023-07-28 23:33:53 2023-07-29 09:41:13 2023-07-29 21:53:10 12:11:57 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/ubuntu_latest tasks/snaps-many-objects thrashosds-health} 4
Failure Reason:
hit max job timeout