Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
(Duration and In Waiting are blank for jobs marked dead.)
dead 7358035 2023-08-01 19:18:24 2023-08-01 19:19:26 2023-08-02 07:41:16 12:21:50 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/ubuntu_latest tasks/snaps-many-objects thrashosds-health} 4
Failure Reason:
hit max job timeout

fail 7358036 2023-08-01 19:18:25 2023-08-01 19:19:26 2023-08-01 20:02:46 0:43:20 0:21:55 0:21:25 smithi main centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-all-distro/centos_8 tasks/admin_socket_objecter_requests thrashosds-health} 4
Failure Reason:
Command failed on smithi006 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
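
Note on the status 124 failures here and below: 124 is the exit code the coreutils timeout wrapper returns when the command it supervises exceeds its limit, so these jobs failed because ceph osd dump did not answer within the 120-second budget, not because the command itself errored. A minimal sketch of rerunning the same check by hand on a test node (assuming an admin keyring is available there; the teuthology-specific adjust-ulimits/ceph-coverage wrapper is omitted):

    # Re-issue the timed-out health query; an exit status of 124 again would
    # mean `timeout` killed it because the mons did not reply within 120 s.
    timeout 120 ceph --cluster ceph osd dump --format=json
    echo "exit status: $?"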

fail 7358037 2023-08-01 19:18:25 2023-08-01 19:19:27 2023-08-01 20:09:03 0:49:36 0:28:30 0:21:06 smithi main rhel 8.4 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_kernel_untar_build thrashosds-health} 4
Failure Reason:
Command failed (workunit test kernel_untar_build.sh) on smithi032 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a185a2e2933b21246ce41dd13f03fd6f8f84f03d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'
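
All of the workunit failures in this run share the invocation shown above: teuthology creates a scratch directory inside the FUSE mount, exports the CEPH_* environment it needs, and runs the named script from the cloned qa/workunits tree under a timeout. The same command, reflowed here purely for readability (the values are exactly those from the failure reason above):

    mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp &&
    cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp &&
    CEPH_CLI_TEST_DUP_COMMAND=1 \
    CEPH_REF=a185a2e2933b21246ce41dd13f03fd6f8f84f03d \
    TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" \
    PATH=$PATH:/usr/sbin \
    CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 \
    CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 \
    CEPH_MNT=/home/ubuntu/cephtest/mnt.0 \
    adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage \
      timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh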

fail 7358038 2023-08-01 19:18:26 2023-08-01 19:19:27 2023-08-01 20:04:26 0:44:59 0:20:24 0:24:35 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_misc thrashosds-health} 4
Failure Reason:
Command failed (workunit test fs/misc/dirfrag.sh) on smithi039 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a185a2e2933b21246ce41dd13f03fd6f8f84f03d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/dirfrag.sh'

fail 7358039 2023-08-01 19:18:27 2023-08-01 19:19:28 2023-08-01 20:02:55 0:43:27 0:23:06 0:20:21 smithi main centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_suites_ffsb thrashosds-health} 4
Failure Reason:
Command failed (workunit test suites/ffsb.sh) on smithi045 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a185a2e2933b21246ce41dd13f03fd6f8f84f03d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

fail 7358040 2023-08-01 19:18:28 2023-08-01 19:19:28 2023-08-01 20:04:22 0:44:54 0:24:25 0:20:29 smithi main rhel 8.4 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_fsstress thrashosds-health} 4
Failure Reason:
Command failed on smithi066 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 7358041 2023-08-01 19:18:29 2023-08-01 19:19:28 2023-08-01 19:57:08 0:37:40 0:18:35 0:19:05 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_fsx thrashosds-health} 4
Failure Reason:
Command failed (workunit test suites/fsx.sh) on smithi017 with status 128: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a185a2e2933b21246ce41dd13f03fd6f8f84f03d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

fail 7358042 2023-08-01 19:18:29 2023-08-01 19:19:29 2023-08-01 20:05:17 0:45:48 0:18:54 0:26:54 smithi main centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_suites_fsync thrashosds-health} 4
Failure Reason:
Command failed on smithi061 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 7358043 2023-08-01 19:18:30 2023-08-01 19:19:29 2023-08-01 20:06:28 0:46:59 0:25:35 0:21:24 smithi main rhel 8.4 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/filestore-xfs powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_pjd thrashosds-health} 4
Failure Reason:
Command failed on smithi055 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 7358044 2023-08-01 19:18:31 2023-08-01 19:19:29 2023-08-01 20:01:46 0:42:17 0:19:02 0:23:15 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_truncate_delay thrashosds-health} 4
Failure Reason:
Command failed on smithi046 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 7358045 2023-08-01 19:18:31 2023-08-01 19:19:30 2023-08-01 23:04:15 3:44:45 3:23:08 0:21:37 smithi main centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-all-distro/centos_8 tasks/rados_api_tests thrashosds-health} 4
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi073 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a185a2e2933b21246ce41dd13f03fd6f8f84f03d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7358046 2023-08-01 19:18:32 2023-08-01 19:19:30 2023-08-01 21:01:02 1:41:32 1:21:00 0:20:32 smithi main rhel 8.4 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/rhel_8 tasks/radosbench thrashosds-health} 4
Failure Reason:
Command failed on smithi084 with status 1: "/bin/sh -c 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rados --no-log-to-stderr --name client.0 -b 65536 --object-size 65536 -p unique_pool_1 bench 90 write'"
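
Unlike the wrapper timeouts above, this job failed inside the benchmark itself (status 1 from the rados client). Stripped of the coverage wrapper, the workload is a plain rados bench write pass; a rough equivalent for rerunning it by hand is sketched below (unique_pool_1 is created by the test harness, so point it at an existing pool instead):

    # 90-second write benchmark with 64 KiB objects, as driven by the radosbench task.
    rados --no-log-to-stderr --name client.0 \
          -b 65536 --object-size 65536 \
          -p unique_pool_1 bench 90 write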

fail 7358047 2023-08-01 19:18:33 2023-08-01 19:19:31 2023-08-01 20:04:22 0:44:51 0:21:37 0:23:14 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/ubuntu_latest tasks/readwrite thrashosds-health} 4
Failure Reason:
Command failed on smithi088 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

dead 7358048 2023-08-01 19:18:34 2023-08-01 19:19:31 2023-08-02 07:39:07 12:19:36 smithi main centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/centos_8 tasks/snaps-few-objects thrashosds-health} 4
Failure Reason:
hit max job timeout

dead 7358049 2023-08-01 19:18:34 2023-08-01 19:19:31 2023-08-02 07:36:37 12:17:06 smithi main rhel 8.4 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/rhel_8 tasks/snaps-many-objects thrashosds-health} 4
Failure Reason:
hit max job timeout