Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
dead 7358277 2023-08-02 17:14:22 2023-08-02 17:17:14 2023-08-03 05:26:32 12:09:18 smithi wip-62286 centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/centos_8 tasks/snaps-many-objects thrashosds-health} 4
Failure Reason: hit max job timeout

fail 7358278 2023-08-02 17:14:23 2023-08-02 17:17:15 2023-08-02 17:51:46 0:34:31 0:18:12 0:16:19 smithi wip-62286 ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_kernel_untar_build thrashosds-health} 4
Failure Reason: Command failed (workunit test kernel_untar_build.sh) on smithi044 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=141d30dd0cfc1a5d03cbcad973791fb755fde0b1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

fail 7358279 2023-08-02 17:14:23 2023-08-02 17:17:15 2023-08-02 18:00:02 0:42:47 0:29:43 0:13:04 smithi wip-62286 centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_misc thrashosds-health} 4
Failure Reason: Command failed (workunit test fs/misc/filelock_deadlock.py) on smithi022 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=141d30dd0cfc1a5d03cbcad973791fb755fde0b1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/filelock_deadlock.py'

fail 7358280 2023-08-02 17:14:24 2023-08-02 17:17:15 2023-08-02 17:52:58 0:35:43 0:24:05 0:11:38 smithi wip-62286 rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_ffsb thrashosds-health} 4
Failure Reason: Command failed (workunit test suites/ffsb.sh) on smithi042 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=141d30dd0cfc1a5d03cbcad973791fb755fde0b1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

fail 7358281 2023-08-02 17:14:25 2023-08-02 17:17:16 2023-08-02 17:47:12 0:29:56 0:16:39 0:13:17 smithi wip-62286 ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_fsstress thrashosds-health} 4
Failure Reason: Command failed on smithi043 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 7358282 2023-08-02 17:14:26 2023-08-02 17:17:16 2023-08-02 17:55:12 0:37:56 0:23:38 0:14:18 smithi wip-62286 centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/centos_8 tasks/cfuse_workunit_suites_fsx thrashosds-health} 4
Failure Reason: Command failed (workunit test suites/fsx.sh) on smithi114 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=141d30dd0cfc1a5d03cbcad973791fb755fde0b1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

fail 7358283 2023-08-02 17:14:26 2023-08-02 17:17:17 2023-08-02 17:49:05 0:31:48 0:21:07 0:10:41 smithi wip-62286 rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_fsync thrashosds-health} 4
Failure Reason: Command failed (workunit test suites/fsync-tester.sh) on smithi019 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=141d30dd0cfc1a5d03cbcad973791fb755fde0b1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsync-tester.sh'

dead 7358284 2023-08-02 17:14:27 2023-08-02 17:17:17 2023-08-02 17:22:00 0:04:43 smithi wip-62286 ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_pjd thrashosds-health} 4
Failure Reason: Error reimaging machines: Expected smithi018's OS to be ubuntu 20.04 but found rhel 8.6

dead 7358285 2023-08-02 17:14:28 2023-08-02 17:17:17 2023-08-02 17:36:27 0:19:10 smithi wip-62286 rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/rhel_8 tasks/rados_api_tests thrashosds-health} 4
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

fail 7358286 2023-08-02 17:14:29 2023-08-02 17:17:18 2023-08-02 18:41:14 1:23:56 1:06:36 0:17:20 smithi wip-62286 ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/ubuntu_latest tasks/radosbench thrashosds-health} 4
Failure Reason: reached maximum tries (501) after waiting for 3000 seconds

dead 7358287 2023-08-02 17:14:29 2023-08-02 17:17:18 2023-08-03 05:30:17 12:12:59 smithi wip-62286 centos 8.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/centos_8 tasks/readwrite thrashosds-health} 4
Failure Reason: hit max job timeout

dead 7358288 2023-08-02 17:14:30 2023-08-02 17:17:18 2023-08-03 05:29:14 12:11:56 smithi wip-62286 rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/rhel_8 tasks/snaps-few-objects thrashosds-health} 4
Failure Reason: hit max job timeout

dead 7358289 2023-08-02 17:14:31 2023-08-02 17:17:19 2023-08-03 05:32:32 12:15:13 smithi wip-62286 ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/ubuntu_latest tasks/snaps-many-objects thrashosds-health} 4
Failure Reason: hit max job timeout