Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 7637920 2024-04-03 08:00:01 2024-04-03 08:00:23 2024-04-03 08:56:29 0:56:06 0:43:46 0:12:20 smithi main centos 8.stream fs:upgrade:mds_upgrade_sequence/{bluestore-bitmap centos_8.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
dead 7637921 2024-04-03 08:00:02 2024-04-03 08:01:33 2024-04-03 20:11:04 12:09:31 smithi main centos 8.stream fs:upgrade:mds_upgrade_sequence/{bluestore-bitmap centos_8.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

hit max job timeout
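(A dead status with this reason means the job exceeded Teuthology's maximum allowed job runtime and was killed by the harness; the roughly 12-hour Runtime and the empty Duration/In Waiting columns for this job are consistent with that.)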

fail 7637922 2024-04-03 08:00:03 2024-04-03 08:02:24 2024-04-03 12:02:51 4:00:27 3:43:50 0:16:37 smithi main centos 8.stream fs:upgrade:mds_upgrade_sequence/{bluestore-bitmap centos_8.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi167 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2be5a1775aca69fcf233a1e71f017149db5077bf TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh'
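(Exit status 124 is the code timeout(1) returns when it kills a command for overrunning its limit; the workunit here is wrapped in "timeout 3h", so fsstress.sh exceeded its 3-hour budget rather than failing on its own.)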

dead 7637923 2024-04-03 08:00:04 2024-04-03 08:03:24 2024-04-03 20:11:42 12:08:18 smithi main centos 8.stream fs:upgrade:mds_upgrade_sequence/{bluestore-bitmap centos_8.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

hit max job timeout

fail 7637924 2024-04-03 08:00:05 2024-04-03 08:03:55 2024-04-03 09:01:17 0:57:22 0:46:09 0:11:13 smithi main centos 8.stream fs:upgrade:mds_upgrade_sequence/{bluestore-bitmap centos_8.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"2024-04-03T08:50:00.000212+0000 mon.smithi012 (mon.0) 261 : cluster 3 [WRN] PG_DEGRADED: Degraded data redundancy: 43/219 objects degraded (19.635%), 17 pgs degraded" in cluster log