User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2024-09-21 03:08:26 | 2024-09-21 09:03:11 | 2024-09-21 11:37:50 | 2:34:39 | upgrade | main | smithi | 62da54b | 13 | 27 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7913633 | 2024-09-21 03:11:06 | 2024-09-21 09:02:40 | 2024-09-21 09:32:20 | 0:29:40 | 0:19:07 | 0:10:33 | smithi | main | centos | 9.stream | upgrade/cephfs/featureful_client/old_client/{bluestore-bitmap centos_9.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-from/reef 1-client 2-upgrade 3-compat_client/no}} | 3 | |
fail | 7913634 | 2024-09-21 03:11:07 | 2024-09-21 09:03:11 | 2024-09-21 09:20:46 | 0:17:35 | 0:07:40 | 0:09:55 | smithi | main | ubuntu | 22.04 | upgrade/quincy-x/filestore-remove-check/{0-cluster/{openstack start} 1-ceph-install/quincy 2-upgrade objectstore/filestore-xfs ubuntu_latest} | 1 | |
Failure Reason:
Command failed on smithi187 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=17.2.7-1jammy ceph-mds=17.2.7-1jammy ceph-mgr=17.2.7-1jammy ceph-common=17.2.7-1jammy ceph-fuse=17.2.7-1jammy ceph-test=17.2.7-1jammy ceph-volume=17.2.7-1jammy radosgw=17.2.7-1jammy python3-rados=17.2.7-1jammy python3-rgw=17.2.7-1jammy python3-cephfs=17.2.7-1jammy python3-rbd=17.2.7-1jammy libcephfs2=17.2.7-1jammy librados2=17.2.7-1jammy librbd1=17.2.7-1jammy rbd-fuse=17.2.7-1jammy' |
fail | 7913635 | 2024-09-21 03:11:08 | 2024-09-21 09:03:11 | 2024-09-21 09:48:18 | 0:45:07 | 0:35:55 | 0:09:12 | smithi | main | centos | 9.stream | upgrade/reef-x/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi042 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=reef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
pass | 7913636 | 2024-09-21 03:11:09 | 2024-09-21 09:03:21 | 2024-09-21 09:42:38 | 0:39:17 | 0:28:47 | 0:10:30 | smithi | main | centos | 9.stream | upgrade/telemetry-upgrade/quincy-x/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks} | 2 | |
pass | 7913637 | 2024-09-21 03:11:10 | 2024-09-21 09:03:42 | 2024-09-21 09:48:02 | 0:44:20 | 0:31:44 | 0:12:36 | smithi | main | centos | 9.stream | upgrade/cephfs/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
fail | 7913638 | 2024-09-21 03:11:11 | 2024-09-21 09:07:03 | 2024-09-21 09:52:02 | 0:44:59 | 0:35:56 | 0:09:03 | smithi | main | centos | 9.stream | upgrade/quincy-x/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi022 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
pass | 7913639 | 2024-09-21 03:11:12 | 2024-09-21 09:07:03 | 2024-09-21 09:23:55 | 0:16:52 | 0:07:57 | 0:08:55 | smithi | main | centos | 9.stream | upgrade/cephfs/nofs/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} no-mds-cluster overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn pg_health} tasks/{0-from/reef 1-upgrade}} | 1 | |
fail | 7913640 | 2024-09-21 03:11:13 | 2024-09-21 09:07:03 | 2024-09-21 11:35:16 | 2:28:13 | 2:18:50 | 0:09:23 | smithi | main | ubuntu | 22.04 | upgrade/reef-x/stress-split/{0-distro/ubuntu_22.04 0-roles 1-start 2-first-half-tasks/rbd_api 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi001 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=reef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 7913641 | 2024-09-21 03:11:14 | 2024-09-21 09:07:34 | 2024-09-21 09:25:15 | 0:17:41 | 0:07:48 | 0:09:53 | smithi | main | ubuntu | 22.04 | upgrade/quincy-x/stress-split/{0-distro/ubuntu_22.04 0-roles 1-start 2-first-half-tasks/rbd_api 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} | 2 | |
Failure Reason:
Command failed on smithi179 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=17.2.7-1jammy cephadm=17.2.7-1jammy ceph-mds=17.2.7-1jammy ceph-mgr=17.2.7-1jammy ceph-common=17.2.7-1jammy ceph-fuse=17.2.7-1jammy ceph-test=17.2.7-1jammy radosgw=17.2.7-1jammy python3-rados=17.2.7-1jammy python3-rgw=17.2.7-1jammy python3-cephfs=17.2.7-1jammy python3-rbd=17.2.7-1jammy libcephfs2=17.2.7-1jammy libcephfs-dev=17.2.7-1jammy librados2=17.2.7-1jammy librbd1=17.2.7-1jammy rbd-fuse=17.2.7-1jammy' |
fail | 7913642 | 2024-09-21 03:11:15 | 2024-09-21 09:07:44 | 2024-09-21 10:40:08 | 1:32:24 | 1:21:17 | 0:11:07 | smithi | main | centos | 9.stream | upgrade/reef-x/stress-split/{0-distro/centos_9.stream 0-roles 1-start 2-first-half-tasks/readwrite 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi178 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=reef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
pass | 7913643 | 2024-09-21 03:11:16 | 2024-09-21 09:08:35 | 2024-09-21 09:37:32 | 0:28:57 | 0:17:29 | 0:11:28 | smithi | main | centos | 9.stream | upgrade/cephfs/featureful_client/upgraded_client/{bluestore-bitmap centos_9.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-from/reef 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} | 3 | |
fail | 7913644 | 2024-09-21 03:11:18 | 2024-09-21 09:08:45 | 2024-09-21 10:51:23 | 1:42:38 | 1:31:12 | 0:11:26 | smithi | main | centos | 9.stream | upgrade/quincy-x/stress-split/{0-distro/centos_9.stream 0-roles 1-start 2-first-half-tasks/readwrite 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi137 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 7913645 | 2024-09-21 03:11:19 | 2024-09-21 09:10:15 | 2024-09-21 09:28:58 | 0:18:43 | 0:09:11 | 0:09:32 | smithi | main | centos | 9.stream | upgrade/cephfs/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (30) after waiting for 300 seconds |
fail | 7913646 | 2024-09-21 03:11:20 | 2024-09-21 09:10:26 | 2024-09-21 10:53:01 | 1:42:35 | 1:29:49 | 0:12:46 | smithi | main | centos | 9.stream | upgrade/reef-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/snaps-few-objects 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi076 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=reef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 7913647 | 2024-09-21 03:11:21 | 2024-09-21 09:13:37 | 2024-09-21 10:57:22 | 1:43:45 | 1:32:45 | 0:11:00 | smithi | main | centos | 9.stream | upgrade/quincy-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/snaps-few-objects 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi017 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
pass | 7913648 | 2024-09-21 03:11:22 | 2024-09-21 09:14:27 | 2024-09-21 09:42:23 | 0:27:56 | 0:16:00 | 0:11:56 | smithi | main | centos | 9.stream | upgrade/cephfs/featureful_client/old_client/{bluestore-bitmap centos_9.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-from/squid 1-client 2-upgrade 3-compat_client/yes}} | 3 | |
fail | 7913649 | 2024-09-21 03:11:23 | 2024-09-21 09:17:08 | 2024-09-21 09:46:10 | 0:29:02 | 0:19:27 | 0:09:35 | smithi | main | centos | 9.stream | upgrade/reef-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/radosbench 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} | 2 | |
Failure Reason:
Command failed on smithi053 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 41b6d1ee-77fc-11ef-baf6-efdc52797490 -e sha1=62da54b4b83a96ca9976b7e1d0c8272a564a2208 -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --limit 1'" |
fail | 7913650 | 2024-09-21 03:11:24 | 2024-09-21 09:17:18 | 2024-09-21 09:58:22 | 0:41:04 | 0:31:04 | 0:10:00 | smithi | main | centos | 9.stream | upgrade/cephfs/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
"2024-09-21T09:50:00.000124+0000 mon.smithi006 (mon.0) 470 : cluster [ERR] fs cephfs is offline because no MDS is active for it." in cluster log |
fail | 7913651 | 2024-09-21 03:11:25 | 2024-09-21 09:17:39 | 2024-09-21 11:09:52 | 1:52:13 | 1:41:12 | 0:11:01 | smithi | main | centos | 9.stream | upgrade/quincy-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/radosbench 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi107 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 7913652 | 2024-09-21 03:11:26 | 2024-09-21 09:18:39 | 2024-09-21 11:13:07 | 1:54:28 | 1:42:22 | 0:12:06 | smithi | main | ubuntu | 22.04 | upgrade/reef-x/stress-split/{0-distro/ubuntu_22.04 0-roles 1-start 2-first-half-tasks/rbd-cls 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi118 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=reef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 7913653 | 2024-09-21 03:11:27 | 2024-09-21 09:20:40 | 2024-09-21 09:38:42 | 0:18:02 | 0:07:39 | 0:10:23 | smithi | main | ubuntu | 22.04 | upgrade/quincy-x/stress-split/{0-distro/ubuntu_22.04 0-roles 1-start 2-first-half-tasks/rbd-cls 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} | 2 | |
Failure Reason:
Command failed on smithi187 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=17.2.7-1jammy cephadm=17.2.7-1jammy ceph-mds=17.2.7-1jammy ceph-mgr=17.2.7-1jammy ceph-common=17.2.7-1jammy ceph-fuse=17.2.7-1jammy ceph-test=17.2.7-1jammy radosgw=17.2.7-1jammy python3-rados=17.2.7-1jammy python3-rgw=17.2.7-1jammy python3-cephfs=17.2.7-1jammy python3-rbd=17.2.7-1jammy libcephfs2=17.2.7-1jammy libcephfs-dev=17.2.7-1jammy librados2=17.2.7-1jammy librbd1=17.2.7-1jammy rbd-fuse=17.2.7-1jammy' |
fail | 7913654 | 2024-09-21 03:11:28 | 2024-09-21 09:21:00 | 2024-09-21 10:05:33 | 0:44:33 | 0:34:06 | 0:10:27 | smithi | main | ubuntu | 22.04 | upgrade/telemetry-upgrade/reef-x/{0-random-distro$/{ubuntu_22.04} 0-start 1-tasks} | 2 | |
Failure Reason:
Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6f2365a6-77fd-11ef-baf6-efdc52797490 -e sha1=62da54b4b83a96ca9976b7e1d0c8272a564a2208 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\'' |
pass | 7913655 | 2024-09-21 03:11:30 | 2024-09-21 09:21:51 | 2024-09-21 09:56:00 | 0:34:09 | 0:19:57 | 0:14:12 | smithi | main | centos | 9.stream | upgrade/cephfs/featureful_client/old_client/{bluestore-bitmap centos_9.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-from/squid 1-client 2-upgrade 3-compat_client/no}} | 3 | |
fail | 7913656 | 2024-09-21 03:11:31 | 2024-09-21 09:23:51 | 2024-09-21 10:25:46 | 1:01:55 | 0:51:37 | 0:10:18 | smithi | main | ubuntu | 22.04 | upgrade/reef-x/parallel/{0-random-distro$/{ubuntu_22.04} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi037 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=reef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
dead | 7913657 | 2024-09-21 03:11:32 | 2024-09-21 09:24:12 | 2024-09-21 09:33:20 | 0:09:08 | | | smithi | main | centos | 9.stream | upgrade/cephfs/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
SSH connection to smithi179 was lost: 'sudo yum install -y kernel' |
fail | 7913658 | 2024-09-21 03:11:33 | 2024-09-21 09:25:32 | 2024-09-21 10:10:41 | 0:45:09 | 0:34:39 | 0:10:30 | smithi | main | centos | 9.stream | upgrade/quincy-x/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
pass | 7913659 | 2024-09-21 03:11:34 | 2024-09-21 09:27:43 | 2024-09-21 09:44:43 | 0:17:00 | 0:08:09 | 0:08:51 | smithi | main | centos | 9.stream | upgrade/cephfs/nofs/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} no-mds-cluster overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn pg_health} tasks/{0-from/squid 1-upgrade}} | 1 | |
fail | 7913660 | 2024-09-21 03:11:35 | 2024-09-21 09:27:53 | 2024-09-21 10:55:52 | 1:27:59 | 1:18:12 | 0:09:47 | smithi | main | centos | 9.stream | upgrade/reef-x/stress-split/{0-distro/centos_9.stream 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi032 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=reef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 7913661 | 2024-09-21 03:11:36 | 2024-09-21 09:29:14 | 2024-09-21 11:01:57 | 1:32:43 | 1:21:00 | 0:11:43 | smithi | main | centos | 9.stream | upgrade/quincy-x/stress-split/{0-distro/centos_9.stream 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
pass | 7913662 | 2024-09-21 03:11:37 | 2024-09-21 09:30:54 | 2024-09-21 09:59:32 | 0:28:38 | 0:17:32 | 0:11:06 | smithi | main | centos | 9.stream | upgrade/cephfs/featureful_client/upgraded_client/{bluestore-bitmap centos_9.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-from/squid 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} | 3 | |
fail | 7913663 | 2024-09-21 03:11:38 | 2024-09-21 09:32:35 | 2024-09-21 11:37:50 | 2:05:15 | 1:56:02 | 0:09:13 | smithi | main | centos | 9.stream | upgrade/reef-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/rbd_api 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi070 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=reef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
pass | 7913664 | 2024-09-21 03:11:39 | 2024-09-21 09:32:35 | 2024-09-21 10:24:54 | 0:52:19 | 0:32:18 | 0:20:01 | smithi | main | centos | 9.stream | upgrade/cephfs/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
fail | 7913665 | 2024-09-21 03:11:41 | 2024-09-21 09:33:36 | 2024-09-21 11:28:11 | 1:54:35 | 1:43:50 | 0:10:45 | smithi | main | centos | 9.stream | upgrade/quincy-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/rbd_api 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi119 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 7913666 | 2024-09-21 03:11:42 | 2024-09-21 09:35:07 | 2024-09-21 11:29:40 | 1:54:33 | 1:43:39 | 0:10:54 | smithi | main | ubuntu | 22.04 | upgrade/reef-x/stress-split/{0-distro/ubuntu_22.04 0-roles 1-start 2-first-half-tasks/readwrite 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi003 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=reef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
pass | 7913667 | 2024-09-21 03:11:43 | 2024-09-21 09:35:47 | 2024-09-21 09:59:18 | 0:23:31 | 0:12:33 | 0:10:58 | smithi | main | centos | 9.stream | upgrade/cephfs/upgraded_client/{bluestore-bitmap centos_9.stream clusters/{1-mds-1-client-micro} conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn pg_health} tasks/{0-install/reef 1-mount/mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} 2-clients/kclient 3-workload/stress_tests/iozone}} | 2 | |
fail | 7913668 | 2024-09-21 03:11:44 | 2024-09-21 09:36:08 | 2024-09-21 09:55:17 | 0:19:09 | 0:07:41 | 0:11:28 | smithi | main | ubuntu | 22.04 | upgrade/quincy-x/stress-split/{0-distro/ubuntu_22.04 0-roles 1-start 2-first-half-tasks/readwrite 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} | 2 | |
Failure Reason:
Command failed on smithi104 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=17.2.7-1jammy cephadm=17.2.7-1jammy ceph-mds=17.2.7-1jammy ceph-mgr=17.2.7-1jammy ceph-common=17.2.7-1jammy ceph-fuse=17.2.7-1jammy ceph-test=17.2.7-1jammy radosgw=17.2.7-1jammy python3-rados=17.2.7-1jammy python3-rgw=17.2.7-1jammy python3-cephfs=17.2.7-1jammy python3-rbd=17.2.7-1jammy libcephfs2=17.2.7-1jammy libcephfs-dev=17.2.7-1jammy librados2=17.2.7-1jammy librbd1=17.2.7-1jammy rbd-fuse=17.2.7-1jammy' |
pass | 7913669 | 2024-09-21 03:11:45 | 2024-09-21 09:37:48 | 2024-09-21 10:04:04 | 0:26:16 | 0:14:56 | 0:11:20 | smithi | main | centos | 9.stream | upgrade/cephfs/featureful_client/old_client/{bluestore-bitmap centos_9.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-from/reef 1-client 2-upgrade 3-compat_client/yes}} | 3 | |
fail | 7913670 | 2024-09-21 03:11:46 | 2024-09-21 09:38:59 | 2024-09-21 11:31:31 | 1:52:32 | 1:39:09 | 0:13:23 | smithi | main | centos | 9.stream | upgrade/reef-x/stress-split/{0-distro/centos_9.stream 0-roles 1-start 2-first-half-tasks/snaps-few-objects 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=reef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 7913671 | 2024-09-21 03:11:47 | 2024-09-21 09:42:40 | 2024-09-21 10:02:04 | 0:19:24 | 0:08:55 | 0:10:29 | smithi | main | centos | 9.stream | upgrade/cephfs/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (30) after waiting for 300 seconds |
fail | 7913672 | 2024-09-21 03:11:48 | 2024-09-21 09:42:50 | 2024-09-21 11:33:34 | 1:50:44 | 1:38:56 | 0:11:48 | smithi | main | centos | 9.stream | upgrade/quincy-x/stress-split/{0-distro/centos_9.stream 0-roles 1-start 2-first-half-tasks/snaps-few-objects 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi100 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
pass | 7913673 | 2024-09-21 03:11:49 | 2024-09-21 09:44:21 | 2024-09-21 11:20:45 | 1:36:24 | 1:24:43 | 0:11:41 | smithi | main | centos | 9.stream | upgrade/cephfs/upgraded_client/{bluestore-bitmap centos_9.stream clusters/{1-mds-1-client-micro} conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn pg_health} tasks/{0-install/quincy 1-mount/mount/fuse 2-clients/fuse-upgrade 3-workload/stress_tests/kernel_untar_build}} | 2 |
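The dominant signal in this run is the repeated `cls/test_cls_rbd.sh` workunit failure across the reef-x and quincy-x jobs; the remaining failures are apt-get install errors, wait timeouts, and one lost SSH connection. A minimal triage sketch for grouping such failure reasons is shown below. The `signature` helper and the abbreviated sample strings are illustrative only, not part of teuthology or pulpito; masking hostnames and 40-hex commit hashes is an assumption about what varies between otherwise identical failures.

```python
import re
from collections import Counter

def signature(reason: str) -> str:
    """Collapse per-host and per-commit details so identical failures group together."""
    reason = re.sub(r"smithi\d+", "smithiNNN", reason)    # mask test-node hostnames
    reason = re.sub(r"\b[0-9a-f]{40}\b", "SHA1", reason)  # mask embedded commit hashes
    return reason

# Abbreviated failure reasons taken from the rows above.
reasons = [
    "Command failed (workunit test cls/test_cls_rbd.sh) on smithi042 with status 1",
    "Command failed (workunit test cls/test_cls_rbd.sh) on smithi022 with status 1",
    "Command failed on smithi187 with status 100",
    "reached maximum tries (30) after waiting for 300 seconds",
]

counts = Counter(signature(r) for r in reasons)
for sig, n in counts.most_common():
    print(f"{n}x {sig}")
```

Masking the commit hash matters here because the cephadm failure reasons embed the run's revision (62da54b4…) in the command line, which would otherwise split identical failures across runs.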