Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi130.front.sepia.ceph.com | smithi | True | False | | | ubuntu | 22.04 | x86_64 | /home/teuthworker/archive/teuthology-2024-04-23_05:00:14-smoke-main-distro-default-smithi/7669484 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7669484 | | 2024-04-23 05:01:32 | 2024-04-23 05:01:32 | 2024-04-23 05:36:12 | 0:34:40 | 0:21:32 | 0:13:08 | smithi | main | ubuntu | 22.04 | smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/{0-install test/rados_python}} | 3 |
fail | 7669262 | | 2024-04-22 22:47:18 | 2024-04-22 23:39:19 | 2024-04-22 23:52:15 | 0:12:56 | 0:06:25 | 0:06:31 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 |
Failure Reason:
Command failed on smithi130 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d1bc11ee-0102-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi032:172.21.15.32=smithi032;smithi130:172.21.15.130=smithi130'"
fail | 7669211 | | 2024-04-22 22:46:24 | 2024-04-22 23:08:15 | 2024-04-22 23:26:13 | 0:17:58 | 0:09:55 | 0:08:03 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 |
Failure Reason:
Command failed on smithi064 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f33eecb4-00fe-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 7669156 | | 2024-04-22 22:11:17 | 2024-04-23 02:12:59 | 2024-04-23 02:58:59 | 0:46:00 | 0:35:22 | 0:10:38 | smithi | main | centos | 8.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
pass | 7669111 | | 2024-04-22 22:10:34 | 2024-04-23 01:48:29 | 2024-04-23 02:13:48 | 0:25:19 | 0:15:41 | 0:09:38 | smithi | main | centos | 8.stream | orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 |
pass | 7669082 | | 2024-04-22 22:10:07 | 2024-04-23 01:23:23 | 2024-04-23 01:48:30 | 0:25:07 | 0:18:41 | 0:06:26 | smithi | main | rhel | 8.6 | orch/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 |
pass | 7669041 | | 2024-04-22 22:09:29 | 2024-04-23 00:55:24 | 2024-04-23 01:23:23 | 0:27:59 | 0:17:03 | 0:10:56 | smithi | main | centos | 8.stream | orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 |
pass | 7668989 | | 2024-04-22 22:08:40 | 2024-04-23 00:20:31 | 2024-04-23 00:56:11 | 0:35:40 | 0:26:18 | 0:09:22 | smithi | main | centos | 8.stream | orch/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rotate-keys} | 2 |
pass | 7668959 | | 2024-04-22 21:32:49 | 2024-04-22 23:54:39 | 2024-04-23 00:20:57 | 0:26:18 | 0:16:53 | 0:09:25 | smithi | main | ubuntu | 22.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_suites_fsstress thrashosds-health} | 4 |
fail | 7668932 | | 2024-04-22 21:32:23 | 2024-04-22 22:37:17 | 2024-04-22 22:58:40 | 0:21:23 | 0:11:13 | 0:10:10 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_fsx thrashosds-health} | 4 |
Failure Reason:
Command failed (workunit test suites/fsx.sh) on smithi032 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c66b8bf2efd3f3988ac1851474c2f98eb2ca30d9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'
fail | 7668858 | | 2024-04-22 21:10:44 | 2024-04-22 22:05:55 | 2024-04-22 22:25:36 | 0:19:41 | 0:07:24 | 0:12:17 | smithi | main | ubuntu | 22.04 | orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 |
Failure Reason:
Command failed on smithi066 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f pull'
fail | 7668826 | | 2024-04-22 21:10:11 | 2024-04-22 21:50:21 | 2024-04-22 22:03:13 | 0:12:52 | 0:04:03 | 0:08:49 | smithi | main | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 3 |
Failure Reason:
Command failed on smithi066 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f pull'
fail | 7668712 | | 2024-04-22 20:12:59 | 2024-04-23 02:57:23 | 2024-04-23 03:44:04 | 0:46:41 | 0:38:13 | 0:08:28 | smithi | main | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason:
reached maximum tries (51) after waiting for 300 seconds
pass | 7668628 | | 2024-04-22 20:11:39 | 2024-04-22 20:54:54 | 2024-04-22 21:50:15 | 0:55:21 | 0:47:44 | 0:07:37 | smithi | main | centos | 9.stream | orch/cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 |
pass | 7668602 | | 2024-04-22 20:11:14 | 2024-04-22 20:34:52 | 2024-04-22 20:54:52 | 0:20:00 | 0:12:46 | 0:07:14 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/connectivity task/test_rgw_multisite} | 3 |
pass | 7668560 | | 2024-04-22 20:10:35 | 2024-04-22 20:12:03 | 2024-04-22 20:34:55 | 0:22:52 | 0:13:57 | 0:08:55 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 |
pass | 7668504 | | 2024-04-22 19:31:51 | 2024-04-22 19:32:41 | 2024-04-22 20:05:03 | 0:32:22 | 0:17:41 | 0:14:41 | smithi | main | ubuntu | 22.04 | rgw/upgrade/{1-install/reef/{distro$/{ubuntu_latest} install overrides} 2-setup 3-upgrade-sequence/rgws-then-osds cluster frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides} | 2 |
pass | 7668451 | | 2024-04-22 18:21:52 | 2024-04-22 18:22:08 | 2024-04-22 18:37:43 | 0:15:35 | 0:09:14 | 0:06:21 | smithi | main | centos | 9.stream | rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest}} | 1 |
pass | 7668411 | | 2024-04-22 14:52:29 | 2024-04-22 15:39:59 | 2024-04-22 16:29:32 | 0:49:33 | 0:36:57 | 0:12:36 | smithi | main | ubuntu | 22.04 | orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} | 5 |
fail | 7668383 | | 2024-04-22 14:52:00 | 2024-04-22 15:18:46 | 2024-04-22 15:39:21 | 0:20:35 | 0:11:39 | 0:08:56 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 3 |
Failure Reason:
"2024-04-22T15:34:12.906343+0000 mon.a (mon.0) 452 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.1 on smithi130 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log