Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi193.front.sepia.ceph.com | smithi | True | True | 2024-04-23 22:46:45.329601 | scheduled_teuthology@teuthology | ubuntu | 22.04 | x86_64 | /home/teuthworker/archive/teuthology-2024-04-23_21:16:02-rbd-squid-distro-default-smithi/7670448 |
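
The lock table above is a snapshot of the node record the lab keeps for smithi193. A minimal sketch of fetching the same fields programmatically, assuming the paddles REST service that backs this page exposes the node record at `/nodes/<name>/` (the base URL, path, and lowercase field names here are assumptions, not taken from this report):

```python
# Sketch only: fetch the node record behind the table above.
# ASSUMPTIONS: paddles is reachable at this base URL and serves /nodes/<name>/
# as JSON; verify both against your deployment before relying on this.
import requests

PADDLES = "http://paddles.front.sepia.ceph.com"  # assumed base URL
node = requests.get(f"{PADDLES}/nodes/smithi193.front.sepia.ceph.com/").json()

print(node["locked"], node["locked_by"], node["locked_since"])
print(node["os_type"], node["os_version"], node["arch"])
print(node["description"])  # archive path of the job currently holding the lock
```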
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
running | 7670448 | 2024-04-23 21:18:11 | 2024-04-23 22:45:45 | 2024-04-24 03:08:29 | 4:22:53 | | | smithi | main | ubuntu | 22.04 | rbd/encryption/{cache/writethrough clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/ec features/defaults msgr-failures/few objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} workloads/qemu_xfstests_none_luks2} | 3 |
fail | 7670211 | 2024-04-23 20:19:22 | | 2024-04-23 22:44:09 | | 5282 | | smithi | main | ubuntu | 22.04 | rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{ubuntu_latest} 4-cache-path 5-cache-mode/ssd 6-cache-size/5G 7-workloads/qemu_xfstests conf/{disable-pool-app}} | 2 |
Failure Reason: Command failed on smithi193 with status 1: 'test -f /home/ubuntu/cephtest/archive/qemu/client.0/success'
pass | 7670157 | 2024-04-23 20:18:27 | 2024-04-23 20:19:26 | 2024-04-23 21:06:19 | 0:46:53 | 0:36:54 | 0:09:59 | smithi | main | ubuntu | 22.04 | rbd/mirror/{base/install clients/{mirror-extra mirror} cluster/{2-node openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-comp-zstd supported-random-distro$/{ubuntu_latest} workloads/rbd-mirror-snapshot-workunit-minimum} | 2 | |
pass | 7670109 | 2024-04-23 17:45:35 | 2024-04-23 17:59:56 | 2024-04-23 18:40:27 | 0:40:31 | 0:32:31 | 0:08:00 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/fsx}} | 3 | |
pass | 7669996 | 2024-04-23 15:04:53 | 2024-04-23 19:03:23 | 2024-04-23 19:38:36 | 0:35:13 | 0:22:25 | 0:12:48 | smithi | main | ubuntu | 22.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated s3tests-branch tasks/rgw_s3tests ubuntu_latest} | 2 | |
pass | 7669961 | 2024-04-23 15:04:25 | 2024-04-23 18:40:35 | 2024-04-23 19:06:33 | 0:25:58 | 0:15:18 | 0:10:40 | smithi | main | ubuntu | 22.04 | rgw/thrash/{clusters/fixed-2 frontend/beast ignore-pg-availability install objectstore/bluestore-bitmap s3tests-branch thrasher/default thrashosds-health ubuntu_latest workload/rgw_bucket_quota} | 2 | |
pass | 7669901 | 2024-04-23 14:20:47 | 2024-04-23 17:22:01 | 2024-04-23 18:01:15 | 0:39:14 | 0:28:15 | 0:10:59 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 4 | |
fail | 7669879 | 2024-04-23 14:20:24 | 2024-04-23 17:06:26 | 2024-04-23 17:21:51 | 0:15:25 | 0:07:07 | 0:08:18 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/redirect_set_object} | 4 | |
Failure Reason: Command failed on smithi204 with status 1: 'sudo yum -y install ceph-mgr-dashboard'
pass | 7669837 | 2024-04-23 14:19:39 | 2024-04-23 16:41:04 | 2024-04-23 17:06:36 | 0:25:32 | 0:14:13 | 0:11:19 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
dead | 7669824 | 2024-04-23 14:19:25 | 2024-04-23 16:35:18 | 2024-04-23 16:44:14 | 0:08:56 | | | smithi | main | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/misc} | 1 |
Failure Reason: SSH connection to smithi193 was lost: 'sudo apt-get update'
dead | 7669823 | 2024-04-23 14:19:24 | 2024-04-23 16:35:18 | 2024-04-23 16:42:14 | 0:06:56 | | | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 |
Failure Reason: Error reimaging machines: Expected smithi193's OS to be centos 9 but found ubuntu 22.04
fail | 7669791 | 2024-04-23 14:18:51 | 2024-04-23 16:18:55 | 2024-04-23 16:30:38 | 0:11:43 | 0:05:02 | 0:06:41 | smithi | main | centos | 9.stream | rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest}} | 2 | |
Failure Reason: Command failed on smithi191 with status 1: 'sudo yum -y install ceph-mgr-dashboard'
fail | 7669684 | 2024-04-23 14:16:57 | 2024-04-23 15:19:15 | 2024-04-23 16:06:16 | 0:47:01 | 0:35:36 | 0:11:25 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: "2024-04-23T15:50:00.000082+0000 mon.a (mon.0) 1047 : cluster [WRN] Health detail: HEALTH_WARN noscrub flag(s) set" in cluster log
pass | 7669641 | 2024-04-23 14:16:12 | 2024-04-23 15:01:56 | 2024-04-23 15:20:50 | 0:18:54 | 0:12:53 | 0:06:01 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
fail | 7669626 | 2024-04-23 14:15:56 | 2024-04-23 14:45:31 | 2024-04-23 14:58:49 | 0:13:18 | 0:04:57 | 0:08:21 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/cache-snaps} | 4 | |
Failure Reason: Command failed on smithi162 with status 1: 'sudo yum -y install ceph-mgr-dashboard'
pass | 7669594 | 2024-04-23 14:05:16 | 2024-04-23 14:05:45 | 2024-04-23 14:41:04 | 0:35:19 | 0:26:36 | 0:08:43 | smithi | main | centos | 9.stream | fs/snaps/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/workunit/snaps} | 2 | |
pass | 7669534 | 2024-04-23 09:50:16 | 2024-04-23 09:51:20 | 2024-04-23 10:39:18 | 0:47:58 | 0:37:32 | 0:10:26 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/direct_io}} | 3 | |
fail | 7669244 | 2024-04-22 22:46:59 | 2024-04-22 23:24:00 | 2024-04-22 23:39:33 | 0:15:33 | 0:07:27 | 0:08:06 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: Command failed on smithi193 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c80a7cb4-0100-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi110:172.21.15.110=smithi110;smithi193:172.21.15.193=smithi193'"
fail | 7669192 | 2024-04-22 22:46:04 | 2024-04-22 23:01:17 | 2024-04-22 23:22:11 | 0:20:54 | 0:10:20 | 0:10:34 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli_mon} | 5 | |
Failure Reason: Command failed on smithi026 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7875b7e2-00fe-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7669122 | 2024-04-22 22:10:45 | 2024-04-23 01:53:24 | 2024-04-23 03:14:26 | 1:21:02 | 1:09:19 | 0:11:43 | smithi | main | centos | 8.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (51) after waiting for 300 seconds
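
For reading the job table: the derived timing columns relate as Runtime = Updated - Started and In Waiting = Runtime - Duration (for example, job 7670157: 0:46:53 - 0:36:54 = 0:09:59). Below is a small sketch of that arithmetic, using values copied from that row; the helper name is purely illustrative:

```python
# Sketch of how the Runtime / Duration / In Waiting columns relate,
# using job 7670157 from the table above as the example.
from datetime import datetime, timedelta

def parse_hms(s: str) -> timedelta:
    """Parse an H:MM:SS string such as '0:36:54' into a timedelta."""
    h, m, sec = (int(x) for x in s.split(":"))
    return timedelta(hours=h, minutes=m, seconds=sec)

started = datetime.fromisoformat("2024-04-23 20:19:26")
updated = datetime.fromisoformat("2024-04-23 21:06:19")
duration = parse_hms("0:36:54")      # time the job itself was running

runtime = updated - started          # 0:46:53, the Runtime column
in_waiting = runtime - duration      # 0:09:59, the In Waiting column
print(runtime, in_waiting)
```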