Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi148.front.sepia.ceph.com | smithi | True | True | 2024-04-24 03:10:17.425872 | scheduled_teuthology@teuthology | centos | 8 | x86_64 | /home/teuthworker/archive/teuthology-2024-04-24_01:08:04-upgrade:pacific-x-reef-distro-default-smithi/7670925 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7671141 | 2024-04-24 01:33:03 | 2024-04-24 01:44:36 | 2024-04-24 03:07:50 | 1:23:14 | 1:13:22 | 0:09:52 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/simple thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
Failure Reason: reached maximum tries (501) after waiting for 3000 seconds
running | 7670925 | 2024-04-24 01:09:36 | 2024-04-24 03:10:17 | 2024-04-24 03:51:28 | 0:42:07 | | | smithi | main | centos | 8.stream | upgrade:pacific-x/stress-split/{0-distro/centos_8.stream_container_tools 0-roles 1-start 2-first-half-tasks/rbd_api 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} | 2 |
pass | 7670563 | 2024-04-23 21:20:03 | 2024-04-24 00:52:06 | 2024-04-24 01:44:29 | 0:52:23 | 0:40:50 | 0:11:33 | smithi | main | ubuntu | 22.04 | rbd/device/{base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-low-osd-mem-target supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/diff-continuous-nbd} | 3 | |
pass | 7670467 | 2024-04-23 21:18:29 | 2024-04-23 23:09:35 | 2024-04-24 00:52:24 | 1:42:49 | 1:33:14 | 0:09:35 | smithi | main | centos | 9.stream | rbd/maintenance/{base/install clusters/{fixed-3 openstack} conf/{disable-pool-app} objectstore/bluestore-bitmap qemu/xfstests supported-random-distro$/{centos_latest} workloads/dynamic_features_no_cache} | 3 | |
pass | 7670449 | 2024-04-23 21:18:12 | 2024-04-23 22:46:45 | 2024-04-23 23:12:28 | 0:25:43 | 0:18:49 | 0:06:54 | smithi | main | centos | 9.stream | rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/ec extra-conf/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-low-osd-mem-target supported-random-distro$/{centos_latest} workloads/python_api_tests_with_defaults} | 3 | |
pass | 7670401 | 2024-04-23 21:17:26 | 2024-04-23 22:09:42 | 2024-04-23 22:45:08 | 0:35:26 | 0:23:59 | 0:11:27 | smithi | main | ubuntu | 22.04 | rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/none extra-conf/none min-compat-client/default msgr-failures/few objectstore/bluestore-stupid supported-random-distro$/{ubuntu_latest} workloads/python_api_tests_with_defaults} | 3 | |
pass | 7670210 | 2024-04-23 20:19:21 | 2024-04-23 21:05:46 | 2024-04-23 22:11:07 | 1:05:21 | 0:58:27 | 0:06:54 | smithi | main | centos | 9.stream | rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-comp-zstd policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{centos_latest} workloads/rbd-mirror-journal-stress-workunit} | 2 | |
pass | 7670167 | 2024-04-23 20:18:37 | 2024-04-23 20:19:29 | 2024-04-23 21:04:58 | 0:45:29 | 0:31:05 | 0:14:24 | smithi | main | ubuntu | 22.04 | rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/ec extra-conf/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} workloads/rbd_fio} | 3 | |
pass | 7670122 | 2024-04-23 17:45:39 | 2024-04-23 18:07:23 | 2024-04-23 18:51:37 | 0:44:14 | 0:31:19 | 0:12:55 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/direct_io}} | 3 | |
pass | 7670049 | 2024-04-23 17:19:06 | 2024-04-23 17:36:33 | 2024-04-23 18:11:49 | 0:35:16 | 0:22:59 | 0:12:17 | smithi | main | ubuntu | 22.04 | rgw/website/{clusters/fixed-2 frontend/beast http ignore-pg-availability overrides s3tests-branch tasks/s3tests-website ubuntu_latest} | 2 | |
pass | 7669973 | 2024-04-23 15:04:35 | 2024-04-23 18:51:21 | 2024-04-23 19:25:30 | 0:34:09 | 0:22:50 | 0:11:19 | smithi | main | ubuntu | 22.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile s3tests-branch tasks/rgw_s3tests ubuntu_latest} | 2 | |
fail | 7669905 | 2024-04-23 14:20:51 | 2024-04-23 17:22:02 | 2024-04-23 17:34:20 | 0:12:18 | 0:04:13 | 0:08:05 | smithi | main | centos | 9.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} tasks/repair_test} | 2 | |
Failure Reason: Command failed on smithi148 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'
fail | 7669729 | 2024-04-23 14:17:45 | 2024-04-23 15:48:06 | 2024-04-23 17:12:41 | 1:24:35 | 1:13:54 | 0:10:41 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: "2024-04-23T16:30:00.000089+0000 mon.a (mon.0) 1644 : cluster [WRN] Health detail: HEALTH_WARN nodeep-scrub flag(s) set" in cluster log
fail | 7669687 | 2024-04-23 14:17:00 | 2024-04-23 15:29:08 | 2024-04-23 15:42:25 | 0:13:17 | 0:05:03 | 0:08:14 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-low-osd-mem-target} tasks/dashboard} | 2 | |
Failure Reason: Command failed on smithi148 with status 1: 'sudo yum -y install ceph-mgr-dashboard'
pass | 7669647 | 2024-04-23 14:16:18 | 2024-04-23 15:01:58 | 2024-04-23 15:31:07 | 0:29:09 | 0:16:19 | 0:12:50 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/dedup-io-snaps} | 4 | |
pass | 7669614 | 2024-04-23 14:15:44 | 2024-04-23 14:30:15 | 2024-04-23 15:02:31 | 0:32:16 | 0:22:14 | 0:10:02 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-pool-snaps} | 4 | |
fail | 7669608 | 2024-04-23 14:15:37 | 2024-04-23 14:16:20 | 2024-04-23 14:29:49 | 0:13:29 | 0:05:23 | 0:08:06 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} tasks/e2e} | 2 | |
Failure Reason: Command failed on smithi157 with status 1: 'sudo yum -y install ceph-mgr-dashboard'
fail | 7669248 | 2024-04-22 22:47:04 | 2024-04-22 23:24:01 | 2024-04-22 23:37:52 | 0:13:51 | 0:07:17 | 0:06:34 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
Failure Reason: Command failed on smithi148 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c832ada6-0100-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi131:172.21.15.131=smithi131;smithi148:172.21.15.148=smithi148'"
fail | 7669201 | 2024-04-22 22:46:14 | 2024-04-22 23:08:02 | 2024-04-22 23:21:08 | 0:13:06 | 0:06:22 | 0:06:44 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: Command failed on smithi148 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 80e50022-00fe-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi131:172.21.15.131=smithi131;smithi148:172.21.15.148=smithi148'"
fail | 7669187 | 2024-04-22 22:45:59 | 2024-04-22 22:54:34 | 2024-04-22 23:07:49 | 0:13:15 | 0:06:34 | 0:06:41 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: Command failed on smithi148 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9ac59ea4-00fc-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi131:172.21.15.131=smithi131;smithi148:172.21.15.148=smithi148'"