Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi113.front.sepia.ceph.com smithi True True 2024-04-24 22:46:29.506655 scheduled_teuthology@teuthology x86_64 /home/teuthworker/archive/teuthology-2024-04-24_21:24:02-fs-squid-distro-default-smithi/7672170
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
waiting 7672170 2024-04-24 21:25:19 2024-04-24 22:44:49 2024-04-24 22:46:30 0:03:32 0:03:32 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/postgres}} 3
pass 7672135 2024-04-24 21:24:51 2024-04-24 22:19:04 2024-04-24 22:45:18 0:26:14 0:13:56 0:12:18 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
fail 7671848 2024-04-24 15:50:48 2024-04-24 17:08:04 2024-04-24 17:46:40 0:38:36 0:27:57 0:10:39 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

"2024-04-24T17:28:24.010817+0000 mon.smithi113 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671814 2024-04-24 15:50:14 2024-04-24 16:46:18 2024-04-24 17:00:24 0:14:06 0:06:28 0:07:38 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

Command failed on smithi139 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 97666382-025b-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi113:172.21.15.113=smithi113;smithi139:172.21.15.139=smithi139'"

pass 7671750 2024-04-24 15:14:49 2024-04-24 19:25:12 2024-04-24 19:59:45 0:34:33 0:18:49 0:15:44 smithi main centos 8.stream krbd/rbd/{bluestore-bitmap clusters/fixed-3 conf ms_mode/secure msgr-failures/many tasks/rbd_workunit_suites_fsx} 3
pass 7671720 2024-04-24 15:14:17 2024-04-24 18:55:12 2024-04-24 19:27:14 0:32:02 0:15:27 0:16:35 smithi main centos 8.stream krbd/rbd-nomount/{bluestore-bitmap clusters/fixed-3 conf install/ceph ms_mode/secure msgr-failures/few tasks/krbd_huge_osdmap} 3
fail 7671698 2024-04-24 15:13:54 2024-04-24 18:23:49 2024-04-24 18:47:46 0:23:57 0:12:50 0:11:07 smithi main centos 8.stream krbd/basic/{bluestore-bitmap ceph/ceph clusters/fixed-1 conf ms_mode/legacy$/{legacy-rxbounce} tasks/krbd_whole_object_zeroout} 1
Failure Reason:

Command failed on smithi113 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/ceph/ceph-ci.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 330067dbce47655d1f762ef14d546d2fde16a6a9'

pass 7671654 2024-04-24 14:13:08 2024-04-24 17:54:39 2024-04-24 18:24:05 0:29:26 0:23:12 0:06:14 smithi main rhel 8.6 rbd/device/{base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-comp-zstd supported-random-distro$/{rhel_8} thrashers/cache thrashosds-health workloads/rbd_nbd} 3
pass 7671507 2024-04-24 14:10:28 2024-04-24 14:11:37 2024-04-24 16:46:49 2:35:12 2:23:44 0:11:28 smithi main ubuntu 20.04 rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-hybrid 4-supported-random-distro$/{ubuntu_latest} 5-pool/ec-data-pool 6-prepare/qcow2-http 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup conf/{disable-pool-app}} 3
pass 7671286 2024-04-24 11:30:38 2024-04-24 11:37:27 2024-04-24 14:12:06 2:34:39 2:24:30 0:10:09 smithi main centos 9.stream rados:standalone/{supported-random-distro$/{centos_latest} workloads/osd-backfill} 1
pass 7671098 2024-04-24 01:17:57 2024-04-24 19:56:07 2024-04-24 22:21:40 2:25:33 2:15:39 0:09:54 smithi main rhel 8.6 upgrade:quincy-x/stress-split/{0-distro/rhel_8.6_container_tools_3.0 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} 2
fail 7669549 2024-04-23 12:16:55 2024-04-23 12:18:08 2024-04-23 12:47:27 0:29:19 0:21:33 0:07:46 smithi main centos 9.stream rgw/notifications/{beast bluestore-bitmap fixed-2 ignore-pg-availability overrides tasks/kafka/{0-install supported-distros/{centos_latest} test_kafka}} 2
Failure Reason:

Command failed (bucket notification tests against different endpoints) on smithi070 with status 1: 'BNTESTS_CONF=/home/ubuntu/cephtest/ceph/src/test/rgw/bucket_notification/bn-tests.client.0.conf /home/ubuntu/cephtest/ceph/src/test/rgw/bucket_notification/virtualenv/bin/python -m nose -s /home/ubuntu/cephtest/ceph/src/test/rgw/bucket_notification/test_bn.py -v -a kafka_test'

fail 7669502 2024-04-23 09:50:06 2024-04-23 09:50:57 2024-04-23 10:51:52 1:00:55 0:51:04 0:09:51 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/fs/misc}} 3
Failure Reason:

error during quiesce thrashing: Error quiescing root: 110 (ETIMEDOUT)

pass 7669470 2024-04-23 05:01:18 2024-04-23 05:01:19 2024-04-23 05:28:55 0:27:36 0:15:37 0:11:59 smithi main ubuntu 22.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/{0-install test/cfuse_workunit_suites_fsstress}} 3
pass 7669459 2024-04-23 01:24:10 2024-04-23 01:28:11 2024-04-23 02:01:28 0:33:17 0:23:35 0:09:42 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/simple thrashosds-health workloads/radosbench-high-concurrency} 2
fail 7669256 2024-04-22 22:47:12 2024-04-22 23:39:17 2024-04-22 23:53:39 0:14:22 0:06:06 0:08:16 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

Command failed on smithi186 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 00653638-0103-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi113:172.21.15.113=smithi113;smithi186:172.21.15.186=smithi186'"

fail 7669216 2024-04-22 22:46:29 2024-04-22 23:08:17 2024-04-22 23:33:24 0:25:07 0:14:30 0:10:37 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi113 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=43be020184947e53516056c9931e1ac5bdbbb1a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

dead 7669203 2024-04-22 22:46:16 2024-04-22 23:08:03 2024-04-22 23:09:17 0:01:14 smithi main ubuntu 22.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi113

pass 7669136 2024-04-22 22:10:58 2024-04-23 02:01:00 2024-04-23 02:27:12 0:26:12 0:15:51 0:10:21 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/classic task/test_extra_daemon_features} 2
pass 7669033 2024-04-22 22:09:21 2024-04-23 00:51:50 2024-04-23 01:29:34 0:37:44 0:25:51 0:11:53 smithi main ubuntu 20.04 orch/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
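
Each job row above is one flattened line in the header's field order. The three duration-style columns (Runtime, Duration, In Waiting) do not appear on every row; where all three are present, Runtime is the sum of the other two. Below is a minimal parsing sketch for these flattened rows; it is illustrative only (not part of the teuthology or pulpito tooling), assumes exactly this layout, and uses field names taken from the header row.

import re

_DURATION = re.compile(r"^\d+:\d{2}:\d{2}$")

def parse_job_row(line: str) -> dict:
    """Parse one flattened job row (status ... nodes) from the listing above."""
    t = line.split()
    job = {
        "status": t[0],
        "job_id": t[1],
        "posted": f"{t[2]} {t[3]}",    # date + time
        "started": f"{t[4]} {t[5]}",
        "updated": f"{t[6]} {t[7]}",
    }
    # Runtime / Duration / In Waiting: one to three H:MM:SS tokens,
    # depending on how far the job got (the "dead" row carries only one).
    i = 8
    durations = []
    while i < len(t) and _DURATION.match(t[i]):
        durations.append(t[i])
        i += 1
    job["durations"] = durations
    job["machine_type"] = t[i]           # e.g. "smithi"
    job["teuthology_branch"] = t[i + 1]  # e.g. "main"
    job["os_type"] = t[i + 2]
    job["os_version"] = t[i + 3]
    job["description"] = " ".join(t[i + 4:-1])  # the long suite/fragment path
    job["nodes"] = int(t[-1])
    return job

# Example: the krbd failure from 2024-04-24 above.
row = ("fail 7671698 2024-04-24 15:13:54 2024-04-24 18:23:49 2024-04-24 18:47:46 "
       "0:23:57 0:12:50 0:11:07 smithi main centos 8.stream "
       "krbd/basic/{bluestore-bitmap ceph/ceph clusters/fixed-1 conf "
       "ms_mode/legacy$/{legacy-rxbounce} tasks/krbd_whole_object_zeroout} 1")
print(parse_job_row(row)["job_id"])  # -> 7671698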