Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi029.front.sepia.ceph.com | smithi | True | True | 2024-04-25 07:25:56.321159 | scheduled_teuthology@teuthology | ubuntu | 22.04 | x86_64 | /home/teuthworker/archive/teuthology-2024-04-23_22:16:02-rbd-reef-distro-default-smithi/7670736 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7672828 | 2024-04-25 03:55:35 | 2024-04-25 05:11:27 | 2024-04-25 06:00:24 | 0:48:57 | 0:40:45 | 0:08:12 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (51) after waiting for 300 seconds
pass | 7672798 | 2024-04-25 03:55:05 | 2024-04-25 04:47:32 | 2024-04-25 05:11:18 | 0:23:46 | 0:14:05 | 0:09:41 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} | 2 | |
pass | 7672409 | 2024-04-24 21:48:27 | 2024-04-25 02:26:47 | 2024-04-25 03:41:15 | 1:14:28 | 0:54:18 | 0:20:10 | smithi | main | centos | 8.stream | krbd/thrash/{bluestore-bitmap ceph/ceph clusters/fixed-3 conf ms_mode$/{legacy} thrashers/backoff thrashosds-health workloads/krbd_diff_continuous} | 3 | |
pass | 7672327 | 2024-04-24 21:27:50 | 2024-04-25 01:19:40 | 2024-04-25 02:24:36 | 1:04:56 | 0:53:45 | 0:11:11 | smithi | main | ubuntu | 22.04 | fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health session_timeout} ranks/1 tasks/dbench validater/lockdep} | 2 | |
pass | 7672294 | 2024-04-24 21:27:18 | 2024-04-25 00:44:01 | 2024-04-25 01:19:37 | 0:35:36 | 0:28:48 | 0:06:48 | smithi | main | centos | 9.stream | fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/snapshot}} | 2 | |
pass | 7672260 | 2024-04-24 21:26:43 | 2024-04-25 00:09:22 | 2024-04-25 00:44:12 | 0:34:50 | 0:22:38 | 0:12:12 | smithi | main | ubuntu | 22.04 | fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/misc}} | 2 | |
pass | 7672211 | 2024-04-24 21:26:00 | 2024-04-24 23:27:34 | 2024-04-25 00:09:20 | 0:41:46 | 0:32:07 | 0:09:39 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/direct_io}} | 3 | |
pass | 7672190 | 2024-04-24 21:25:39 | 2024-04-24 23:07:00 | 2024-04-24 23:29:46 | 0:22:46 | 0:14:24 | 0:08:22 | smithi | main | centos | 9.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/multimds_misc} | 2 | |
fail | 7671854 | 2024-04-24 15:50:54 | 2024-04-24 17:08:06 | 2024-04-24 17:36:52 | 0:28:46 | 0:16:51 | 0:11:55 | smithi | main | ubuntu | 22.04 | orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} | 5 | |
Failure Reason: "2024-04-24T17:32:27.505046+0000 mon.a (mon.0) 101 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7671824 | 2024-04-24 15:50:24 | 2024-04-24 16:51:03 | 2024-04-24 17:04:23 | 0:13:20 | 0:06:48 | 0:06:32 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason: Command failed on smithi107 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a4d0912-025c-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi029:172.21.15.29=smithi029;smithi107:172.21.15.107=smithi107'"
pass | 7671794 | 2024-04-24 15:49:53 | 2024-04-24 16:25:18 | 2024-04-24 16:51:00 | 0:25:42 | 0:12:57 | 0:12:45 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} | 3 | |
pass | 7671668 | 2024-04-24 14:13:23 | 2024-04-24 18:08:47 | 2024-04-24 20:34:43 | 2:25:56 | 2:13:33 | 0:12:23 | smithi | main | ubuntu | 20.04 | rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-hybrid 4-supported-random-distro$/{ubuntu_latest} 5-pool/ec-data-pool 6-prepare/qcow2-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup conf/{disable-pool-app}} | 3 | |
pass | 7671635 | 2024-04-24 14:12:48 | 2024-04-24 17:39:09 | 2024-04-24 18:08:56 | 0:29:47 | 0:16:59 | 0:12:48 | smithi | main | ubuntu | 20.04 | rbd/device/{base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/rbd_fsx_nbd} | 3 | |
pass | 7671555 | 2024-04-24 14:11:20 | 2024-04-24 15:03:34 | 2024-04-24 16:31:37 | 1:28:03 | 1:18:25 | 0:09:38 | smithi | main | centos | 8.stream | rbd/encryption/{cache/writeback clusters/{fixed-3 openstack} conf/{disable-pool-app} features/defaults msgr-failures/few objectstore/bluestore-comp-zlib pool/ec-cache-pool supported-random-distro$/{centos_8} workloads/qemu_xfstests_luks1} | 3 | |
pass | 7671533 | 2024-04-24 14:10:56 | 2024-04-24 14:33:09 | 2024-04-24 15:03:49 | 0:30:40 | 0:17:39 | 0:13:01 | smithi | main | ubuntu | 20.04 | rbd/thrash/{base/install clusters/{fixed-2 openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-comp-lz4 supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/rbd_fsx_deep_copy} | 2 | |
pass | 7671485 | 2024-04-24 13:01:08 | 2024-04-24 13:57:14 | 2024-04-24 14:33:05 | 0:35:51 | 0:23:53 | 0:11:58 | smithi | main | ubuntu | 22.04 | rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/vault_transit 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability s3tests-branch ubuntu_latest} | 1 | |
pass | 7671441 | 2024-04-24 13:00:28 | 2024-04-24 13:13:54 | 2024-04-24 13:57:30 | 0:43:36 | 0:36:46 | 0:06:50 | smithi | main | centos | 9.stream | rgw/verify/{0-install accounts$/{main-tenant} clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/replicated s3tests-branch sharding$/{single} striping$/{stripe-greater-than-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/lockdep} | 2 | |
fail | 7671324 | 2024-04-24 11:41:55 | 2024-04-24 12:57:12 | 2024-04-24 13:11:47 | 0:14:35 | 0:06:54 | 0:07:41 | smithi | main | centos | 9.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} | 3 | |
Failure Reason: Command failed on smithi029 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:23fcfb96e7e1a49d12a94e3f87a8e3f06db2a1ec ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid adc509be-023b-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 7671266 | 2024-04-24 09:58:54 | 2024-04-24 09:59:28 | 2024-04-24 10:40:19 | 0:40:51 | 0:28:42 | 0:12:09 | smithi | main | centos | 8.stream | fs:upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} | 3 | |
pass | 7671239 | 2024-04-24 09:12:28 | 2024-04-24 09:29:13 | 2024-04-24 10:00:34 | 0:31:21 | 0:21:27 | 0:09:54 | smithi | main | centos | 9.stream | rados:thrash-erasure-code-crush-4-nodes/{arch/x86_64 ceph mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=8-m=6-crush} | 4 |
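In the job table above, the Runtime column is the sum of Duration (time the job actually ran) and In Waiting (time spent queued on the machine). A minimal sketch checking that relationship against a few rows copied from the table (the `parse_hms` helper name is ours, not part of teuthology):

```python
def parse_hms(s):
    """Convert an H:MM:SS string, as shown in the job table, to seconds."""
    h, m, sec = (int(x) for x in s.split(":"))
    return h * 3600 + m * 60 + sec

# (runtime, duration, in_waiting) triples taken from the job table above
rows = [
    ("0:48:57", "0:40:45", "0:08:12"),  # job 7672828
    ("0:23:46", "0:14:05", "0:09:41"),  # job 7672798
    ("1:14:28", "0:54:18", "0:20:10"),  # job 7672409
]

for runtime, duration, waiting in rows:
    # Runtime should equal Duration + In Waiting for every job row
    assert parse_hms(runtime) == parse_hms(duration) + parse_hms(waiting)
```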