Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi110.front.sepia.ceph.com | smithi | True | False | | | centos | 9 | x86_64 | /home/teuthworker/archive/teuthology-2024-04-26_20:40:14-rgw-main-distro-default-smithi/7675416 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7675710 | 2024-04-26 22:41:02 | 2024-04-27 04:26:57 | 2024-04-27 05:39:00 | 1:12:03 | 1:05:41 | 0:06:22 | smithi | main | rhel | 8.6 | rgw/verify/{0-install clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec-profile s3tests-branch sharding$/{single} striping$/{stripe-greater-than-chunk} supported-random-distro$/{rhel_8} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} | 2 | |
pass | 7675684 | 2024-04-26 21:41:24 | 2024-04-27 03:22:51 | 2024-04-27 04:27:39 | 1:04:48 | 0:56:04 | 0:08:44 | smithi | main | centos | 9.stream | rgw/verify/{0-install accounts$/{tenant} clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec-profile s3tests-branch sharding$/{single} striping$/{stripe-equals-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} | 2 | |
pass | 7675650 | 2024-04-26 21:40:50 | 2024-04-27 02:59:25 | 2024-04-27 03:23:08 | 0:23:43 | 0:14:44 | 0:08:59 | smithi | main | ubuntu | 22.04 | rgw/singleton/{all/radosgw-admin frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 7675606 | 2024-04-26 21:11:36 | 2024-04-27 02:36:25 | 2024-04-27 02:59:21 | 0:22:56 | 0:17:22 | 0:05:34 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
fail | 7675465 | 2024-04-26 21:09:15 | 2024-04-27 01:02:59 | 2024-04-27 02:23:58 | 1:20:59 | 1:11:37 | 0:09:22 | smithi | main | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
Command failed on smithi110 with status 1: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7eb047ca-0433-11ef-bc93-c7b262605968 -e sha1=b22e2ebdeb24376882b7bda2a7329c8cccc2276a -- bash -c 'ceph orch ps'"
pass | 7675416 | 2024-04-26 20:43:14 | 2024-04-27 09:27:10 | 2024-04-27 10:14:14 | 0:47:04 | 0:40:45 | 0:06:19 | smithi | main | centos | 9.stream | rgw/verify/{0-install accounts$/{none} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec-profile s3tests-branch sharding$/{single} striping$/{stripe-equals-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/lockdep} | 2 | |
fail | 7675298 | 2024-04-26 19:35:20 | 2024-04-26 23:07:35 | 2024-04-27 00:23:52 | 1:16:17 | 1:07:41 | 0:08:36 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/kernel_untar_build}} | 3 | |
Failure Reason:
error during scrub thrashing: rank damage found: {'backtrace'}
fail | 7675084 | 2024-04-26 18:22:48 | 2024-04-27 00:31:38 | 2024-04-27 00:55:01 | 0:23:23 | 0:14:45 | 0:08:38 | smithi | main | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mon} | 1 | |
Failure Reason:
Command failed (workunit test mon/mkfs.sh) on smithi110 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5d349943c59c9485df060d6adb0594f3940ec0eb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mkfs.sh'
dead | 7674683 | 2024-04-26 10:55:04 | 2024-04-26 10:58:58 | 2024-04-26 23:10:49 | 12:11:51 | | | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/postgres}} | 3 |
Failure Reason:
hit max job timeout
pass | 7674654 | 2024-04-26 07:24:01 | 2024-04-26 07:42:32 | 2024-04-26 08:18:09 | 0:35:37 | 0:24:48 | 0:10:49 | smithi | main | ubuntu | 22.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated s3tests-branch tasks/rgw_s3tests ubuntu_latest} | 2 | |
fail | 7674575 | 2024-04-26 02:09:12 | 2024-04-26 04:26:33 | 2024-04-26 07:34:55 | 3:08:22 | 3:00:43 | 0:07:39 | smithi | main | centos | 9.stream | upgrade/reef-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/readwrite 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} | 2 | |
Failure Reason:
"2024-04-26T05:00:00.000164+0000 mon.a (mon.0) 649 : cluster 3 [WRN] OSDMAP_FLAGS: noscrub flag(s) set" in cluster log |
||||||||||||||
pass | 7674431 | 2024-04-26 01:28:20 | 2024-04-26 03:18:11 | 2024-04-26 04:27:17 | 1:09:06 | 1:03:21 | 0:05:45 | smithi | main | centos | 9.stream | rados/standalone/{supported-random-distro$/{centos_latest} workloads/misc} | 1 | |
pass | 7674372 | 2024-04-26 01:27:17 | 2024-04-26 02:48:23 | 2024-04-26 03:17:53 | 0:29:30 | 0:20:03 | 0:09:27 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7674338 | 2024-04-26 01:26:40 | 2024-04-26 02:31:28 | 2024-04-26 02:48:18 | 0:16:50 | 0:10:36 | 0:06:14 | smithi | main | centos | 9.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream_runc} 1-start 2-services/basic 3-final} | 1 | |
fail | 7674254 | 2024-04-26 01:03:35 | 2024-04-26 01:21:54 | 2024-04-26 02:17:04 | 0:55:10 | 0:45:11 | 0:09:59 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/fs/test_o_trunc}} | 3 | |
Failure Reason:
"2024-04-26T01:50:00.000295+0000 mon.a (mon.0) 1036 : cluster [WRN] fs cephfs has 3 MDS online, but wants 5" in cluster log |
||||||||||||||
pass | 7674121 | 2024-04-25 22:33:16 | 2024-04-26 10:13:09 | 2024-04-26 10:59:23 | 0:46:14 | 0:39:34 | 0:06:40 | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/rhel_8 tasks/rados_api_tests thrashosds-health} | 4 | |
pass | 7674098 | 2024-04-25 22:32:53 | 2024-04-26 09:46:26 | 2024-04-26 10:13:44 | 0:27:18 | 0:16:40 | 0:10:38 | smithi | main | ubuntu | 22.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/ubuntu_latest tasks/admin_socket_objecter_requests thrashosds-health} | 4 | |
fail | 7674073 | 2024-04-25 21:33:03 | 2024-04-25 22:38:47 | 2024-04-25 22:57:26 | 0:18:39 | 0:11:10 | 0:07:29 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_fsx thrashosds-health} | 4 | |
Failure Reason:
Command failed (workunit test suites/fsx.sh) on smithi052 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b22e2ebdeb24376882b7bda2a7329c8cccc2276a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'
pass | 7674061 | 2024-04-25 21:32:51 | 2024-04-25 22:22:50 | 2024-04-25 22:40:00 | 0:17:10 | 0:10:19 | 0:06:51 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_pjd thrashosds-health} | 4 | |
pass | 7674047 | 2024-04-25 21:32:37 | 2024-04-25 22:01:21 | 2024-04-25 22:23:03 | 0:21:42 | 0:11:27 | 0:10:15 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_pjd thrashosds-health} | 4 |
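
In the rows above, the Runtime column appears to be the sum of Duration and In Waiting (e.g. for job 7675710, 1:05:41 + 0:06:22 = 1:12:03). A minimal standard-library sketch that parses the `H:MM:SS` timing values and checks that relationship (the job values are taken from the table; the helper name is illustrative):

```python
from datetime import timedelta

def parse_hms(value: str) -> timedelta:
    """Parse an H:MM:SS string such as '1:12:03' into a timedelta."""
    hours, minutes, seconds = (int(part) for part in value.split(":"))
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

# Job 7675710: Runtime, Duration, In Waiting from the table above.
runtime, duration, in_waiting = "1:12:03", "1:05:41", "0:06:22"
assert parse_hms(runtime) == parse_hms(duration) + parse_hms(in_waiting)
```

The dead job (7674683) is the exception: it hit the max job timeout, so only Runtime (Updated minus Started, 12:11:51) is reported and the Duration and In Waiting cells are empty.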