Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi070.front.sepia.ceph.com | smithi | True | True | 2024-04-23 13:17:39.739198 | adking@teuthology | centos | 9 | x86_64 | None |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7669549 | 2024-04-23 12:16:55 | 2024-04-23 12:18:08 | 2024-04-23 12:47:27 | 0:29:19 | 0:21:33 | 0:07:46 | smithi | main | centos | 9.stream | rgw/notifications/{beast bluestore-bitmap fixed-2 ignore-pg-availability overrides tasks/kafka/{0-install supported-distros/{centos_latest} test_kafka}} | 2 |

Failure Reason: Command failed (bucket notification tests against different endpoints) on smithi070 with status 1: 'BNTESTS_CONF=/home/ubuntu/cephtest/ceph/src/test/rgw/bucket_notification/bn-tests.client.0.conf /home/ubuntu/cephtest/ceph/src/test/rgw/bucket_notification/virtualenv/bin/python -m nose -s /home/ubuntu/cephtest/ceph/src/test/rgw/bucket_notification/test_bn.py -v -a kafka_test'
fail | 7669503 | 2024-04-23 09:50:06 | 2024-04-23 09:50:57 | 2024-04-23 10:57:17 | 1:06:20 | 0:56:47 | 0:09:33 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/fs/misc}} | 3 |

Failure Reason: error during quiesce thrashing: Error quiescing root: 110 (ETIMEDOUT)
pass | 7669471 | 2024-04-23 05:01:19 | 2024-04-23 05:28:10 | | 977 | | | smithi | main | ubuntu | 22.04 | smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/{0-install test/cfuse_workunit_suites_iozone}} | 3 |
fail | 7669257 | 2024-04-22 22:47:13 | 2024-04-22 23:39:17 | 2024-04-23 00:06:46 | 0:27:29 | 0:15:59 | 0:11:30 | smithi | main | ubuntu | 22.04 | orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} | 5 |

Failure Reason: "2024-04-23T00:03:20.892648+0000 mon.a (mon.0) 101 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
pass | 7669153 | 2024-04-22 22:11:14 | 2024-04-23 02:11:38 | 2024-04-23 02:55:17 | 0:43:39 | 0:37:12 | 0:06:27 | smithi | main | rhel | 8.6 | orch/cephadm/no-agent-workunits/{0-distro/rhel_8.6_container_tools_rhel8 mon_election/connectivity task/test_orch_cli_mon} | 5 |
pass | 7669112 | 2024-04-22 22:10:35 | 2024-04-23 01:48:39 | 2024-04-23 02:11:59 | 0:23:20 | 0:14:02 | 0:09:18 | smithi | main | centos | 8.stream | orch/cephadm/smoke-small/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 3 |
pass | 7669036 | 2024-04-22 22:09:24 | 2024-04-23 00:52:52 | 2024-04-23 01:48:51 | 0:55:59 | 0:45:21 | 0:10:38 | smithi | main | ubuntu | 20.04 | orch/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} | 2 |
pass | 7668971 | 2024-04-22 21:33:01 | 2024-04-23 00:10:03 | 2024-04-23 00:52:44 | 0:42:41 | 0:32:08 | 0:10:33 | smithi | main | ubuntu | 22.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_misc thrashosds-health} | 4 |
fail | 7668708 | 2024-04-22 20:12:55 | 2024-04-23 02:55:11 | 2024-04-23 03:12:51 | 0:17:40 | 0:07:32 | 0:10:08 | smithi | main | ubuntu | 22.04 | orch/rook/smoke/{0-distro/ubuntu_22.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/3-node k8s/1.21 net/host rook/1.7.2} | 3 |

Failure Reason: Command failed on smithi055 with status 100: "sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl && sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg && echo 'deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt update && sudo apt install -y kubelet kubeadm kubectl bridge-utils"
fail | 7668218 | 2024-04-22 09:22:35 | 2024-04-22 09:26:49 | 2024-04-22 09:54:14 | 0:27:25 | 0:19:35 | 0:07:50 | smithi | main | centos | 9.stream | rbd:nvmeof/{base/install centos_latest conf/{disable-pool-app} workloads/nvmeof_thrash} | 4 |

Failure Reason: Command failed (workunit test rbd/nvmeof_basic_tests.sh) on smithi092 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.3/client.3/tmp && cd -- /home/ubuntu/cephtest/mnt.3/client.3/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bf9581dbbd9e682c36bac5ea9c58c734b61fd9c9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="3" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.3 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.3 CEPH_MNT=/home/ubuntu/cephtest/mnt.3 RBD_IMAGE_PREFIX=myimage RBD_POOL=mypool timeout 3h /home/ubuntu/cephtest/clone.client.3/qa/workunits/rbd/nvmeof_basic_tests.sh'
pass | 7668208 | 2024-04-22 07:08:56 | 2024-04-22 07:22:55 | 2024-04-22 07:55:48 | 0:32:53 | 0:26:11 | 0:06:42 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/simple thrashosds-health workloads/snaps-few-objects-localized} | 2 |
fail | 7668157 | 2024-04-22 05:53:53 | 2024-04-22 06:51:26 | 2024-04-22 07:12:38 | 0:21:12 | 0:13:09 | 0:08:03 | smithi | main | centos | 9.stream | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/with-quiesce 2-workunit/fs/snaps}} | 2 |

Failure Reason: Command failed (workunit test fs/snaps/snap-rm-diff.sh) on smithi044 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=86f7587a5a09af35f0895e1a2d08527638fae697 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snap-rm-diff.sh'
fail | 7668133 | 2024-04-22 05:53:46 | 2024-04-22 06:18:38 | 2024-04-22 06:39:49 | 0:21:11 | 0:13:13 | 0:07:58 | smithi | main | centos | 9.stream | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/with-quiesce 2-workunit/fs/trivial_sync}} | 2 |

Failure Reason: Command failed on smithi044 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs dump --format=json'
pass | 7668079 | 2024-04-22 00:32:18 | 2024-04-22 03:59:17 | 2024-04-22 06:18:46 | 2:19:29 | 2:12:24 | 0:07:05 | smithi | main | rhel | 8.6 | upgrade:pacific-x/stress-split/{0-distro/rhel_8.6_container_tools_3.0 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} | 2 |
pass | 7668044 | 2024-04-22 00:08:25 | 2024-04-22 03:37:22 | 2024-04-22 03:59:21 | 0:21:59 | 0:11:41 | 0:10:18 | smithi | main | ubuntu | 20.04 | upgrade-clients:client-upgrade-octopus-quincy/octopus-client-x/rbd/{0-cluster/{openstack start} 1-install/octopus-client-x 2-workload/rbd_notification_tests supported/ubuntu_20.04} | 2 |
pass | 7667965 | 2024-04-21 22:05:17 | 2024-04-22 08:59:10 | 2024-04-22 09:26:30 | 0:27:20 | 0:16:46 | 0:10:34 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/classic task/test_rgw_multisite} | 3 |
pass | 7667893 | 2024-04-21 22:04:04 | 2024-04-22 08:19:46 | 2024-04-22 08:59:18 | 0:39:32 | 0:29:57 | 0:09:35 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |
pass | 7667855 | 2024-04-21 22:03:25 | 2024-04-22 07:54:25 | 2024-04-22 08:19:57 | 0:25:32 | 0:13:21 | 0:12:11 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 |
pass | 7667725 | 2024-04-21 21:28:19 | 2024-04-22 00:54:37 | 2024-04-22 01:21:04 | 0:26:27 | 0:16:02 | 0:10:25 | smithi | main | ubuntu | 22.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/quota} | 2 |
fail | 7667632 | 2024-04-21 21:26:42 | 2024-04-21 23:30:55 | 2024-04-22 00:41:47 | 1:10:52 | 0:58:55 | 0:11:57 | smithi | main | ubuntu | 22.04 | fs/multiclient/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-2-client conf/{client mds mgr mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cephfs_misc_tests} | 4 |

Failure Reason: Command failed on smithi070 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'