Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi111.front.sepia.ceph.com smithi True True 2022-07-04 02:02:12.193578 scheduled_teuthology@teuthology x86_64 /home/teuthworker/archive/teuthology-2022-07-03_03:31:04-rados-pacific-distro-default-smithi/6911073
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 6911987 2022-07-03 15:12:13 2022-07-03 20:28:43 2022-07-04 02:02:04 5:33:21 5:18:28 0:14:53 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
Failure Reason:

SSH connection to smithi111 was lost: 'sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0'

waiting 6911073 2022-07-03 03:33:31 2022-07-04 02:01:31 2022-07-04 02:02:13 0:02:05 0:02:05 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
dead 6910814 2022-07-02 14:19:26 2022-07-03 08:16:18 2022-07-03 20:34:04 12:17:46 smithi main fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_dbench_iozone} 2
Failure Reason:

hit max job timeout

pass 6910800 2022-07-02 14:19:15 2022-07-03 07:40:02 2022-07-03 08:24:48 0:44:46 0:30:08 0:14:38 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/5 scrub/yes standby-replay subvolume/{with-quota} tasks/{0-check-counter workunit/suites/fsync-tester}} 3
dead 6910512 2022-07-02 14:15:27 2022-07-02 19:36:53 2022-07-03 07:45:19 12:08:26 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/5 scrub/no standby-replay subvolume/{with-no-extra-options} tasks/{0-check-counter workunit/kernel_untar_build}} 3
Failure Reason:

hit max job timeout

pass 6910164 2022-07-01 21:01:11 2022-07-01 22:53:45 2022-07-01 23:25:43 0:31:58 0:23:12 0:08:46 smithi main centos 8.stream rados:thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-balanced} 2
pass 6910134 2022-07-01 21:01:10 2022-07-01 22:22:33 2022-07-01 22:56:14 0:33:41 0:24:13 0:09:28 smithi main centos 8.stream rados:thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-balanced} 2
pass 6910113 2022-07-01 21:01:09 2022-07-01 21:51:38 2022-07-01 22:24:27 0:32:49 0:22:55 0:09:54 smithi main centos 8.stream rados:thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-balanced} 2
pass 6910096 2022-07-01 21:01:08 2022-07-01 21:22:37 2022-07-01 21:54:42 0:32:05 0:22:44 0:09:21 smithi main centos 8.stream rados:thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-balanced} 2
pass 6909930 2022-07-01 20:39:57 2022-07-02 15:58:18 2022-07-02 17:04:08 1:05:50 0:57:38 0:08:12 smithi main centos 8.stream rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{centos_8} 4-cache-path 5-cache-mode/ssd 6-cache-size/1G 7-workloads/qemu_xfstests} 2
pass 6909795 2022-07-01 20:36:35 2022-07-02 13:16:18 2022-07-02 15:58:29 2:42:11 2:29:12 0:12:59 smithi main ubuntu 20.04 rbd/maintenance/{base/install clusters/{fixed-3 openstack} objectstore/bluestore-low-osd-mem-target qemu/xfstests supported-random-distro$/{ubuntu_latest} workloads/rebuild_object_map} 3
pass 6909755 2022-07-01 20:35:51 2022-07-02 12:33:15 2022-07-02 13:17:05 0:43:50 0:27:51 0:15:59 smithi main rhel 8.4 rbd/immutable-object-cache/{clusters/{fix-2 openstack} pool/ceph_and_immutable_object_cache supported-random-distro$/{rhel_8} workloads/c_api_tests_with_defaults} 2
pass 6909747 2022-07-01 20:35:43 2022-07-02 12:18:32 2022-07-02 12:41:35 0:23:03 0:10:30 0:12:33 smithi main centos 8.stream rbd/singleton/{all/read-flags-writethrough objectstore/filestore-xfs openstack supported-random-distro$/{centos_8}} 1
pass 6909729 2022-07-01 20:35:23 2022-07-02 11:54:47 2022-07-02 12:24:09 0:29:22 0:12:49 0:16:33 smithi main centos 8.stream rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/filestore-xfs supported-random-distro$/{centos_8} thrashers/cache thrashosds-health workloads/rbd_nbd} 3
pass 6909711 2022-07-01 20:35:03 2022-07-02 11:37:31 2022-07-02 11:57:10 0:19:39 0:11:28 0:08:11 smithi main centos 8.stream rbd/singleton/{all/read-flags-no-cache objectstore/bluestore-low-osd-mem-target openstack supported-random-distro$/{centos_8}} 1
pass 6909670 2022-07-01 20:16:23 2022-07-02 17:29:26 2022-07-02 17:54:08 0:24:42 0:13:54 0:10:48 smithi main ubuntu 20.04 fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
pass 6909640 2022-07-01 20:15:32 2022-07-02 17:03:29 2022-07-02 17:30:04 0:26:35 0:15:21 0:11:14 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mds-flush} 2
pass 6909187 2022-07-01 15:53:21 2022-07-02 18:27:58 2022-07-02 19:36:58 1:09:00 1:01:00 0:08:00 smithi main rhel 8.6 orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
fail 6909158 2022-07-01 15:52:47 2022-07-02 18:11:23 2022-07-02 18:29:23 0:18:00 0:07:26 0:10:34 smithi main orch:cephadm/workunits/{agent/on mon_election/classic task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi111 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=19c5b8dbbdd59578fd7085156ce8d7836681eed0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

fail 6909139 2022-07-01 15:52:28 2022-07-02 17:53:53 2022-07-02 18:11:41 0:17:48 0:11:07 0:06:41 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi103 with status 5: 'sudo systemctl stop ceph-3d98b0cc-fa32-11ec-842c-001a4aab830c@mon.smithi103'