Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi201.front.sepia.ceph.com smithi False True 2021-07-11 17:21:00.157215 scheduled_sage@teuthology centos 8 x86_64 Can't ssh
Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6264310 2021-07-11 17:20:38 2021-07-11 17:21:00 2021-07-11 18:03:55 0:42:55 0:28:00 0:14:55 smithi master centos 8.2 orch/cephadm/with-work/{0-distro/centos_8.2_kubic_stable fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
Failure Reason: timeout expired in wait_until_healthy
pass 6263529 2021-07-11 03:37:26 2021-07-11 07:07:41 2021-07-11 07:49:58 0:42:17 0:29:05 0:13:12 smithi master ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 6263301 2021-07-11 03:34:01 2021-07-11 05:28:47 2021-07-11 07:08:30 1:39:43 1:29:02 0:10:41 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/radosbench} 3
pass 6263222 2021-07-11 01:52:03 2021-07-11 04:55:22 2021-07-11 05:28:49 0:33:27 0:22:26 0:11:01 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache-agent-big} 2
pass 6263169 2021-07-11 01:51:19 2021-07-11 04:31:47 2021-07-11 04:55:36 0:23:49 0:17:45 0:06:04 smithi master rhel 8.4 rados/singleton/{all/divergent_priors mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
pass 6262827 2021-07-11 01:46:27 2021-07-11 02:14:42 2021-07-11 04:31:47 2:17:05 2:08:50 0:08:15 smithi master centos 8.3 rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{centos_8}} 1
pass 6262808 2021-07-11 01:46:12 2021-07-11 01:46:41 2021-07-11 02:14:42 0:28:01 0:15:20 0:12:41 smithi master centos 8.3 rados/singleton/{all/max-pg-per-osd.from-primary mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
pass 6262686 2021-07-10 15:17:34 2021-07-10 18:47:50 2021-07-10 19:15:19 0:27:29 0:20:56 0:06:33 smithi master rhel 8.4 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 6262622 2021-07-10 15:16:33 2021-07-10 18:22:36 2021-07-10 18:47:57 0:25:21 0:12:24 0:12:57 smithi master centos 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 6262572 2021-07-10 15:15:44 2021-07-10 18:00:02 2021-07-10 18:23:37 0:23:35 0:12:55 0:10:40 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/cache-agent-small} 2
pass 6262498 2021-07-10 15:14:35 2021-07-10 17:29:13 2021-07-10 18:00:09 0:30:56 0:20:25 0:10:31 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/small-objects-localized} 2
dead 6262464 2021-07-10 15:14:02 2021-07-10 17:13:39 2021-07-10 17:28:58 0:15:19 smithi master rhel 8.3 rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/packaged mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead 6262442 2021-07-10 15:13:41 2021-07-10 16:58:21 2021-07-10 17:13:45 0:15:24 smithi master centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead 6262410 2021-07-10 15:13:11 2021-07-10 16:43:08 2021-07-10 16:58:19 0:15:11 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/libcephsqlite} 2
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
pass 6262295 2021-07-10 14:22:46 2021-07-10 14:22:46 2021-07-10 16:43:12 2:20:26 2:04:46 0:15:40 smithi master ubuntu 18.04 upgrade:octopus-x/stress-split/{0-distro/ubuntu_18.04 0-roles 1-start 2-first-half-tasks/rbd-cls 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} 2
pass 6262142 2021-07-10 06:53:12 2021-07-10 09:51:13 2021-07-10 10:35:06 0:43:53 0:32:00 0:11:53 smithi master ubuntu 20.04 rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-snappy policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{ubuntu_latest} workloads/rbd-mirror-journal-workunit} 2
fail 6262091 2021-07-10 06:52:23 2021-07-10 09:22:44 2021-07-10 09:51:49 0:29:05 0:19:17 0:09:48 smithi master centos 8.3 rbd/mirror/{base/install clients/{mirror-extra mirror} cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-snappy supported-random-distro$/{centos_8} workloads/rbd-mirror-workunit-config-key} 2
Failure Reason: Command failed (workunit test rbd/rbd_mirror_journal.sh) on smithi201 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92adad3411b7280fb159eb9d7304c654695beb8b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.cluster1.client.mirror CEPH_ROOT=/home/ubuntu/cephtest/clone.cluster1.client.mirror CEPH_MNT=/home/ubuntu/cephtest/mnt.cluster1.mirror CEPH_ARGS=\'\' RBD_MIRROR_CONFIG_KEY=1 RBD_MIRROR_INSTANCES=4 RBD_MIRROR_USE_EXISTING_CLUSTER=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.cluster1.client.mirror/qa/workunits/rbd/rbd_mirror_journal.sh'
fail 6261992 2021-07-10 06:08:33 2021-07-10 07:52:35 2021-07-10 09:22:48 1:30:13 1:14:12 0:16:01 smithi master ubuntu 20.04 fs:workload/{cephadm clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{no}} 3
Failure Reason: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}
pass 6261942 2021-07-10 06:07:48 2021-07-10 07:02:54 2021-07-10 07:56:17 0:53:23 0:40:47 0:12:36 smithi master ubuntu 20.04 fs:workload/{cephadm clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} 3
pass 6261858 2021-07-10 06:06:23 2021-07-10 06:07:07 2021-07-10 07:04:01 0:56:54 0:47:32 0:09:22 smithi master rhel 8.4 fs:workload/{cephadm clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{no}} 3