Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi066.front.sepia.ceph.com smithi False False rhel 8.4 x86_64 Can't ssh
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 6265011 2021-07-12 10:36:12 2021-07-12 11:29:17 2021-07-12 11:56:57 0:27:40 0:20:40 0:07:00 smithi master rhel 8.4 fs:workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
Failure Reason:

Command failed (workunit test fs/misc/kernel-failures-bug.sh) on smithi066 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3b3abceaa44e7adc538471c5e5c550275b879f55 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/kernel-failures-bug.sh'

fail 6264959 2021-07-12 10:35:55 2021-07-12 11:00:37 2021-07-12 11:29:53 0:29:16 0:21:59 0:07:17 smithi master rhel 8.4 fs:workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
Failure Reason:

Command failed (workunit test fs/misc/kernel-failures-bug.sh) on smithi066 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3b3abceaa44e7adc538471c5e5c550275b879f55 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/kernel-failures-bug.sh'

fail 6264904 2021-07-12 10:30:27 2021-07-12 10:31:13 2021-07-12 11:00:55 0:29:42 0:20:03 0:09:39 smithi master rhel 8.4 fs:workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
Failure Reason:

Command failed (workunit test fs/misc/kernel-failures-bug.sh) on smithi066 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3b3abceaa44e7adc538471c5e5c550275b879f55 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/kernel-failures-bug.sh'

pass 6264776 2021-07-12 06:08:53 2021-07-12 06:08:54 2021-07-12 09:36:24 3:27:30 3:11:44 0:15:46 smithi master ubuntu 20.04 rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-comp-zlib 4-supported-random-distro$/{ubuntu_latest} 5-pool/ec-data-pool 6-prepare/raw-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} 3
pass 6264290 2021-07-11 17:20:20 2021-07-11 17:20:51 2021-07-11 17:50:27 0:29:36 0:22:06 0:07:30 smithi master rhel 8.3 orch/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/nfs2 3-final} 2
pass 6262899 2021-07-11 01:47:29 2021-07-11 02:37:41 2021-07-11 03:04:46 0:27:05 0:17:33 0:09:32 smithi master centos 8.2 rados/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.2_kubic_stable} 2-node-mgr orchestrator_cli} 2
pass 6262812 2021-07-11 01:46:15 2021-07-11 02:02:39 2021-07-11 02:37:30 0:34:51 0:22:51 0:12:00 smithi master ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-overwrites} 2
fail 6262731 2021-07-11 01:36:54 2021-07-11 01:37:36 2021-07-11 02:03:55 0:26:19 0:15:23 0:10:56 smithi master ubuntu 20.04 rados:mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} tasks/progress} 2
Failure Reason:

Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress)

pass 6262640 2021-07-10 15:16:50 2021-07-10 18:31:13 2021-07-10 19:05:20 0:34:07 0:23:02 0:11:05 smithi master ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
dead 6262609 2021-07-10 15:16:20 2021-07-10 18:14:39 2021-07-10 18:30:13 0:15:34 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 6262544 2021-07-10 15:15:18 2021-07-10 17:46:31 2021-07-10 18:14:08 0:27:37 0:20:45 0:06:52 smithi master rhel 8.4 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 6262462 2021-07-10 15:14:00 2021-07-10 17:11:28 2021-07-10 17:47:02 0:35:34 0:22:07 0:13:27 smithi master ubuntu 20.04 rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} 3
pass 6262432 2021-07-10 15:13:32 2021-07-10 16:54:58 2021-07-10 17:12:23 0:17:25 0:07:36 0:09:49 smithi master ubuntu 20.04 rados/singleton/{all/peer mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} 1
pass 6262383 2021-07-10 15:12:44 2021-07-10 16:17:48 2021-07-10 16:54:53 0:37:05 0:22:24 0:14:41 smithi master rhel 8.4 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-snappy supported-random-distro$/{rhel_8} tasks/failover} 2
pass 6262350 2021-07-10 15:12:10 2021-07-10 15:40:55 2021-07-10 16:25:46 0:44:51 0:39:31 0:05:20 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/cache-pool-snaps-readproxy} 2
pass 6262336 2021-07-10 14:23:27 2021-07-10 19:04:07 2021-07-10 21:36:06 2:31:59 2:12:44 0:19:15 smithi master ubuntu 18.04 upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-mon-osd-mds 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} mon_election/classic objectstore/bluestore-bitmap ubuntu_18.04} 4
pass 6262297 2021-07-10 14:22:47 2021-07-10 14:22:47 2021-07-10 15:03:13 0:40:26 0:23:04 0:17:22 smithi master centos 8.2 upgrade:octopus-x/parallel-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/short_pg_log 2-workload/{rgw_ragweed_prepare} 3-upgrade-sequence/upgrade-all 4-pacific 5-final-workload/{rgw rgw_ragweed_check} centos_latest} 4
fail 6262083 2021-07-10 06:52:15 2021-07-10 09:19:01 2021-07-10 10:33:24 1:14:23 1:03:17 0:11:06 smithi master centos 8.3 rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-lz4 policy/none rbd-mirror/four-per-cluster supported-random-distro$/{centos_8} workloads/rbd-mirror-snapshot-stress-workunit-exclusive-lock} 2
Failure Reason:

Command failed (workunit test rbd/rbd_mirror_stress.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92adad3411b7280fb159eb9d7304c654695beb8b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.cluster1.client.mirror CEPH_ROOT=/home/ubuntu/cephtest/clone.cluster1.client.mirror CEPH_MNT=/home/ubuntu/cephtest/mnt.cluster1.mirror CEPH_ARGS=\'\' MIRROR_IMAGE_MODE=snapshot MIRROR_POOL_MODE=image RBD_IMAGE_FEATURES=layering,exclusive-lock RBD_MIRROR_INSTANCES=4 RBD_MIRROR_USE_EXISTING_CLUSTER=1 RBD_MIRROR_USE_RBD_MIRROR=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.cluster1.client.mirror/qa/workunits/rbd/rbd_mirror_stress.sh'

pass 6262015 2021-07-10 06:08:54 2021-07-10 08:36:36 2021-07-10 09:20:09 0:43:33 0:31:22 0:12:11 smithi master ubuntu 20.04 fs:workload/{cephadm clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{yes}} 3
pass 6261994 2021-07-10 06:08:35 2021-07-10 07:57:26 2021-07-10 08:38:35 0:41:09 0:29:29 0:11:40 smithi master centos 8.3 fs:workload/{cephadm clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{no}} 3