Name:         smithi105.front.sepia.ceph.com
Machine Type: smithi
Up:           True
Locked:       True
Locked Since: 2021-05-11 04:18:47.296067
Locked By:    scheduled_teuthology@teuthology
OS Type:      centos
OS Version:   8
Arch:         x86_64
Description:  /home/teuthworker/archive/teuthology-2021-05-11_04:17:02-fs-pacific-distro-basic-smithi/6108498
Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
running 6108498 2021-05-11 04:18:21 2021-05-11 04:18:47 2021-05-11 04:37:05 0:18:40 smithi master centos 8.2 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/direct_io}} 3
pass 6108118 2021-05-11 00:26:26 2021-05-11 01:15:20 2021-05-11 01:34:53 0:19:33 0:09:34 0:09:59 smithi master rados/verify/{ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} msgr-failures/few msgr/random objectstore/bluestore-bitmap rados tasks/mon_recovery validater/lockdep} 2
pass 6108046 2021-05-11 00:25:21 2021-05-11 00:50:23 2021-05-11 01:15:18 0:24:55 0:14:14 0:10:41 smithi master ubuntu 16.04 rados/monthrash/{ceph clusters/9-mons msgr-failures/mon-delay msgr/simple objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_16.04} thrashers/sync-many workloads/pool-create-delete} 2
pass 6108006 2021-05-11 00:24:45 2021-05-11 00:24:45 2021-05-11 00:50:19 0:25:34 0:11:26 0:14:08 smithi master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 6107165 2021-05-09 06:09:42 2021-05-09 22:20:05 2021-05-10 00:23:52 2:03:47 1:53:00 0:10:47 smithi master rhel 8.3 rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-comp-zstd 4-supported-random-distro$/{rhel_8} 5-pool/ec-data-pool 6-prepare/raw-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} 3
pass 6107146 2021-05-09 06:09:27 2021-05-09 21:51:37 2021-05-09 22:24:41 0:33:04 0:24:29 0:08:35 smithi master rhel 8.3 rbd/qemu/{cache/none clusters/{fixed-3 openstack} features/journaling msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/none supported-random-distro$/{rhel_8} workloads/qemu_fsstress} 3
pass 6107077 2021-05-09 06:08:31 2021-05-09 20:51:12 2021-05-09 21:52:41 1:01:29 0:51:36 0:09:53 smithi master centos 8.2 rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-bitmap policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{centos_8} workloads/rbd-mirror-snapshot-stress-workunit-fast-diff} 2
fail 6106561 2021-05-09 04:19:28 2021-05-09 17:15:32 2021-05-09 20:51:40 3:36:08 3:28:12 0:07:56 smithi master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

"2021-05-09T18:05:01.235488+0000 mds.e (mds.0) 1 : cluster [WRN] client.4917 isn't responding to mclientcaps(revoke), ino 0x10000002be8 pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 300.004151 seconds ago" in cluster log

fail 6106531 2021-05-09 04:19:03 2021-05-09 16:46:39 2021-05-09 17:16:23 0:29:44 0:18:38 0:11:06 smithi master ubuntu 18.04 fs/verify/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} distro~HEAD/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/fsstress validater/valgrind} 2
Failure Reason:

Command failed on smithi105 with status 1: "sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term env 'OPENSSL_ia32cap=~0x1000000000000000' valgrind --trace-children=no --child-silent-after-fork=yes '--soname-synonyms=somalloc=*tcmalloc*' --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/client.0.log --time-stamp=yes --vgdb=yes --exit-on-first-error=yes --error-exitcode=42 --tool=memcheck --leak-check=full --show-reachable=yes ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' --id 0 /home/ubuntu/cephtest/mnt.0"

pass 6106506 2021-05-09 04:18:43 2021-05-09 16:23:09 2021-05-09 16:46:58 0:23:49 0:13:07 0:10:42 smithi master ubuntu 18.04 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
pass 6106384 2021-05-09 03:36:39 2021-05-09 14:47:25 2021-05-09 16:23:25 1:36:00 1:23:59 0:12:01 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} 3
pass 6106336 2021-05-09 03:36:02 2021-05-09 14:17:03 2021-05-09 14:47:52 0:30:49 0:20:37 0:10:12 smithi master rhel 8.3 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 6106294 2021-05-09 03:35:30 2021-05-09 13:49:08 2021-05-09 14:20:10 0:31:02 0:21:14 0:09:48 smithi master centos 8.2 rados/cephadm/smoke-roleless/{0-distro/centos_8.2_kubic_stable 1-start 2-services/mirror 3-final} 2
Failure Reason:

Command failed on smithi105 with status 5: 'sudo systemctl stop ceph-6598ff1c-b0cf-11eb-8237-001a4aab830c@mon.smithi105'

pass 6106258 2021-05-09 03:35:03 2021-05-09 13:21:56 2021-05-09 13:49:00 0:27:04 0:14:40 0:12:24 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 1-start 2-services/client-keyring 3-final} 2
pass 6106230 2021-05-09 03:34:40 2021-05-09 13:03:47 2021-05-09 13:23:27 0:19:40 0:09:11 0:10:29 smithi master ubuntu 18.04 rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} 2
fail 6106184 2021-05-09 03:34:05 2021-05-09 12:33:20 2021-05-09 13:04:08 0:30:48 0:20:14 0:10:34 smithi master centos 8.2 rados/cephadm/dashboard/{0-distro/centos_8.2_kubic_stable task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi055 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=df487331eefb15b716b05118803c8aa8f9ad6ffb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6106028 2021-05-08 14:23:20 2021-05-09 07:26:37 2021-05-09 12:33:13 5:06:36 4:45:25 0:21:11 smithi master ubuntu 18.04 upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-mon-osd-mds 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} mon_election/classic objectstore/filestore-xfs ubuntu_18.04} 4
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi173 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=df487331eefb15b716b05118803c8aa8f9ad6ffb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rados/test.sh'

fail 6105856 2021-05-08 08:15:59 2021-05-08 08:45:48 2021-05-08 09:13:12 0:27:24 0:13:58 0:13:26 smithi master centos 8.2 rbd/encryption/{cache/writeback clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-hybrid pool/small-cache-pool supported-random-distro$/{centos_8} workloads/qemu_xfstests_luks1} 3
Failure Reason:

Command failed on smithi032 with status 13: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'

pass 6105727 2021-05-08 06:09:02 2021-05-09 03:36:30 2021-05-09 07:35:32 3:59:02 3:47:36 0:11:26 smithi master ubuntu 18.04 rbd/encryption/{cache/writethrough clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-comp-zlib pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/qemu_xfstests_luks2} 3
pass 6105687 2021-05-08 06:08:31 2021-05-09 03:02:50 2021-05-09 03:37:59 0:35:09 0:28:38 0:06:31 smithi master rhel 8.3 rbd/thrash/{base/install clusters/{fixed-2 openstack} msgr-failures/few objectstore/filestore-xfs supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/rbd_fsx_deep_copy} 2