Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi151.front.sepia.ceph.com smithi True True 2021-02-26 15:24:17.084972 scheduled_sage@teuthology centos 8 x86_64 /home/teuthworker/archive/sage-2021-02-26_15:10:50-rados:cephadm-wip-sage3-testing-2021-02-26-0847-distro-basic-smithi/5916101
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 5916101 2021-02-26 15:11:30 2021-02-26 15:24:06 2021-02-26 15:35:28 0:11:48 smithi master centos 8.2 rados:cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2 mon_election/classic} 2
fail 5916024 2021-02-26 05:58:50 2021-02-26 12:35:11 2021-02-26 13:14:49 0:39:38 0:25:06 0:14:32 smithi master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} 3
Failure Reason:

"2021-02-26T12:59:41.226921+0000 mds.f (mds.0) 24 : cluster [WRN] Scrub error on inode 0x100000010da (/client.0/tmp/blogbench-1.0/src/blogtest_in/blog-22) see mds.f log and `damage ls` output for details" in cluster log

fail 5915949 2021-02-26 05:57:52 2021-02-26 11:46:35 2021-02-26 12:37:02 0:50:27 0:40:03 0:10:24 smithi master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/ffsb} wsync/{yes}} 3
Failure Reason:

"2021-02-26T12:06:54.671918+0000 mds.c (mds.0) 24 : cluster [WRN] Scrub error on inode 0x10000000348 (/client.0/tmp/tmp) see mds.c log and `damage ls` output for details" in cluster log

pass 5915910 2021-02-26 05:57:21 2021-02-26 11:15:04 2021-02-26 11:46:32 0:31:28 0:17:53 0:13:35 smithi master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/fs/norstats} wsync/{yes}} 3
pass 5915884 2021-02-26 05:57:00 2021-02-26 10:53:43 2021-02-26 11:15:50 0:22:07 0:13:43 0:08:24 smithi master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/forward-scrub} 2
pass 5915843 2021-02-26 05:56:27 2021-02-26 10:26:23 2021-02-26 10:54:14 0:27:51 0:17:44 0:10:07 smithi master ubuntu 18.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
pass 5915023 2021-02-26 00:48:55 2021-02-26 08:29:59 2021-02-26 10:23:19 1:53:20 1:43:00 0:10:20 smithi master centos 8.2 rados/standalone/{mon_election/classic supported-random-distro$/{centos_8} workloads/scrub} 1
pass 5914991 2021-02-26 00:48:24 2021-02-26 08:04:06 2021-02-26 08:29:52 0:25:46 0:13:43 0:12:03 smithi master centos 8.2 rados/cephadm/smoke/{distro/centos_latest fixed-2 mon_election/classic start} 2
pass 5914943 2021-02-26 00:47:34 2021-02-26 07:27:18 2021-02-26 08:04:16 0:36:58 0:23:40 0:13:18 smithi master ubuntu 18.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 5914920 2021-02-26 00:47:11 2021-02-26 07:09:09 2021-02-26 07:25:18 0:16:09 0:06:05 0:10:04 smithi master ubuntu 18.04 rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 5914277 2021-02-25 23:24:03 2021-02-26 14:50:41 2021-02-26 15:24:08 0:33:27 0:23:03 0:10:24 smithi master centos 8.2 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read} 2
pass 5914185 2021-02-25 23:20:52 2021-02-26 14:09:10 2021-02-26 14:50:32 0:41:22 0:34:39 0:06:43 smithi master rhel 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_big} 2
pass 5914147 2021-02-25 23:19:36 2021-02-26 13:50:55 2021-02-26 14:08:11 0:17:16 0:10:38 0:06:38 smithi master rhel 8.3 rados/objectstore/{backends/filejournal supported-random-distro$/{rhel_8}} 1
pass 5914111 2021-02-25 23:18:37 2021-02-26 13:33:40 2021-02-26 13:49:50 0:16:10 0:06:20 0:09:50 smithi master ubuntu 18.04 rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 5914079 2021-02-25 23:17:42 2021-02-26 13:14:57 2021-02-26 13:34:00 0:19:03 0:12:50 0:06:13 smithi master rhel 8.3 rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi151 with status 1: 'CEPH_REF=master CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram -v -- /home/ubuntu/cephtest/archive/cram.client.0/*.t'

pass 5913493 2021-02-25 18:31:24 2021-02-25 18:36:24 2021-02-25 19:02:22 0:25:58 0:15:01 0:10:57 smithi master ubuntu 18.04 rados:cephadm/smoke/{distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
pass 5913413 2021-02-25 15:51:21 2021-02-25 15:53:51 2021-02-25 16:33:10 0:39:19 0:32:08 0:07:11 smithi master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/admin} 2
fail 5913386 2021-02-25 14:34:18 2021-02-25 17:20:07 2021-02-25 17:44:36 0:24:29 0:12:07 0:12:22 smithi master ubuntu 18.04 rados:monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/rados_mon_workunits} 2
Failure Reason:

Command failed on smithi200 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/liewegas/ceph /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 6923fdcb4e2dbcfb637e83e81baa03b9adb335e9'

pass 5913263 2021-02-25 11:29:37 2021-02-25 11:43:22 2021-02-25 12:17:11 0:33:49 0:22:34 0:11:15 smithi master centos 8.2 rados:cephadm/with-work/{distro/centos_latest fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
fail 5912248 2021-02-24 21:28:37 2021-02-24 22:10:40 2021-02-25 07:05:41 8:55:01 3:16:27 5:38:34 smithi master centos 8.2 rados/standalone/{mon_election/connectivity supported-random-distro$/{centos_8} workloads/mon} 1
Failure Reason:

Command failed (workunit test mon/mon-osdmap-prune.sh) on smithi151 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2cac4ebf2b09365bc660fdb838e2c263ed9838f4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-osdmap-prune.sh'