Name: smithi203.front.sepia.ceph.com
Machine Type: smithi
Up: True
Locked: True
Locked Since: 2021-01-16 16:52:45.577202
Locked By: scheduled_jafaj@teuthology
OS Type: ubuntu
OS Version: 18.04
Arch: x86_64
Description: /home/teuthworker/archive/jafaj-2021-01-12_18:34:36-rados-wip-jan-testing-2021-01-12-1728-distro-basic-smithi/5780392
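The lock record above can also be queried from a Sepia shell; a minimal sketch, assuming the teuthology CLI is installed and configured for this lab (flags as in the teuthology docs):

# Show the current lock state for this node (JSON record, then a one-line summary).
teuthology-lock --list smithi203.front.sepia.ceph.com
teuthology-lock --brief smithi203.front.sepia.ceph.com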
Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 5791735 2021-01-16 07:07:32 2021-01-16 12:39:33 2021-01-16 13:11:33 0:32:00 0:20:42 0:11:18 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} 3
pass 5791693 2021-01-16 07:06:59 2021-01-16 12:19:06 2021-01-16 12:39:05 0:19:59 0:10:01 0:09:58 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} tasks/failover} 2
fail 5791606 2021-01-16 07:05:52 2021-01-16 11:34:55 2021-01-16 12:20:55 0:46:00 0:34:31 0:11:29 smithi master centos 8.2 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi058 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9280858e9fc1bfb270256febe37c17d78ff0e138 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
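The logged command shows exactly how teuthology invoked the workunit; a minimal sketch of re-running it by hand on the test node, assuming the same ceph.git clone and paths that the log names (this mirrors the logged invocation, not a separately documented procedure):

# Re-run the failed workunit from the checkout teuthology left on the node.
cd /home/ubuntu/cephtest/clone.client.0
CEPH_ARGS="--cluster ceph" CEPH_ID="0" TESTDIR="/home/ubuntu/cephtest" \
  timeout 3h qa/workunits/cephadm/test_dashboard_e2e.sh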

pass 5791555 2021-01-16 07:05:11 2021-01-16 11:06:27 2021-01-16 11:34:27 0:28:00 0:18:53 0:09:07 smithi master centos 8.2 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 5791471 2021-01-16 07:04:03 2021-01-16 10:22:28 2021-01-16 11:06:28 0:44:00 0:32:25 0:11:35 smithi master ubuntu 18.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} tasks/rados_api_tests} 2
pass 5791389 2021-01-16 07:01:00 2021-01-16 09:02:29 2021-01-16 10:24:30 1:22:01 0:45:01 0:37:00 smithi master smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap tasks/rgw_s3tests} 3
pass 5791381 2021-01-16 07:00:54 2021-01-16 08:54:01 2021-01-16 09:30:01 0:36:00 0:20:20 0:15:40 smithi master smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap tasks/rados_python} 3
pass 5790627 2021-01-16 04:00:23 2021-01-16 07:39:15 2021-01-16 08:59:15 1:20:00 1:08:12 0:11:48 smithi master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/iogen}} 3
fail 5790526 2021-01-16 03:59:02 2021-01-16 06:31:27 2021-01-16 07:41:28 1:10:01 1:00:04 0:09:57 smithi master ubuntu 18.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/fs/misc}} 3
Failure Reason:

Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi203 with status 23: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=194b0fb0184623b3f49b5144aa579a8cedfe78f5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/multiple_rsync.sh'

fail 5790166 2021-01-16 01:11:54 2021-01-16 05:20:55 2021-01-16 06:32:56 1:12:01 1:00:33 0:11:28 smithi master centos 8.2 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
Failure Reason:

wait_for_recovery: failed before timeout expired
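wait_for_recovery is the teuthology check that polls the cluster until recovery finishes; the timeout here means some PGs never recovered within the allowed window. A minimal sketch of the standard CLI calls one would use to inspect that state on a live cluster (not taken from this job's logs):

# Overall status, then any PGs stuck outside active+clean.
ceph -s
ceph pg dump_stuck unclean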

pass 5790123 2021-01-16 01:11:20 2021-01-16 04:58:30 2021-01-16 05:22:30 0:24:00 0:13:07 0:10:53 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} 2
pass 5790047 2021-01-16 01:10:17 2021-01-16 04:18:29 2021-01-16 04:58:29 0:40:00 0:28:53 0:11:07 smithi master centos 8.2 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 5789995 2021-01-16 01:09:36 2021-01-16 03:39:40 2021-01-16 04:19:40 0:40:00 0:15:57 0:24:03 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/rbd_cls} 3
pass 5789941 2021-01-16 01:08:54 2021-01-16 02:47:36 2021-01-16 03:07:36 0:20:00 0:12:16 0:07:44 smithi master rhel 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 5789919 2021-01-16 01:08:36 2021-01-16 02:47:38 2021-01-16 03:53:39 1:06:01 0:36:37 0:29:24 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
fail 5789854 2021-01-15 22:37:28 2021-01-16 00:37:19 2021-01-16 01:55:20 1:18:01 0:54:51 0:23:10 smithi master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/ffsb}} 3
Failure Reason:

"2021-01-16T01:17:57.754144+0000 mon.a (mon.0) 264 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log

pass 5789812 2021-01-15 22:36:54 2021-01-16 00:01:47 2021-01-16 00:49:47 0:48:00 0:21:01 0:26:59 smithi master ubuntu 18.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/fsync-tester}} 3
pass 5789811 2021-01-15 22:36:53 2021-01-15 23:59:00 2021-01-16 00:17:00 0:18:00 0:11:19 0:06:41 smithi master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
pass 5789780 2021-01-15 22:36:29 2021-01-15 23:39:20 2021-01-16 00:01:20 0:22:00 0:11:02 0:10:58 smithi master centos 8.2 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
pass 5789741 2021-01-15 22:35:58 2021-01-15 23:08:21 2021-01-15 23:40:21 0:32:00 0:08:16 0:23:44 smithi master ubuntu 18.04 fs/multiclient/{begin clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/ior-shared-file} 5
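For a quick tally of a listing like this, the plain-text dump can be processed directly; a minimal sketch, assuming the page has been saved to a file (the name smithi203_jobs.txt is hypothetical):

# Count pass and fail rows in the saved listing.
awk '/^(pass|fail) /{n[$1]++} END{for (s in n) print s, n[s]}' smithi203_jobs.txt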