Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi168.front.sepia.ceph.com smithi True True 2020-08-04 16:39:40.388094 scheduled_teuthology@teuthology ubuntu 16.04 x86_64 /home/teuthworker/archive/teuthology-2020-08-02_02:30:04-rados-nautilus-distro-basic-smithi/5278000
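The lock entry above shows smithi168 held by the scheduled nightly run. A record like this can also be queried from the command line; the following is a minimal sketch, assuming a teuthology checkout whose ~/.teuthology.yaml points at the Sepia lock server (flag names may vary across teuthology versions):

# Show the lock record for a single node
teuthology-lock --list smithi168.front.sepia.ceph.com
# Summarize lock holders across the smithi machine type
teuthology-lock --summary --machine-type smithi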
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 5286387 2020-08-04 10:27:58 2020-08-04 11:44:00 2020-08-04 12:03:59 0:19:59 0:13:59 0:06:00 smithi master rhel 8.1 rados:cephadm/smoke/{distro/rhel_latest fixed-2 start} 2
pass 5286355 2020-08-04 10:20:30 2020-08-04 11:24:42 2020-08-04 12:36:43 1:12:01 0:26:04 0:45:57 smithi master rhel 8.0 rados:cephadm/with-work/{distro/rhel_8.0 fixed-2 mode/packaged msgr/async start tasks/rados_python} 2
pass 5286350 2020-08-04 10:20:25 2020-08-04 11:20:37 2020-08-04 11:44:36 0:23:59 0:13:49 0:10:10 smithi master ubuntu 18.04 rados:cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} 2
fail 5286310 2020-08-04 09:11:02 2020-08-04 09:15:46 2020-08-04 09:35:46 0:20:00 0:08:29 0:11:31 smithi master centos 8.1 rgw:sts/{centos_latest clusters/fixed-2 frontend/civetweb objectstore/bluestore-bitmap overrides rgw_pool_type/ec tasks/{0-install webidentity_sample}} 2
Failure Reason:

Command failed on smithi168 with status 127: 'mvn install -DskipTestsuite'
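Exit status 127 is the shell's "command not found" code, so this job most likely failed because Maven was missing on the node rather than because the STS test suite itself broke. A quick check on the node would confirm that (a hypothetical diagnostic, not part of the job log):

# 127 from the shell means the command was not found on PATH
command -v mvn || echo 'mvn is not installed or not on PATH'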

fail 5286261 2020-08-04 07:00:51 2020-08-04 07:57:01 2020-08-04 10:01:04 2:04:03 0:13:16 1:50:47 smithi master smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap tasks/rgw_swift} 3
Failure Reason:

Command failed on smithi104 with status 1: "SWIFT_TEST_CONFIG_FILE=/home/ubuntu/cephtest/archive/testswift.client.0.conf /home/ubuntu/cephtest/swift/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/swift/test/functional -v -a '!fails_on_rgw'"

pass 5285674 2020-08-04 02:31:04 2020-08-04 02:54:23 2020-08-04 08:50:32 5:56:09 3:02:10 2:53:59 smithi master ubuntu 16.04 upgrade:mimic-x/parallel/{0-cluster/{openstack start} 1-ceph-install/mimic 1.1-pg-log-overrides/normal_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-mon-osd-mds 4-msgr2 4-nautilus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check rgw_swift} objectstore/bluestore-bitmap supported-all-distro/ubuntu_16.04} 4
fail 5285448 2020-08-04 01:19:42 2020-08-04 09:56:22 2020-08-04 15:32:30 5:36:08 0:28:07 5:08:01 smithi master ubuntu 16.04 kcephfs/recovery/{begin clusters/1-mds-4-client conf/{client mds mon osd} kclient/{mount overrides/{distro/random/{k-testing supported$/{ubuntu_16.04}} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{frag_enable log-config osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/strays} 6
Failure Reason:

"2020-08-04 15:18:41.476755 mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi168:0 (6780), after waiting 49.3747 seconds during MDS startup" in cluster log

pass 5285442 2020-08-04 01:19:36 2020-08-04 09:47:13 2020-08-04 11:25:14 1:38:01 0:49:18 0:48:43 smithi master rhel 7.8 kcephfs/mixed-clients/{begin clusters/1-mds-2-client conf/{client mds mon osd} kclient-overrides/{distro/rhel/{k-distro rhel_latest} ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable log-config osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 4
pass 5285439 2020-08-04 01:19:33 2020-08-04 09:43:25 2020-08-04 10:31:26 0:48:01 0:25:48 0:22:13 smithi master rhel 7.8 kcephfs/cephfs/{begin clusters/1-mds-1-client conf/{client mds mon osd} inline/yes kclient/{mount overrides/{distro/rhel/{k-distro rhel_latest} ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{frag_enable log-config osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kclient_workunit_suites_fsx} 3
pass 5285391 2020-08-04 01:15:41 2020-08-04 08:52:58 2020-08-04 09:22:57 0:29:59 0:12:45 0:17:14 smithi master rhel 7.8 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_latest} tasks/openfiletable} 2
fail 5285385 2020-08-04 01:15:34 2020-08-04 08:43:32 2020-08-04 09:05:31 0:21:59 0:05:23 0:16:36 smithi master centos 7.8 fs/verify/{begin centos_latest clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench validater/lockdep} 2
Failure Reason:

Command failed on smithi008 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-cloud ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-ssh ceph-fuse libcephfs2 libcephfs-devel librados2 librbd1 python-ceph rbd-fuse python3-cephfs bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel'

fail 5285312 2020-08-04 01:14:12 2020-08-04 01:50:06 2020-08-04 05:36:11 3:46:05 3:31:36 0:14:29 smithi master centos 7.8 fs/verify/{begin centos_latest clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress validater/valgrind} 2
pass 5285252 2020-08-04 01:12:59 2020-08-04 01:16:21 2020-08-04 01:56:19 0:39:58 0:26:56 0:13:02 smithi master centos 7.8 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_latest} tasks/cfuse_workunit_misc} 2
fail 5285033 2020-08-03 21:37:46 2020-08-04 00:43:21 2020-08-04 01:19:21 0:36:00 0:25:14 0:10:46 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5284967 2020-08-03 21:36:48 2020-08-04 00:12:13 2020-08-04 00:44:13 0:32:00 0:25:16 0:06:44 smithi master centos 8.1 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} 2
pass 5284933 2020-08-03 21:36:19 2020-08-03 23:52:12 2020-08-04 00:12:11 0:19:59 0:13:11 0:06:48 smithi master centos 8.1 rados/singleton/{all/osd-recovery msgr-failures/many msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
pass 5284903 2020-08-03 21:35:53 2020-08-03 23:40:21 2020-08-03 23:54:20 0:13:59 0:07:31 0:06:28 smithi master ubuntu 18.04 rados/objectstore/{backends/fusestore supported-random-distro$/{ubuntu_latest}} 1
fail 5284503 2020-08-03 19:42:15 2020-08-03 19:51:46 2020-08-03 22:37:49 2:46:03 2:19:03 0:27:00 smithi master ubuntu 18.04 upgrade/nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/normal_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-mon-osd-mds 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} objectstore/filestore-xfs ubuntu_18.04} 4
Failure Reason:

"2020-08-03T20:41:52.407600+0000 mds.a (mds.0) 5 : cluster [WRN] evicting unresponsive client smithi078:3 (55310), after 304.909 seconds" in cluster log

fail 5284447 2020-08-03 18:36:40 2020-08-03 18:43:53 2020-08-03 19:23:53 0:40:00 0:13:27 0:26:33 smithi wip-wl-bl ubuntu 18.04 upgrade/nautilus-x/stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{radosbench rbd-cls rbd-import-export rbd_api readwrite rgw_ragweed_prepare snaps-few-objects} 5-finish-upgrade 6-octopus 7-msgr2 8-final-workload/{rbd-python snaps-many-objects} objectstore/filestore-xfs thrashosds-health ubuntu_18.04} 5
Failure Reason:

HTTPSConnectionPool(host='2.chacra.ceph.com', port=443): Max retries exceeded with url: /repos/ceph/wip-35628-2020-07-28/804404f8309168c2db908fa5863e53833a27dd11/ubuntu/bionic/flavors/default/repo (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fca6a5f7898>: Failed to establish a new connection: [Errno 110] Connection timed out',))
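This is an infrastructure failure rather than a test failure: the node timed out connecting to the chacra package repository while fetching the wip-branch packages. A connectivity probe along these lines (a sketch; the URL is copied verbatim from the error) helps distinguish a dead repo host from a network problem on the node:

# Bound the wait with -m and print only the HTTP status code
curl -sS -m 30 -o /dev/null -w '%{http_code}\n' https://2.chacra.ceph.com/repos/ceph/wip-35628-2020-07-28/804404f8309168c2db908fa5863e53833a27dd11/ubuntu/bionic/flavors/default/repo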

dead 5284379 2020-08-03 17:27:38 2020-08-03 18:36:16 2020-08-03 18:54:16 0:18:00 smithi master rhel 7.8 multimds/basic/{0-supported-random-distro$/{centos_latest} begin clusters/9-mds conf/{client mds mon osd} inline/no mount/kclient/{mount overrides/{distro/random/{k-testing supported$/{rhel_latest}} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{basic/{frag_enable whitelist_health whitelist_wrongly_marked_down} fuse-default-perm-no} q_check_counter/check_counter tasks/cephfs_test_exports} 3