Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi051.front.sepia.ceph.com smithi True True 2020-08-11 06:41:05.340904 scheduled_teuthology@teuthology centos 8 x86_64 /home/teuthworker/archive/teuthology-2020-08-11_05:00:03-smoke-master-testing-basic-smithi/5340377
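For reference, lock details like the row above can also be fetched programmatically from the paddles service that backs pulpito. The sketch below is a minimal, hypothetical example: the base URL and the /nodes/<name>/ JSON endpoint shape are assumptions for illustration, not taken from this page.

    # Minimal sketch: query a paddles-style REST endpoint for a node's lock status.
    # The PADDLES_URL value and the response fields are assumptions for illustration only.
    import json
    import urllib.request

    PADDLES_URL = "http://paddles.example.com"  # hypothetical paddles instance

    def node_lock_status(name: str) -> dict:
        """Fetch lock metadata for a test node, e.g. 'smithi051.front.sepia.ceph.com'."""
        with urllib.request.urlopen(f"{PADDLES_URL}/nodes/{name}/") as resp:
            return json.load(resp)

    if __name__ == "__main__":
        info = node_lock_status("smithi051.front.sepia.ceph.com")
        # Fields such as 'locked', 'locked_by', and 'locked_since' would mirror the columns above.
        print(info.get("locked"), info.get("locked_by"), info.get("locked_since"))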
Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
running 5340377 2020-08-11 05:00:51 2020-08-11 06:20:31 2020-08-11 07:38:33 1:19:27 smithi master centos 8.1 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_8} tasks/rados_bench} 3
pass 5340372 2020-08-11 05:00:46 2020-08-11 06:07:11 2020-08-11 06:41:11 0:34:00 0:19:26 0:14:34 smithi master centos 8.1 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_8} tasks/kclient_workunit_suites_fsstress} 3
pass 5340049 2020-08-11 02:30:46 2020-08-11 02:43:56 2020-08-11 06:12:01 3:28:05 3:07:57 0:20:08 smithi master ubuntu 18.04 upgrade:mimic-x/parallel/{0-cluster/{openstack start} 1-ceph-install/mimic 1.1-pg-log-overrides/short_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-mon-osd-mds 4-msgr2 4-nautilus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check rgw_swift} objectstore/filestore-xfs supported-all-distro/ubuntu_latest} 4
dead 5336315 2020-08-10 20:47:47 2020-08-10 22:10:48 2020-08-10 22:22:48 0:12:00 0:03:26 0:08:34 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/fastclose msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache} 2
Failure Reason:

SSH connection to smithi051 was lost: 'sudo yum -y install ceph'

pass 5335368 2020-08-10 20:32:46 2020-08-11 02:17:07 2020-08-11 02:55:07 0:38:00 0:09:31 0:28:29 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/backtrace} 2
pass 5335295 2020-08-10 20:32:03 2020-08-11 01:37:41 2020-08-11 02:19:41 0:42:00 0:08:48 0:33:12 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/libcephfs_python} 2
pass 5335292 2020-08-10 20:32:01 2020-08-10 22:32:09 2020-08-10 23:04:08 0:31:59 0:21:08 0:10:51 smithi master fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
pass 5335273 2020-08-10 20:31:50 2020-08-11 01:21:40 2020-08-11 02:05:40 0:44:00 0:22:22 0:21:38 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/damage} 2
pass 5335269 2020-08-10 20:31:48 2020-08-11 01:21:08 2020-08-11 02:41:09 1:20:01 0:15:49 1:04:12 smithi master rhel 8.1 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_misc} 2
fail 5335043 2020-08-10 20:29:37 2020-08-10 23:47:44 2020-08-11 01:37:46 1:50:02 0:27:31 1:22:31 smithi master rhel 8.1 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-comp-lz4 pool/small-cache-pool supported-random-distro$/{rhel_8} workloads/c_api_tests} 3
Failure Reason:

"2020-08-11T01:28:53.485467+0000 mon.b (mon.0) 787 : cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log

pass 5334935 2020-08-10 20:28:39 2020-08-10 23:16:46 2020-08-11 01:00:48 1:44:02 1:19:02 0:25:00 smithi master centos 8.1 rbd/qemu/{cache/writethrough clusters/{fixed-3 openstack} features/readbalance msgr-failures/few objectstore/filestore-xfs pool/none supported-random-distro$/{centos_8} workloads/qemu_xfstests} 3
pass 5334819 2020-08-10 20:27:37 2020-08-10 22:41:49 2020-08-10 23:37:49 0:56:00 0:27:00 0:29:00 smithi master rhel 8.1 rbd/qemu/{cache/writeback clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-comp-zlib pool/none supported-random-distro$/{rhel_8} workloads/qemu_bonnie} 3
fail 5334353 2020-08-10 20:23:17 2020-08-10 22:19:34 2020-08-10 22:35:34 0:16:00 0:06:56 0:09:04 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

psutil.NoSuchProcess process no longer exists (pid=1160)

pass 5329896 2020-08-10 19:08:02 2020-08-10 19:28:43 2020-08-10 22:10:47 2:42:04 2:01:29 0:40:35 smithi master ubuntu 18.04 upgrade:nautilus-x/stress-split-erasure-code/{0-cluster/{openstack start} 1-nautilus-install/nautilus 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/filestore-xfs 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-octopus 7-final-workload thrashosds-health ubuntu_18.04} 5
pass 5328691 2020-08-10 10:23:17 2020-08-10 17:45:22 2020-08-10 20:03:25 2:18:03 2:07:54 0:10:09 smithi master ubuntu 18.04 multimds/basic/{0-supported-random-distro$/{centos_8.yaml} begin.yaml clusters/9-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml mount/kclient/{mount.yaml overrides/{distro/testing/{flavor/ubuntu_latest.yaml k-testing.yaml} ms-die-on-skipped.yaml}} objectstore-ec/bluestore-ec-root.yaml overrides/{basic/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cfuse_workunit_suites_ffsb.yaml} 3
pass 5328684 2020-08-10 10:23:09 2020-08-10 15:05:04 2020-08-10 17:47:07 2:42:03 2:10:56 0:31:07 smithi master centos 8.1 multimds/basic/{0-supported-random-distro$/{rhel_8.yaml} begin.yaml clusters/3-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml mount/kclient/{mount.yaml overrides/{distro/testing/{flavor/centos_latest.yaml k-testing.yaml} ms-die-on-skipped.yaml}} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{basic/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cfuse_workunit_suites_ffsb.yaml} 3
pass 5328660 2020-08-10 10:22:42 2020-08-10 14:18:48 2020-08-10 15:26:48 1:08:00 0:47:15 0:20:45 smithi master centos 8.1 multimds/basic/{0-supported-random-distro$/{centos_8.yaml} begin.yaml clusters/3-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml mount/kclient/{mount.yaml overrides/{distro/testing/{flavor/centos_latest.yaml k-testing.yaml} ms-die-on-skipped.yaml}} objectstore-ec/bluestore-comp.yaml overrides/{basic/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cfuse_workunit_suites_ffsb.yaml} 3
fail 5328547 2020-08-10 09:21:39 2020-08-10 10:26:39 2020-08-10 11:18:39 0:52:00 smithi master rhel 7.8 multimds/basic/{0-supported-random-distro$/{ubuntu_16.04.yaml} begin.yaml clusters/9-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml mount/kclient/{mount.yaml overrides/{distro/random/{k-testing.yaml supported$/{rhel_latest.yaml}} ms-die-on-skipped.yaml}} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{basic/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cfuse_workunit_suites_ffsb.yaml} 3
Failure Reason:

Command failed on smithi159 with status 4: 'rm -f /tmp/kernel.x86_64.rpm && echo kernel-4.20.0_ceph_gd4c0c9bb2b1d-1.x86_64.rpm | wget -nv -O /tmp/kernel.x86_64.rpm --base=https://5.chacra.ceph.com/r/kernel/testing/d4c0c9bb2b1d2f91769af32d49121018ffe640fe/centos/7/flavors/default/x86_64/ --input-file=-'

pass 5328484 2020-08-10 09:16:34 2020-08-10 10:00:12 2020-08-10 10:28:12 0:28:00 0:22:42 0:05:18 smithi master rhel 7.8 fs/traceless/{begin.yaml clusters/fixed-2-ucephfs.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} mount/fuse.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/cfuse_workunit_suites_blogbench.yaml traceless/50pc.yaml} 2
fail 5328455 2020-08-10 09:16:02 2020-08-10 09:46:54 2020-08-10 10:00:53 0:13:59 0:04:19 0:09:40 smithi master ubuntu 16.04 fs/multifs/{begin.yaml clusters/1a3s-mds-2c-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} mount/fuse.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml mon-debug.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/failover.yaml} 2
Failure Reason:

Command failed on smithi112 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=14.2.10-270-g31dc968-1xenial ceph-mds=14.2.10-270-g31dc968-1xenial ceph-mgr=14.2.10-270-g31dc968-1xenial ceph-common=14.2.10-270-g31dc968-1xenial ceph-fuse=14.2.10-270-g31dc968-1xenial ceph-test=14.2.10-270-g31dc968-1xenial radosgw=14.2.10-270-g31dc968-1xenial python-ceph=14.2.10-270-g31dc968-1xenial libcephfs2=14.2.10-270-g31dc968-1xenial libcephfs-dev=14.2.10-270-g31dc968-1xenial librados2=14.2.10-270-g31dc968-1xenial librbd1=14.2.10-270-g31dc968-1xenial rbd-fuse=14.2.10-270-g31dc968-1xenial python3-cephfs=14.2.10-270-g31dc968-1xenial cephfs-shell=14.2.10-270-g31dc968-1xenial'