Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail | 5161024 | 2020-06-18 16:31:04 | 2020-06-18 16:40:14 | 2020-06-18 17:42:15 | 1:02:01 | 0:05:38 | 0:56:23 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/off distro$/{centos_7.6} msgr-failures/few rados thrashers/default thrashosds-health workloads/snaps-few-objects} | 3
Failure Reason:

Command failed on smithi008 with status 1: 'sudo yum -y install ceph-radosgw'

pass | 5161025 | 2020-06-18 16:31:05 | 2020-06-18 16:40:14 | 2020-06-18 17:42:15 | 1:02:01 | 0:33:03 | 0:28:58 | smithi | master | rhel | 8.1 | rados/monthrash/{ceph clusters/3-mons msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_latest} thrashers/many workloads/rados_mon_workunits} | 2
pass | 5161026 | 2020-06-18 16:31:05 | 2020-06-18 16:41:34 | 2020-06-18 17:15:34 | 0:34:00 | 0:27:24 | 0:06:36 | smithi | master | centos | 8.1 | rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/filestore-xfs supported-random-distro$/{centos_latest} tasks/module_selftest} | 2
fail | 5161027 | 2020-06-18 16:31:06 | 2020-06-18 16:42:09 | 2020-06-18 17:34:09 | 0:52:00 | 0:23:33 | 0:28:27 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/snaps-few-objects} | 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass | 5161028 | 2020-06-18 16:31:07 | 2020-06-18 16:42:28 | 2020-06-18 17:40:29 | 0:58:01 | 0:48:55 | 0:09:06 | smithi | master | centos | 8.1 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none msgr-failures/few msgr/async objectstore/filestore-xfs rados tasks/rados_api_tests validater/valgrind} | 2
pass | 5161029 | 2020-06-18 16:31:08 | 2020-06-18 16:44:22 | 2020-06-18 19:42:26 | 2:58:04 | 2:51:10 | 0:06:54 | smithi | master | rhel | 8.1 | rados/standalone/{supported-random-distro$/{rhel_latest} workloads/osd} | 1
pass | 5161030 | 2020-06-18 16:31:09 | 2020-06-18 16:44:23 | 2020-06-18 17:26:23 | 0:42:00 | 0:21:36 | 0:20:24 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/off distro$/{centos_7.6} msgr-failures/few rados thrashers/default thrashosds-health workloads/test_rbd_api} | 3
pass | 5161031 | 2020-06-18 16:31:10 | 2020-06-18 16:44:23 | 2020-06-18 17:24:23 | 0:40:00 | 0:32:04 | 0:07:56 | smithi | master | rhel | 8.1 | rados/singleton/{all/thrash_cache_writeback_proxy_none msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_latest}} | 2
fail | 5161032 | 2020-06-18 16:31:10 | 2020-06-18 16:44:22 | 2020-06-18 22:18:30 | 5:34:08 | 5:18:58 | 0:15:10 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/objectstore supported-random-distro$/{ubuntu_latest}} | 1
Failure Reason:

Command failed on smithi153 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\''
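For quick triage of a run listing like the one above, the pass/fail rows can be tallied with a short script. This is a minimal sketch, not part of teuthology's own tooling: the regex, the `summarize` helper, and the sample rows are illustrative assumptions, keying only on the fact that each job row begins with its status followed by the numeric job ID.

```python
import re

# Hypothetical helper: tally job outcomes from plain-text teuthology rows.
# A row is assumed to start with a status word ("pass", "fail", or "dead"),
# optionally pipe-delimited, followed by the numeric job ID.
ROW = re.compile(r"^(pass|fail|dead)\s*\|?\s*(\d+)")

def summarize(lines):
    """Return {status: [job_id, ...]} for lines that look like job rows."""
    out = {}
    for line in lines:
        m = ROW.match(line.strip())
        if m:
            out.setdefault(m.group(1), []).append(m.group(2))
    return out

# Sample rows (truncated copies of entries from the listing above).
listing = [
    "fail 5161024 2020-06-18 16:31:04 ...",
    "pass 5161025 2020-06-18 16:31:05 ...",
    "Failure Reason:",
]
print(summarize(listing))  # {'fail': ['5161024'], 'pass': ['5161025']}
```

Non-row lines such as "Failure Reason:" simply fail the match and are skipped, so the script can be pointed at the raw listing without any pre-cleaning.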