Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 5189853 2020-06-29 16:59:31 2020-06-29 17:00:37 2020-06-29 17:38:37 0:38:00 0:23:32 0:14:28 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
Failure Reason: reached maximum tries (180) after waiting for 180 seconds

fail 5189854 2020-06-29 16:59:32 2020-06-29 17:02:19 2020-06-29 17:20:19 0:18:00 0:06:54 0:11:06 smithi master ubuntu 18.04 rados/upgrade/mimic-x-singleton/{0-cluster/{openstack start} 1-install/mimic 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} 4
Failure Reason: Command failed on smithi095 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early osd dump --format=json'

fail 5189855 2020-06-29 16:59:33 2020-06-29 17:02:22 2020-06-29 19:38:26 2:36:04 0:23:07 2:12:57 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/rbd_cls} 3
Failure Reason: reached maximum tries (180) after waiting for 180 seconds

fail 5189856 2020-06-29 16:59:34 2020-06-29 17:02:23 2020-06-29 19:00:25 1:58:02 0:22:22 1:35:40 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/hammer backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
Failure Reason: reached maximum tries (180) after waiting for 180 seconds

pass 5189857 2020-06-29 16:59:35 2020-06-29 17:02:29 2020-06-29 17:36:29 0:34:00 0:20:33 0:13:27 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/jewel-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/off distro$/{centos_7.6} msgr-failures/few rados thrashers/careful thrashosds-health workloads/rbd_cls} 3
pass 5189858 2020-06-29 16:59:35 2020-06-29 17:04:50 2020-06-29 17:28:50 0:24:00 0:15:23 0:08:37 smithi master ubuntu 18.04 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/repair_test} 2
pass 5189859 2020-06-29 16:59:36 2020-06-29 17:04:50 2020-06-29 17:48:50 0:44:00 0:30:07 0:13:53 smithi master rhel 8.1 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 5189860 2020-06-29 16:59:37 2020-06-29 17:05:27 2020-06-29 17:19:27 0:14:00 0:07:33 0:06:27 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-low-osd-mem-target openstack settings/optimized ubuntu_latest workloads/cosbench_64K_read_write} 1
Failure Reason: Command failed on smithi031 with status 1: 'cd /home/ubuntu/cephtest/cos && chmod +x *.sh'

fail 5189861 2020-06-29 16:59:38 2020-06-29 17:06:34 2020-06-29 17:20:33 0:13:59 0:07:27 0:06:32 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-stupid openstack settings/optimized ubuntu_latest workloads/cosbench_64K_write} 1
Failure Reason: Command failed on smithi047 with status 1: 'cd /home/ubuntu/cephtest/cos && chmod +x *.sh'

fail 5189862 2020-06-29 16:59:39 2020-06-29 17:06:34 2020-06-29 19:30:37 2:24:03 0:22:42 2:01:21 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/off distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/test_rbd_api} 3
Failure Reason: reached maximum tries (180) after waiting for 180 seconds

pass 5189863 2020-06-29 16:59:40 2020-06-29 17:08:01 2020-06-29 18:50:03 1:42:02 1:36:19 0:05:43 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/rados_cls_all validater/valgrind} 2
fail 5189864 2020-06-29 16:59:41 2020-06-29 17:08:14 2020-06-29 22:36:22 5:28:08 5:15:26 0:12:42 smithi master rhel 8.1 rados/objectstore/{backends/objectstore supported-random-distro$/{rhel_latest}} 1
Failure Reason: Command failed on smithi028 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\''