Status  Job ID   Posted               Started              Updated              Runtime   Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
dead 5176017 2020-06-24 15:26:52 2020-06-24 17:46:38 2020-06-25 05:49:09 12:02:31 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-start-upgrade 3-wait distro$/{ubuntu_latest} fixed-2} 2
pass 5176018 2020-06-24 15:26:53 2020-06-24 17:46:38 2020-06-24 18:06:38 0:20:00 0:12:59 0:07:01 smithi master ubuntu 18.04 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} 2
fail 5176019 2020-06-24 15:26:54 2020-06-24 17:46:53 2020-06-24 18:50:54 1:04:01 0:56:23 0:07:38 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

"2020-06-24T18:46:02.104873+0000 mon.a (mon.0) 220 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 5176020 2020-06-24 15:26:54 2020-06-24 17:47:03 2020-06-24 18:07:02 0:19:59 0:05:40 0:14:19 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/off distro$/{centos_7.6} msgr-failures/few rados thrashers/default thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi116 with status 1: 'sudo yum -y install ceph-radosgw'

fail 5176021 2020-06-24 15:26:55 2020-06-24 17:48:35 2020-06-24 18:06:34 0:17:59 0:05:39 0:12:20 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi089 with status 1: 'sudo yum -y install ceph-radosgw'

fail 5176022 2020-06-24 15:26:56 2020-06-24 17:48:36 2020-06-24 18:36:36 0:48:00 0:40:18 0:07:42 smithi master centos 8.1 rados/monthrash/{ceph clusters/9-mons msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest} thrashers/sync-many workloads/rados_5925} 2
Failure Reason:

Scrubbing terminated -- not all pgs were active and clean.

fail 5176023 2020-06-24 15:26:57 2020-06-24 17:48:39 2020-06-24 18:32:39 0:44:00 0:30:00 0:14:00 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/off distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi203 with status 125: 'sudo docker kill -s 1 ceph-c0bfe5fc-b645-11ea-a06d-001a4aab830c-osd.3'

fail 5176024 2020-06-24 15:26:58 2020-06-24 17:48:58 2020-06-24 18:32:59 0:44:01 0:36:35 0:07:26 smithi master ubuntu 18.04 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_mix} 2
Failure Reason:

"2020-06-24T18:00:44.877225+0000 mon.a (mon.0) 124 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 5176025 2020-06-24 15:26:58 2020-06-24 17:49:10 2020-06-24 18:03:10 0:14:00 0:08:25 0:05:35 smithi master centos 8.1 rados/singleton-nomsgr/{all/health-warnings rados supported-random-distro$/{centos_latest}} 1
pass 5176026 2020-06-24 15:26:59 2020-06-24 17:50:52 2020-06-24 18:58:53 1:08:01 1:01:26 0:06:35 smithi master centos 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zstd supported-random-distro$/{centos_latest} tasks/dashboard} 2
fail 5176027 2020-06-24 15:27:00 2020-06-24 17:50:52 2020-06-24 18:26:52 0:36:00 0:23:23 0:12:37 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/few rados thrashers/none thrashosds-health workloads/radosbench} 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5176028 2020-06-24 15:27:01 2020-06-24 17:50:52 2020-06-24 18:12:52 0:22:00 0:15:34 0:06:26 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} d-balancer/off msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_latest} thrashers/none thrashosds-health workloads/redirect_set_object} 2
dead 5176029 2020-06-24 15:27:02 2020-06-24 17:51:10 2020-06-25 05:53:37 12:02:27 smithi master rhel 7.7 rados/cephadm/upgrade/{1-start 2-start-upgrade 3-wait distro$/{rhel_7} fixed-2} 2
pass 5176030 2020-06-24 15:27:03 2020-06-24 17:52:42 2020-06-24 18:12:42 0:20:00 0:11:07 0:08:53 smithi master ubuntu 18.04 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/readwrite} 2
pass 5176031 2020-06-24 15:27:04 2020-06-24 17:52:42 2020-06-24 18:24:42 0:32:00 0:21:01 0:10:59 smithi master ubuntu 18.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 5176032 2020-06-24 15:27:04 2020-06-24 17:52:46 2020-06-24 18:32:46 0:40:00 0:27:47 0:12:13 smithi master ubuntu 18.04 rados/monthrash/{ceph clusters/3-mons msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/many workloads/rados_mon_workunits} 2
fail 5176033 2020-06-24 15:27:05 2020-06-24 17:52:49 2020-06-24 18:06:48 0:13:59 0:06:56 0:07:03 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-low-osd-mem-target openstack settings/optimized ubuntu_latest workloads/cosbench_64K_read_write} 1
Failure Reason:

Command failed on smithi070 with status 1: 'cd /home/ubuntu/cephtest/cos && chmod +x *.sh'

fail 5176034 2020-06-24 15:27:06 2020-06-24 17:55:05 2020-06-24 18:09:04 0:13:59 0:06:43 0:07:16 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-stupid openstack settings/optimized ubuntu_latest workloads/cosbench_64K_write} 1
Failure Reason:

Command failed on smithi134 with status 1: 'cd /home/ubuntu/cephtest/cos && chmod +x *.sh'

pass 5176035 2020-06-24 15:27:07 2020-06-24 17:56:50 2020-06-24 18:22:50 0:26:00 0:10:17 0:15:43 smithi master centos 8.1 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 5176036 2020-06-24 15:27:08 2020-06-24 17:58:51 2020-06-24 18:30:51 0:32:00 0:25:42 0:06:18 smithi master ubuntu 18.04 rados/singleton/{all/thrash_cache_writeback_proxy_none msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} 2
pass 5176037 2020-06-24 15:27:09 2020-06-24 17:58:51 2020-06-24 19:24:53 1:26:02 1:12:41 0:13:21 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/valgrind} 2
fail 5176038 2020-06-24 15:27:09 2020-06-24 17:58:52 2020-06-24 23:13:00 5:14:08 5:01:54 0:12:14 smithi master rhel 8.1 rados/objectstore/{backends/objectstore supported-random-distro$/{rhel_latest}} 1
Failure Reason:

Command failed on smithi192 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\''