Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 5138946 2020-06-11 19:56:51 2020-06-11 20:21:56 2020-06-11 20:55:56 0:34:00 0:23:19 0:10:41 smithi py2 centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5138947 2020-06-11 19:56:52 2020-06-11 20:21:56 2020-06-11 20:41:56 0:20:00 0:09:13 0:10:47 smithi py2 ubuntu 18.04 rados/singleton-bluestore/{all/cephtool msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi110 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9d5f3ae9353e05efe95756fd70e2442e45e01a66 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 5138948 2020-06-11 19:56:53 2020-06-11 20:21:56 2020-06-11 21:01:56 0:40:00 0:24:03 0:15:57 smithi py2 centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/cache-snaps} 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5138949 2020-06-11 19:56:54 2020-06-11 20:21:56 2020-06-11 20:37:55 0:15:59 0:08:36 0:07:23 smithi py2 centos 8.1 rados/singleton-bluestore/{all/cephtool msgr-failures/many msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi192 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9d5f3ae9353e05efe95756fd70e2442e45e01a66 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 5138950 2020-06-11 19:56:55 2020-06-11 20:21:56 2020-06-11 21:03:56 0:42:00 0:22:33 0:19:27 smithi py2 centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/jewel-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/off distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5138951 2020-06-11 19:56:56 2020-06-11 20:23:40 2020-06-11 21:17:40 0:54:00 0:21:01 0:32:59 smithi py2 centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/jewel backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
fail 5138952 2020-06-11 19:56:57 2020-06-11 20:23:40 2020-06-11 20:39:40 0:16:00 0:08:53 0:07:07 smithi py2 centos 8.1 rados/singleton-bluestore/{all/cephtool msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi153 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9d5f3ae9353e05efe95756fd70e2442e45e01a66 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 5138953 2020-06-11 19:56:58 2020-06-11 20:23:47 2020-06-11 21:09:47 0:46:00 0:27:59 0:18:01 smithi py2 centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/off distro$/{centos_7.6} msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/cache-snaps} 3
pass 5138954 2020-06-11 19:56:59 2020-06-11 20:23:47 2020-06-11 21:01:47 0:38:00 0:24:16 0:13:44 smithi py2 ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/filestore-xfs supported-random-distro$/{ubuntu_latest} tasks/module_selftest} 2
fail 5138955 2020-06-11 19:57:00 2020-06-11 20:23:48 2020-06-11 20:47:48 0:24:00 0:08:28 0:15:32 smithi py2 ubuntu 18.04 rados/singleton-bluestore/{all/cephtool msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi186 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9d5f3ae9353e05efe95756fd70e2442e45e01a66 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 5138956 2020-06-11 19:57:01 2020-06-11 20:23:51 2020-06-11 21:15:51 0:52:00 0:20:49 0:31:11 smithi py2 centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/off distro$/{centos_7.6} msgr-failures/few rados thrashers/careful thrashosds-health workloads/rbd_cls} 3
fail 5138957 2020-06-11 19:57:02 2020-06-11 20:25:28 2020-06-11 21:07:29 0:42:01 0:23:02 0:18:59 smithi py2 centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5138958 2020-06-11 19:57:04 2020-06-11 20:25:39 2020-06-12 02:05:48 5:40:09 5:26:09 0:14:00 smithi py2 rhel 8.1 rados/objectstore/{backends/objectstore supported-random-distro$/{rhel_latest}} 1
Failure Reason:

Command failed on smithi022 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\''