Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 5202894 2020-07-06 11:40:01 2020-07-06 11:40:41 2020-07-06 12:06:41 0:26:00 0:19:54 0:06:06 smithi master ubuntu 18.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/osd-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
pass 5202895 2020-07-06 11:40:02 2020-07-06 11:40:42 2020-07-06 12:00:41 0:19:59 0:10:52 0:09:07 smithi master centos 8.1 rados/cephadm/orchestrator_cli/{2-node-mgr orchestrator_cli supported-random-distro$/{centos_8}} 2
fail 5202896 2020-07-06 11:40:03 2020-07-06 11:40:42 2020-07-06 18:26:51 6:46:09 6:37:12 0:08:57 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi192 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7b60e408aedc30fb1b71a2c6e541618527d6e6d3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 5202897 2020-07-06 11:40:04 2020-07-06 11:40:46 2020-07-06 12:18:46 0:38:00 0:24:26 0:13:34 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_7.6} msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5202898 2020-07-06 11:40:04 2020-07-06 11:40:51 2020-07-06 11:56:50 0:15:59 0:08:52 0:07:07 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
pass 5202899 2020-07-06 11:40:05 2020-07-06 11:42:38 2020-07-06 12:00:38 0:18:00 0:10:41 0:07:19 smithi master centos 8.1 rados/cephadm/orchestrator_cli/{2-node-mgr orchestrator_cli supported-random-distro$/{centos_8}} 2
fail 5202900 2020-07-06 11:40:06 2020-07-06 11:42:38 2020-07-06 18:28:48 6:46:10 6:37:58 0:08:12 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi023 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7b60e408aedc30fb1b71a2c6e541618527d6e6d3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 5202901 2020-07-06 11:40:07 2020-07-06 11:42:38 2020-07-06 12:40:39 0:58:01 0:38:02 0:19:59 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
pass 5202902 2020-07-06 11:40:08 2020-07-06 11:42:38 2020-07-06 12:22:38 0:40:00 0:21:48 0:18:12 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/few rados thrashers/default thrashosds-health workloads/test_rbd_api} 3
pass 5202903 2020-07-06 11:40:09 2020-07-06 11:42:38 2020-07-06 11:58:38 0:16:00 0:08:47 0:07:13 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
pass 5202904 2020-07-06 11:40:10 2020-07-06 11:42:39 2020-07-06 12:34:39 0:52:00 0:32:38 0:19:22 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_7.6} msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/cache-snaps} 3
fail 5202905 2020-07-06 11:40:10 2020-07-06 11:42:39 2020-07-06 12:28:39 0:46:00 0:25:30 0:20:30 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/radosbench} 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds