Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 4108991 2019-07-10 18:50:47 2019-07-10 18:54:27 2019-07-10 19:10:26 0:15:59 0:08:51 0:07:08 smithi master ubuntu 18.04 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi195 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3c05d0d98a86c0fc72784556bc6455544409c838 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

dead 4108992 2019-07-10 18:50:48 2019-07-10 18:54:30 2019-07-11 06:58:59 12:04:29 11:36:30 0:27:59 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml}
Failure Reason:

psutil.NoSuchProcess process no longer exists (pid=5768)

pass 4108993 2019-07-10 18:50:48 2019-07-10 18:54:53 2019-07-10 19:18:52 0:23:59 0:18:51 0:05:08 smithi master centos 7.6 rados/singleton/{all/rebuild-mondb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
fail 4108994 2019-07-10 18:50:49 2019-07-10 18:56:37 2019-07-10 19:34:37 0:38:00 0:21:33 0:16:27 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} 2
Failure Reason:

not clean before minsize thrashing starts

fail 4108995 2019-07-10 18:50:50 2019-07-10 18:56:38 2019-07-10 19:22:37 0:25:59 0:14:59 0:11:00 smithi master centos 7.6 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi049 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3c05d0d98a86c0fc72784556bc6455544409c838 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 4108996 2019-07-10 18:50:51 2019-07-10 18:58:54 2019-07-10 19:22:53 0:23:59 0:14:15 0:09:44 smithi master rhel 7.6 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi094 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3c05d0d98a86c0fc72784556bc6455544409c838 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

dead 4108997 2019-07-10 18:50:52 2019-07-10 19:02:48 2019-07-11 07:05:15 12:02:27 10:53:30 1:08:57 smithi master centos 7.6 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} thrashosds-health.yaml} 4
Failure Reason:

failed to recover before timeout expired

pass 4108998 2019-07-10 18:50:52 2019-07-10 19:02:48 2019-07-10 19:36:47 0:33:59 0:24:47 0:09:12 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
fail 4108999 2019-07-10 18:50:53 2019-07-10 19:04:52 2019-07-10 19:18:51 0:13:59 0:08:01 0:05:58 smithi master ubuntu 18.04 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi042 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3c05d0d98a86c0fc72784556bc6455544409c838 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 4109000 2019-07-10 18:50:54 2019-07-10 19:06:35 2019-07-10 19:38:35 0:32:00 0:23:19 0:08:41 smithi master ubuntu 18.04 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
pass 4109001 2019-07-10 18:50:54 2019-07-10 19:06:44 2019-07-10 20:02:44 0:56:00 0:50:38 0:05:22 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
fail 4109002 2019-07-10 18:50:55 2019-07-10 19:06:45 2019-07-10 20:12:45 1:06:00 0:58:36 0:07:24 smithi master centos 7.6 rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/scrub.yaml} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-snaps.sh) on smithi103 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3c05d0d98a86c0fc72784556bc6455544409c838 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-snaps.sh'
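
Note: the cephtool/test.sh and osd-scrub-snaps.sh failures above log the full workunit command line. Below is a minimal sketch of re-running one of them by hand on the affected smithi node; it only restructures the logged command (environment values copied verbatim from the log) and assumes the node is still provisioned and the /home/ubuntu/cephtest directory from this run, including clone.client.0, is intact -- on a reimaged node this will not apply.

    # Hedged sketch: manual re-run of the failing cephtool workunit,
    # rebuilt from the command teuthology logged above.
    TESTDIR=/home/ubuntu/cephtest
    CLONE=$TESTDIR/clone.client.0

    mkdir -p -- $TESTDIR/mnt.0/client.0/tmp
    cd -- $TESTDIR/mnt.0/client.0/tmp

    # Same environment the harness exported (values copied from the log):
    export CEPH_CLI_TEST_DUP_COMMAND=1
    export CEPH_REF=3c05d0d98a86c0fc72784556bc6455544409c838
    export TESTDIR CEPH_ARGS="--cluster ceph" CEPH_ID=0
    export PATH=$PATH:/usr/sbin
    export CEPH_BASE=$CLONE CEPH_ROOT=$CLONE

    # Re-run the workunit directly, without the adjust-ulimits and
    # ceph-coverage wrappers the harness adds, to watch the failure live:
    timeout 3h $CLONE/qa/workunits/cephtool/test.sh

The same pattern applies to the standalone scrub failure; swap the last line for qa/standalone/scrub/osd-scrub-snaps.sh as in its logged command.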