Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 4308920 2019-09-15 06:16:05 2019-09-15 06:16:27 2019-09-15 06:50:26 0:33:59 0:20:44 0:13:15 mira master centos 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_7.yaml} tasks/orchestrator_cli.yaml} 2
Failure Reason:

"2019-09-15T06:46:15.203478+0000 mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client mira027:x (4708), after 303.296 seconds" in cluster log

dead 4308922 2019-09-15 06:16:05 2019-09-15 06:18:27 2019-09-15 18:20:53 12:02:26 mira master centos 7.6 rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} thrashosds-health.yaml} 4
fail 4308924 2019-09-15 06:16:06 2019-09-15 06:19:17 2019-09-15 06:51:16 0:31:59 0:19:55 0:12:04 mira master centos 7.6 rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/erasure-code.yaml} 1
Failure Reason:

Command failed (workunit test erasure-code/test-erasure-code.sh) on mira046 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=beecf28e6cd5c688b51b703951500f02d728d590 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-code.sh'

fail 4308926 2019-09-15 06:16:07 2019-09-15 06:22:35 2019-09-15 09:06:36 2:44:01 2:25:16 0:18:45 mira master rhel 7.6 rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_omap_write.yaml} 1
Failure Reason:

"2019-09-15T08:58:52.961253+0000 mon.a (mon.0) 138 : cluster [WRN] Health check failed: 4 slow ops, oldest one blocked for 78 sec, daemons [osd,0,osd,2] have slow ops. (SLOW_OPS)" in cluster log

dead 4308928 2019-09-15 06:16:08 2019-09-15 06:24:18 2019-09-15 18:26:43 12:02:25 mira master ubuntu 18.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} 2
fail 4308930 2019-09-15 06:16:09 2019-09-15 06:26:57 2019-09-15 06:50:56 0:23:59 0:14:48 0:09:11 mira master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_read_write.yaml} 1
Failure Reason:

Command failed on mira032 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

pass 4308932 2019-09-15 06:16:09 2019-09-15 06:30:27 2019-09-15 07:34:26 1:03:59 0:42:01 0:21:58 mira master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} 2
fail 4308934 2019-09-15 06:16:10 2019-09-15 06:34:39 2019-09-15 07:00:38 0:25:59 0:16:40 0:09:19 mira master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_write.yaml} 1
Failure Reason:

Command failed on mira117 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

pass 4308937 2019-09-15 06:16:11 2019-09-15 06:36:41 2019-09-15 07:24:41 0:48:00 0:36:45 0:11:15 mira master ubuntu 18.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} 2
fail 4308939 2019-09-15 06:16:12 2019-09-15 06:50:46 2019-09-15 07:26:46 0:36:00 0:25:51 0:10:09 mira master rhel 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_7.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_cephfs_get (tasks.mgr.dashboard.test_cephfs.CephfsTest)

fail 4308941 2019-09-15 06:16:13 2019-09-15 06:50:58 2019-09-15 07:56:58 1:06:00 0:56:04 0:09:56 mira master centos 7.6 rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/mon.yaml} 1
Failure Reason:

Command failed (workunit test mon/mon-osdmap-prune.sh) on mira027 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=beecf28e6cd5c688b51b703951500f02d728d590 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-osdmap-prune.sh'

dead 4308943 2019-09-15 06:16:14 2019-09-15 06:51:18 2019-09-15 18:53:40 12:02:22 mira master ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashosds-health.yaml} 4
pass 4308945 2019-09-15 06:16:14 2019-09-15 07:00:56 2019-09-15 07:52:55 0:51:59 0:43:19 0:08:40 mira master rhel 7.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} 2
dead 4308947 2019-09-15 06:16:15 2019-09-15 07:24:45 2019-09-15 19:27:06 12:02:21 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} 4
fail 4308949 2019-09-15 06:16:16 2019-09-15 07:26:47 2019-09-15 13:20:52 5:54:05 5:42:26 0:11:39 mira master centos 7.6 rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/osd.yaml} 1
Failure Reason:

Command failed (workunit test osd/divergent-priors.sh) on mira032 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=beecf28e6cd5c688b51b703951500f02d728d590 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/divergent-priors.sh'

pass 4308951 2019-09-15 06:16:17 2019-09-15 07:34:28 2019-09-15 08:22:28 0:48:00 0:39:02 0:08:58 mira master rhel 7.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} 2
fail 4308953 2019-09-15 06:16:17 2019-09-15 07:52:58 2019-09-15 09:14:58 1:22:00 1:07:31 0:14:29 mira master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

reached maximum tries (50) after waiting for 300 seconds

fail 4308955 2019-09-15 06:16:18 2019-09-15 07:57:12 2019-09-15 08:31:11 0:33:59 0:23:01 0:10:58 mira master centos 7.6 rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/scrub.yaml} 1
Failure Reason:

Command failed (workunit test scrub/osd-recovery-scrub.sh) on mira027 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=beecf28e6cd5c688b51b703951500f02d728d590 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-recovery-scrub.sh'