Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 4301788 2019-09-13 04:20:05 2019-09-13 04:20:07 2019-09-13 05:06:07 0:46:00 0:38:11 0:07:49 mira master rhel 7.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} 2
pass 4301789 2019-09-13 04:20:06 2019-09-13 04:20:07 2019-09-13 05:42:08 1:22:01 0:55:21 0:26:40 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench.yaml} 4
pass 4301790 2019-09-13 04:20:06 2019-09-13 04:20:08 2019-09-13 04:58:07 0:37:59 0:17:07 0:20:52 mira master centos 7.6 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_4K_rand_read.yaml} 1
pass 4301791 2019-09-13 04:20:07 2019-09-13 04:20:09 2019-09-13 04:50:08 0:29:59 0:24:19 0:05:40 mira master rhel 7.6 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_4K_seq_read.yaml} 1
fail 4301792 2019-09-13 04:20:08 2019-09-13 04:20:10 2019-09-13 04:54:09 0:33:59 0:25:37 0:08:22 mira master rhel 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_7.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_cephfs_get (tasks.mgr.dashboard.test_cephfs.CephfsTest)

pass 4301793 2019-09-13 04:20:09 2019-09-13 04:20:11 2019-09-13 05:00:10 0:39:59 0:23:09 0:16:50 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} 4
pass 4301794 2019-09-13 04:20:10 2019-09-13 04:20:11 2019-09-13 04:50:11 0:30:00 0:24:03 0:05:57 mira master rhel 7.6 rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_4M_rand_read.yaml} 1
pass 4301795 2019-09-13 04:20:11 2019-09-13 04:20:12 2019-09-13 07:42:15 3:22:03 3:05:30 0:16:33 mira master centos 7.6 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} thrashosds-health.yaml} 4
pass 4301796 2019-09-13 04:20:12 2019-09-13 04:20:13 2019-09-13 05:26:13 1:06:00 0:40:55 0:25:05 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 4
pass 4301797 2019-09-13 04:20:13 2019-09-13 04:20:14 2019-09-13 07:40:16 3:20:02 2:33:53 0:46:09 mira master rhel 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
pass 4301798 2019-09-13 04:20:13 2019-09-13 04:50:11 2019-09-13 05:18:10 0:27:59 0:13:31 0:14:28 mira master ubuntu 18.04 rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/force-sync-many.yaml workloads/pool-create-delete.yaml} 2
dead 4301799 2019-09-13 04:20:14 2019-09-13 04:50:12 2019-09-13 16:52:38 12:02:26 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 4
pass 4301800 2019-09-13 04:20:15 2019-09-13 04:54:24 2019-09-13 05:22:24 0:28:00 0:19:22 0:08:38 mira master ubuntu 18.04 rados/singleton-nomsgr/{all/msgr.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
fail 4301801 2019-09-13 04:20:16 2019-09-13 04:58:09 2019-09-13 06:40:10 1:42:01 1:33:39 0:08:22 mira master rhel 7.6 rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/osd.yaml} 1
Failure Reason:

Command failed (workunit test osd/osd-bluefs-volume-ops.sh) on mira038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=aeeefb50c08911ac144f76a1f57e6ee511c041bb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-bluefs-volume-ops.sh'

fail 4301802 2019-09-13 04:20:17 2019-09-13 05:00:12 2019-09-13 05:36:11 0:35:59 0:19:02 0:16:57 mira master centos 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_7.yaml} tasks/orchestrator_cli.yaml} 2
Failure Reason:

"2019-09-13T05:33:18.531703+0000 mds.c (mds.0) 1 : cluster [WRN] evicting unresponsive client mira110:x (4633), after 303.091 seconds" in cluster log

fail 4301803 2019-09-13 04:20:17 2019-09-13 05:06:22 2019-09-13 05:40:21 0:33:59 0:14:41 0:19:18 mira master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_read_write.yaml} 1
Failure Reason:

Command failed on mira064 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

fail 4301804 2019-09-13 04:20:18 2019-09-13 05:18:13 2019-09-13 06:38:13 1:20:00 0:54:30 0:25:30 mira master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

"2019-09-13T06:16:35.426264+0000 mon.b (mon.0) 1484 : cluster [WRN] Health check failed: Long heartbeat ping times on back interface seen, longest is 2882.708 msec (OSD_SLOW_PING_TIME_BACK)" in cluster log

fail 4301805 2019-09-13 04:20:19 2019-09-13 05:22:39 2019-09-13 06:00:38 0:37:59 0:24:07 0:13:52 mira master centos 7.6 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_write.yaml} 1
Failure Reason:

Command failed on mira072 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

fail 4301806 2019-09-13 04:20:20 2019-09-13 05:26:15 2019-09-13 08:16:17 2:50:02 2:31:21 0:18:41 mira master rhel 7.6 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} 2
Failure Reason:

SSH connection to mira085 was lost: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op rmattr 25 --op append 100 --op delete 50 --pool unique_pool_0'

dead 4301807 2019-09-13 04:20:21 2019-09-13 05:36:13 2019-09-13 17:38:34 12:02:21 mira master ubuntu 18.04 rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
fail 4301808 2019-09-13 04:20:22 2019-09-13 05:40:23 2019-09-13 06:34:23 0:54:00 0:40:25 0:13:35 mira master centos 7.6 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:

failed to complete snap trimming before timeout

fail 4301809 2019-09-13 04:20:22 2019-09-13 05:42:24 2019-09-13 06:04:23 0:21:59 0:11:42 0:10:17 mira master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_cephfs_get (tasks.mgr.dashboard.test_cephfs.CephfsTest)