Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
dead 4348449 2019-09-30 17:02:57 2019-09-30 17:16:03 2019-10-01 05:18:31 12:02:28 smithi master ubuntu 18.04 rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashosds-health.yaml} 4
fail 4348450 2019-09-30 17:02:58 2019-09-30 17:16:03 2019-09-30 17:44:02 0:27:59 0:16:42 0:11:17 smithi master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_write.yaml} 1
Failure Reason:

Command failed on smithi180 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

pass 4348451 2019-09-30 17:02:59 2019-09-30 17:17:47 2019-09-30 17:39:47 0:22:00 0:11:09 0:10:51 smithi master centos 7.6 rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
fail 4348452 2019-09-30 17:03:00 2019-09-30 17:17:48 2019-09-30 17:51:47 0:33:59 0:25:05 0:08:54 smithi master centos 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_perf_counters_mds_get (tasks.mgr.dashboard.test_perf_counters.PerfCountersControllerTest)

pass 4348453 2019-09-30 17:03:01 2019-09-30 17:17:48 2019-09-30 18:31:48 1:14:00 1:00:41 0:13:19 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} 4
fail 4348454 2019-09-30 17:03:02 2019-09-30 17:19:44 2019-09-30 17:43:43 0:23:59 0:17:28 0:06:31 smithi master rhel 7.6 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} 2
Failure Reason:

Command failed on smithi023 with status 6: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph -- tell 'mon.*' injectargs --mon_osd_down_out_interval=0"

pass 4348455 2019-09-30 17:03:03 2019-09-30 17:19:44 2019-09-30 18:27:44 1:08:00 0:58:09 0:09:51 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} 2
pass 4348456 2019-09-30 17:03:04 2019-09-30 17:19:58 2019-09-30 20:44:01 3:24:03 3:15:28 0:08:35 smithi master rhel 7.6 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} thrashosds-health.yaml} 4
pass 4348457 2019-09-30 17:03:05 2019-09-30 17:20:32 2019-09-30 17:58:32 0:38:00 0:31:56 0:06:04 smithi master rhel 7.6 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_workunit_loadgen_mostlyread.yaml} 2
pass 4348458 2019-09-30 17:03:06 2019-09-30 17:21:19 2019-09-30 17:55:18 0:33:59 0:26:50 0:07:09 smithi master rhel 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
fail 4348459 2019-09-30 17:03:07 2019-09-30 17:21:54 2019-09-30 19:15:55 1:54:01 1:44:38 0:09:23 smithi master centos 7.6 rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/osd.yaml} 1
Failure Reason:

Command failed (workunit test osd/osd-bluefs-volume-ops.sh) on smithi144 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a2ea9c913f59e2c5615ea0313e506180d28b2f4e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-bluefs-volume-ops.sh'

fail 4348460 2019-09-30 17:03:08 2019-09-30 17:21:59 2019-09-30 17:47:59 0:26:00 0:15:43 0:10:17 smithi master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_read_write.yaml} 1
Failure Reason:

Command failed on smithi186 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

fail 4348461 2019-09-30 17:03:09 2019-09-30 17:23:10 2019-09-30 17:53:09 0:29:59 0:21:21 0:08:38 smithi master centos 7.6 rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_write.yaml} 1
Failure Reason:

Command failed on smithi040 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

fail 4348462 2019-09-30 17:03:10 2019-09-30 17:23:23 2019-09-30 17:59:22 0:35:59 0:24:41 0:11:18 smithi master centos 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_perf_counters_mds_get (tasks.mgr.dashboard.test_perf_counters.PerfCountersControllerTest)

dead 4348463 2019-09-30 17:03:11 2019-09-30 17:23:50 2019-10-01 05:26:16 12:02:26 smithi master centos 7.6 rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{centos_7.yaml}} 1