Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 4339477 2019-09-27 16:40:47 2019-09-27 16:41:52 2019-09-27 17:11:51 0:29:59 0:15:38 0:14:21 smithi master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_read_write.yaml} 1
Failure Reason:

Command failed on smithi200 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

dead 4339478 2019-09-27 16:40:48 2019-09-27 16:41:52 2019-09-28 04:44:18 12:02:26 2:19:28 9:42:58 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 4
Failure Reason:

psutil.NoSuchProcess process no longer exists (pid=12158)

dead 4339479 2019-09-27 16:40:49 2019-09-27 16:41:57 2019-09-28 04:44:28 12:02:31 10:20:10 1:42:21 smithi master ubuntu 18.04 rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashosds-health.yaml} 4
Failure Reason:

psutil.NoSuchProcess process no longer exists (pid=18441)

fail 4339480 2019-09-27 16:40:50 2019-09-27 16:42:02 2019-09-27 17:16:01 0:33:59 0:23:03 0:10:56 smithi master centos 7.6 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_write.yaml} 1
Failure Reason:

Command failed on smithi038 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

fail 4339481 2019-09-27 16:40:51 2019-09-27 16:42:05 2019-09-27 17:14:05 0:32:00 0:18:02 0:13:58 smithi master rhel 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_7.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_cephfs_get (tasks.mgr.dashboard.test_cephfs.CephfsTest)

dead 4339482 2019-09-27 16:40:52 2019-09-27 16:42:17 2019-09-28 04:44:34 12:02:17 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml}
pass 4339483 2019-09-27 16:40:53 2019-09-27 16:42:18 2019-09-27 17:16:17 0:33:59 0:21:51 0:12:08 smithi master ubuntu 18.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} 2
fail 4339484 2019-09-27 16:40:54 2019-09-27 16:42:26 2019-09-27 20:16:28 3:34:02 3:15:52 0:18:10 smithi master centos 7.6 rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/erasure-code.yaml} 1
Failure Reason:

Command failed (workunit test erasure-code/test-erasure-eio.sh) on smithi173 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ae1eead5f65bfcfff5cc552db8aa5e86a6ca764e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-eio.sh'

pass 4339485 2019-09-27 16:40:55 2019-09-27 16:43:51 2019-09-27 17:19:50 0:35:59 0:24:41 0:11:18 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} 2
pass 4339486 2019-09-27 16:40:56 2019-09-27 16:43:59 2019-09-27 17:53:59 1:10:00 1:00:24 0:09:36 smithi master rhel 7.6 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} 2
fail 4339487 2019-09-27 16:40:57 2019-09-27 16:44:02 2019-09-27 17:38:02 0:54:00 0:40:06 0:13:54 smithi master centos 7.6 rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/mon.yaml} 1
Failure Reason:

Command failed (workunit test mon/mon-osdmap-prune.sh) on smithi117 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ae1eead5f65bfcfff5cc552db8aa5e86a6ca764e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-osdmap-prune.sh'

dead 4339488 2019-09-27 16:40:58 2019-09-27 16:46:03 2019-09-28 04:50:27 12:04:24 4:16:44 7:47:40 smithi master ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashosds-health.yaml}
Failure Reason:

psutil.NoSuchProcess process no longer exists (pid=10688)

pass 4339489 2019-09-27 16:40:59 2019-09-27 16:48:03 2019-09-27 18:38:03 1:50:00 1:03:40 0:46:20 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/mimic.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/radosbench.yaml} 4
fail 4339490 2019-09-27 16:41:00 2019-09-27 16:48:03 2019-09-27 22:38:08 5:50:05 5:42:38 0:07:27 smithi master rhel 7.6 rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/osd.yaml} 1
Failure Reason:

Command failed (workunit test osd/divergent-priors.sh) on smithi182 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ae1eead5f65bfcfff5cc552db8aa5e86a6ca764e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/divergent-priors.sh'

pass 4339491 2019-09-27 16:41:01 2019-09-27 16:48:03 2019-09-27 17:30:02 0:41:59 0:16:06 0:25:53 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} 4
pass 4339492 2019-09-27 16:41:02 2019-09-27 16:48:26 2019-09-27 17:58:25 1:09:59 0:58:18 0:11:41 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2
fail 4339493 2019-09-27 16:41:03 2019-09-27 16:48:26 2019-09-27 17:12:25 0:23:59 0:15:50 0:08:09 smithi master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_read_write.yaml} 1
Failure Reason:

Command failed on smithi002 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

fail 4339494 2019-09-27 16:41:04 2019-09-27 16:50:41 2019-09-27 17:16:40 0:25:59 0:20:25 0:05:34 smithi master rhel 7.6 rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/scrub.yaml} 1
Failure Reason:

Command failed (workunit test scrub/osd-recovery-scrub.sh) on smithi142 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ae1eead5f65bfcfff5cc552db8aa5e86a6ca764e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-recovery-scrub.sh'

pass 4339495 2019-09-27 16:41:05 2019-09-27 16:50:41 2019-09-27 17:38:40 0:47:59 0:27:55 0:20:04 smithi master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} 3
pass 4339496 2019-09-27 16:41:06 2019-09-27 16:50:49 2019-09-27 17:16:48 0:25:59 0:19:32 0:06:27 smithi master rhel 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} 2
dead 4339497 2019-09-27 16:41:07 2019-09-27 16:51:26 2019-09-28 04:53:48 12:02:22 11:31:41 0:30:41 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 4
Failure Reason:

psutil.NoSuchProcess process no longer exists (pid=16779)

fail 4339498 2019-09-27 16:41:08 2019-09-27 16:52:06 2019-09-27 17:18:05 0:25:59 0:16:31 0:09:28 smithi master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_write.yaml} 1
Failure Reason:

Command failed on smithi120 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

pass 4339499 2019-09-27 16:41:09 2019-09-27 16:54:00 2019-09-27 17:27:59 0:33:59 0:24:38 0:09:21 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=clay-k=4-m=2.yaml} 2
fail 4339500 2019-09-27 16:41:10 2019-09-27 16:54:03 2019-09-27 17:20:02 0:25:59 0:17:45 0:08:14 smithi master rhel 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_cephfs_get (tasks.mgr.dashboard.test_cephfs.CephfsTest)

dead 4339501 2019-09-27 16:41:11 2019-09-27 16:54:04 2019-09-28 04:56:29 12:02:25 smithi master centos 7.6 rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{centos_7.yaml}} 1
pass 4339502 2019-09-27 16:41:12 2019-09-27 16:54:06 2019-09-27 18:42:06 1:48:00 0:46:48 1:01:12 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} 4