Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 4220730 2019-08-16 05:53:09 2019-08-16 05:53:26 2019-08-16 09:29:29 3:36:03 3:25:47 0:10:16 smithi master ubuntu 18.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi063 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=502e87afa32147a48d24212593c6db046aabe557 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
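
Note: status 124 is the exit code GNU coreutils `timeout` returns when it kills a command that outlives its limit. The workunit above is wrapped in `timeout 3h`, so this run hung past three hours rather than failing a test assertion. A minimal sketch of telling the two outcomes apart; the re-run itself is hypothetical and the script path is copied from the command above:

    import subprocess

    # Re-run the same workunit under the same 3h cap (illustrative only).
    result = subprocess.run(
        ["timeout", "3h", "./qa/workunits/rados/test.sh"],
        cwd="/home/ubuntu/cephtest/clone.client.0",
    )
    if result.returncode == 124:
        print("workunit hit the 3h timeout")  # what happened in this job
    elif result.returncode != 0:
        print(f"test failure, status {result.returncode}")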

pass 4220731 2019-08-16 05:53:10 2019-08-16 05:53:27 2019-08-16 06:17:26 0:23:59 0:14:02 0:09:57 smithi master centos 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_7.yaml} tasks/orchestrator_cli.yaml} 2
pass 4220732 2019-08-16 05:53:11 2019-08-16 05:53:30 2019-08-16 06:19:29 0:25:59 0:13:45 0:12:14 smithi master centos 7.6 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
pass 4220733 2019-08-16 05:53:12 2019-08-16 05:55:26 2019-08-16 06:29:25 0:33:59 0:26:50 0:07:09 smithi master rhel 7.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} 2
fail 4220734 2019-08-16 05:53:13 2019-08-16 05:57:01 2019-08-16 06:25:00 0:27:59 0:15:25 0:12:34 smithi master rhel 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{rhel_7.yaml} tasks/progress.yaml} 2
Failure Reason:

Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress)

pass 4220735 2019-08-16 05:53:14 2019-08-16 05:57:03 2019-08-16 07:01:04 1:04:01 0:36:20 0:27:41 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 4
fail 4220736 2019-08-16 05:53:15 2019-08-16 05:57:17 2019-08-16 06:45:17 0:48:00 0:41:52 0:06:08 smithi master ubuntu 18.04 rados/singleton/{all/rebuild-mondb.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

reached maximum tries (200) after waiting for 1200 seconds
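
Note: this is teuthology's generic wait loop giving up while the rebuild-mondb singleton waited for the cluster to recover. The two totals imply 1200 s / 200 tries = 6 s per check, assuming a uniform polling interval (the log only reports the totals).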

pass 4220737 2019-08-16 05:53:16 2019-08-16 05:57:18 2019-08-16 06:23:17 0:25:59 0:18:36 0:07:23 smithi master rhel 7.6 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
pass 4220738 2019-08-16 05:53:17 2019-08-16 05:58:40 2019-08-16 06:42:39 0:43:59 0:37:06 0:06:53 smithi master rhel 7.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} 2
fail 4220739 2019-08-16 05:53:18 2019-08-16 05:59:06 2019-08-16 06:25:05 0:25:59 0:17:14 0:08:45 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op rmattr 25 --op delete 50 --pool unique_pool_0'
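
Note: teuthology says "Command crashed" (rather than "Command failed") when the remote process dies abnormally instead of exiting nonzero; that is a reading of the wording, not something this log states outright. The `--op NAME WEIGHT` pairs give `ceph_test_rados` the relative frequencies of each random operation. In Python's subprocess module, a death by signal shows up as a negative return code, as this minimal sketch demonstrates:

    import signal
    import subprocess

    # A process killed by a signal reports a negative return code,
    # e.g. -11 for SIGSEGV; an ordinary failure is a positive status.
    proc = subprocess.run(["sh", "-c", "kill -SEGV $$"])
    if proc.returncode < 0:
        print("crashed with", signal.Signals(-proc.returncode).name)  # SIGSEGV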

dead 4220740 2019-08-16 05:53:19 2019-08-16 05:59:13 2019-08-16 10:33:16 4:34:03 1:35:06 2:58:57 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 4
Failure Reason:

psutil.NoSuchProcess process no longer exists (pid=25652)
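
Note: this message is a stringified psutil.NoSuchProcess exception, presumably raised when teuthology's supervisor went to poll a child process (pid 25652) that had already disappeared, which is why the job is marked dead rather than failed. A minimal reproduction of the message itself (the pid is arbitrary, any unused pid behaves the same):

    import psutil

    try:
        # Looking up a pid with no live process raises NoSuchProcess.
        psutil.Process(25652)
    except psutil.NoSuchProcess as exc:
        print(exc)  # "process no longer exists (pid=25652)"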

pass 4220741 2019-08-16 05:53:20 2019-08-16 05:59:15 2019-08-16 06:33:14 0:33:59 0:25:57 0:08:02 smithi master ubuntu 18.04 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
fail 4220742 2019-08-16 05:53:21 2019-08-16 05:59:25 2019-08-16 06:25:24 0:25:59 0:19:42 0:06:17 smithi master centos 7.6 rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_read_write.yaml} 1
Failure Reason:

Command failed on smithi013 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
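
Note: this is teuthology's standard cleanup step, and it exits 1 because `rmdir` (unlike `rm -rf`) refuses to remove a non-empty directory; the preceding `find ... -ls` is there to log whatever the cosbench workload left behind. Job 4220744 below fails the same way. A minimal sketch of the failure mode (the temporary directory and file names are illustrative):

    import os
    import subprocess
    import tempfile

    # rmdir on a directory that still has contents fails with status 1.
    d = tempfile.mkdtemp()
    open(os.path.join(d, "leftover"), "w").close()
    r = subprocess.run(["rmdir", d])
    print(r.returncode)  # 1 ("Directory not empty")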

fail 4220743 2019-08-16 05:53:22 2019-08-16 05:59:28 2019-08-16 06:15:27 0:15:59 0:07:21 0:08:38 smithi master ubuntu 18.04 rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi052 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=502e87afa32147a48d24212593c6db046aabe557 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

fail 4220744 2019-08-16 05:53:23 2019-08-16 05:59:30 2019-08-16 07:45:30 1:46:00 1:37:28 0:08:32 smithi master centos 7.6 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_write.yaml} 1
Failure Reason:

Command failed on smithi043 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

fail 4220745 2019-08-16 05:53:24 2019-08-16 05:59:34 2019-08-16 07:09:34 1:10:00 0:57:52 0:12:08 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

ceph-objectstore-tool: exp list-pgs failure with status 1

pass 4220746 2019-08-16 05:53:25 2019-08-16 06:00:12 2019-08-16 07:38:12 1:38:00 1:16:36 0:21:24 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} 4
fail 4220747 2019-08-16 05:53:26 2019-08-16 06:00:38 2019-08-16 06:30:37 0:29:59 0:22:47 0:07:12 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: setUpClass (tasks.mgr.dashboard.test_summary.SummaryTest)

pass 4220748 2019-08-16 05:53:27 2019-08-16 06:00:42 2019-08-16 06:44:42 0:44:00 0:29:06 0:14:54 smithi master rhel 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
fail 4220749 2019-08-16 05:53:28 2019-08-16 06:01:48 2019-08-16 07:07:49 1:06:01 0:58:04 0:07:57 smithi master rhel 7.6 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} 2
Failure Reason:

not all PGs are active or peered 15 seconds after marking out OSDs

pass 4220750 2019-08-16 05:53:29 2019-08-16 06:01:48 2019-08-16 06:31:47 0:29:59 0:18:22 0:11:37 smithi master rhel 7.6 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
pass 4220751 2019-08-16 05:53:29 2019-08-16 06:03:04 2019-08-16 06:39:04 0:36:00 0:27:11 0:08:49 smithi master rhel 7.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} 2
pass 4220752 2019-08-16 05:53:30 2019-08-16 06:03:22 2019-08-16 06:39:21 0:35:59 0:29:19 0:06:40 smithi master centos 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
fail 4220753 2019-08-16 05:53:31 2019-08-16 06:03:32 2019-08-16 06:25:31 0:21:59 0:13:35 0:08:24 smithi master centos 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_7.yaml} tasks/orchestrator_cli.yaml} 2
Failure Reason:

"2019-08-16T06:20:17.286150+0000 mon.b (mon.0) 146 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log

pass 4220754 2019-08-16 05:53:32 2019-08-16 06:05:21 2019-08-16 06:33:20 0:27:59 0:18:29 0:09:30 smithi master rhel 7.6 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
fail 4220755 2019-08-16 05:53:33 2019-08-16 06:05:21 2019-08-16 07:09:21 1:04:00 0:56:24 0:07:36 smithi master centos 7.6 rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/scrub.yaml} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-snaps.sh) on smithi102 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=502e87afa32147a48d24212593c6db046aabe557 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-snaps.sh'

dead 4220756 2019-08-16 05:53:34 2019-08-16 06:07:28 2019-08-16 10:33:32 4:26:04 smithi master rhel 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_7.yaml} tasks/progress.yaml} 2