Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 3506868 2019-01-25 23:47:59 2019-01-25 23:49:45 2019-01-26 00:19:45 0:30:00 0:18:49 0:11:11 smithi master rhel 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 2
pass 3506869 2019-01-25 23:48:00 2019-01-25 23:49:47 2019-01-26 00:05:46 0:15:59 0:06:29 0:09:30 smithi master ubuntu 18.04 rados/objectstore/{backends/fusestore.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
fail 3506870 2019-01-25 23:48:01 2019-01-25 23:51:54 2019-01-26 01:01:54 1:10:00 1:01:30 0:08:30 smithi master rhel 7.5 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} 2
Failure Reason:

Command failed on smithi072 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506871 2019-01-25 23:48:02 2019-01-25 23:51:54 2019-01-26 00:23:54 0:32:00 0:18:04 0:13:56 smithi master rhel 7.5 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
Failure Reason:

Command failed on smithi018 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3506872 2019-01-25 23:48:03 2019-01-25 23:51:54 2019-01-26 00:19:54 0:28:00 0:17:49 0:10:11 smithi master rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} 2
fail 3506873 2019-01-25 23:48:03 2019-01-25 23:53:46 2019-01-26 00:29:45 0:35:59 smithi master centos 7.5 rados/multimon/{clusters/21.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/mon_recovery.yaml} 2
Failure Reason:

machine smithi057.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3506874 2019-01-25 23:48:04 2019-01-25 23:53:46 2019-01-26 00:29:45 0:35:59 0:22:39 0:13:20 smithi master ubuntu 16.04 rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/mon.yaml} 1
Failure Reason:

Command failed (workunit test mon/mon-bind.sh) on smithi063 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=766a80f7f265c10db5be1a845019d45da54d1eff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-bind.sh'

fail 3506875 2019-01-25 23:48:05 2019-01-25 23:53:46 2019-01-26 00:19:45 0:25:59 0:11:27 0:14:32 smithi master ubuntu 18.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rgw_snaps.yaml} 2
Failure Reason:

Command failed on smithi176 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506876 2019-01-25 23:48:06 2019-01-25 23:55:46 2019-01-26 01:01:46 1:06:00 1:00:29 0:05:31 smithi master rhel 7.5 rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed on smithi190 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506877 2019-01-25 23:48:07 2019-01-25 23:55:46 2019-01-26 01:07:46 1:12:00 0:58:58 0:13:02 smithi master centos 7.5 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_latest.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Command failed on smithi025 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506878 2019-01-25 23:48:07 2019-01-25 23:55:46 2019-01-26 01:25:46 1:30:00 0:58:37 0:31:23 smithi master centos 7.5 rados/monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/many.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

Command failed on smithi173 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506879 2019-01-25 23:48:08 2019-01-25 23:55:46 2019-01-26 01:17:46 1:22:00 1:02:04 0:19:56 smithi master rhel 7.5 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
Failure Reason:

Command failed on smithi002 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506880 2019-01-25 23:48:09 2019-01-25 23:55:47 2019-01-26 00:23:46 0:27:59 0:11:25 0:16:34 smithi master ubuntu 18.04 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:

Command failed on smithi071 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506881 2019-01-25 23:48:10 2019-01-25 23:55:47 2019-01-26 01:29:48 1:34:01 0:59:27 0:34:34 smithi master centos 7.5 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
Failure Reason:

Command failed on smithi188 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3506882 2019-01-25 23:48:11 2019-01-25 23:55:48 2019-01-26 01:21:49 1:26:01 0:32:17 0:53:44 smithi master centos 7.5 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml msgr/async.yaml rados.yaml rocksdb.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 4
fail 3506883 2019-01-25 23:48:12 2019-01-25 23:57:36 2019-01-26 01:05:36 1:08:00 0:58:06 0:09:54 smithi master centos 7.5 rados/singleton/{all/radostool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 1
Failure Reason:

Command failed on smithi109 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506884 2019-01-25 23:48:12 2019-01-25 23:57:38 2019-01-26 00:59:38 1:02:00 0:50:16 0:11:44 smithi master ubuntu 16.04 rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/fio_4K_rand_read.yaml} 1
Failure Reason:

Command failed on smithi084 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506885 2019-01-25 23:48:13 2019-01-25 23:57:41 2019-01-26 00:15:40 0:17:59 0:10:05 0:07:54 smithi master ubuntu 16.04 rados/singleton-nomsgr/{all/librados_hello_world.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
Failure Reason:

Command failed on smithi006 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

dead 3506886 2019-01-25 23:48:14 2019-01-25 23:57:41 2019-01-26 00:25:40 0:27:59 smithi master centos 7.5 rados/singleton/{all/random-eio.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

fail 3506887 2019-01-25 23:48:15 2019-01-25 23:59:43 2019-01-26 00:21:43 0:22:00 0:11:26 0:10:34 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
Failure Reason:

Command failed on smithi129 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

dead 3506888 2019-01-25 23:48:16 2019-01-25 23:59:44 2019-01-26 00:25:43 0:25:59 smithi master rhel 7.5 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

pass 3506889 2019-01-25 23:48:16 2019-01-25 23:59:44 2019-01-26 00:19:43 0:19:59 0:09:37 0:10:22 smithi master ubuntu 16.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/scrub_test.yaml} 2
dead 3506890 2019-01-25 23:48:17 2019-01-25 23:59:44 2019-01-26 00:29:43 0:29:59 smithi master rhel 7.5 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/fio_4K_rand_rw.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

pass 3506891 2019-01-25 23:48:18 2019-01-25 23:59:47 2019-01-26 00:25:46 0:25:59 0:12:09 0:13:50 smithi master ubuntu 16.04 rados/singleton/{all/rebuild-mondb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
fail 3506892 2019-01-25 23:48:19 2019-01-25 23:59:48 2019-01-26 01:09:48 1:10:00 0:55:35 0:14:25 smithi master ubuntu 16.04 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/failover.yaml} 2
Failure Reason:

Command failed on smithi184 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3506893 2019-01-25 23:48:20 2019-01-26 00:01:49 2019-01-26 00:25:48 0:23:59 0:08:32 0:15:27 smithi master centos 7.5 rados/objectstore/{backends/keyvaluedb.yaml supported-random-distro$/{centos_latest.yaml}} 1
fail 3506894 2019-01-25 23:48:20 2019-01-26 00:01:49 2019-01-26 00:29:49 0:28:00 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} 1
Failure Reason:

machine smithi164.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

pass 3506895 2019-01-25 23:48:21 2019-01-26 00:03:56 2019-01-26 00:27:55 0:23:59 0:11:35 0:12:24 smithi master ubuntu 16.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-stupid.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
pass 3506896 2019-01-25 23:48:22 2019-01-26 00:03:56 2019-01-26 00:39:56 0:36:00 0:17:59 0:18:01 smithi master centos 7.5 rados/singleton/{all/recovery-preemption.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 1
pass 3506897 2019-01-25 23:48:23 2019-01-26 00:03:56 2019-01-26 00:33:56 0:30:00 0:22:15 0:07:45 smithi master rhel 7.5 rados/singleton-nomsgr/{all/msgr.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
fail 3506898 2019-01-25 23:48:24 2019-01-26 00:03:56 2019-01-26 00:29:55 0:25:59 smithi master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4M_rand_read.yaml}
Failure Reason:

machine smithi057.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3506899 2019-01-25 23:48:24 2019-01-26 00:03:56 2019-01-26 00:27:55 0:23:59 0:13:09 0:10:50 smithi master ubuntu 16.04 rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/osd.yaml} 1
Failure Reason:

Command failed (workunit test osd/ec-error-rollforward.sh) on smithi152 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=766a80f7f265c10db5be1a845019d45da54d1eff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/ec-error-rollforward.sh'

fail 3506900 2019-01-25 23:48:25 2019-01-26 00:05:49 2019-01-26 01:17:50 1:12:01 1:00:56 0:11:05 smithi master ubuntu 18.04 rados/singleton/{all/resolve_stuck_peering.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 2
Failure Reason:

Command failed on smithi042 with status 1: 'sudo ceph --cluster ceph osd create c4ff2dcf-23e4-47e8-b2de-9539ccab10a3'

fail 3506901 2019-01-25 23:48:26 2019-01-26 00:05:49 2019-01-26 00:29:49 0:24:00 0:11:05 0:12:55 smithi master ubuntu 16.04 rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/one.yaml workloads/pool-create-delete.yaml} 2
Failure Reason:

Command failed on smithi204 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506902 2019-01-25 23:48:27 2019-01-26 00:05:49 2019-01-26 00:29:49 0:24:00 smithi master rhel 7.5 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} 2
Failure Reason:

machine smithi057.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3506903 2019-01-25 23:48:28 2019-01-26 00:07:43 2019-01-26 01:15:43 1:08:00 1:01:16 0:06:44 smithi master rhel 7.5 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:

Command failed on smithi112 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506904 2019-01-25 23:48:29 2019-01-26 00:07:45 2019-01-26 00:29:46 0:22:01 0:11:07 0:10:54 smithi master ubuntu 16.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} 2
Failure Reason:

Command failed on smithi165 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506905 2019-01-25 23:48:29 2019-01-26 00:07:47 2019-01-26 01:17:47 1:10:00 0:56:51 0:13:09 smithi master ubuntu 18.04 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
Failure Reason:

Command failed on smithi073 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3506906 2019-01-25 23:48:30 2019-01-26 00:07:48 2019-01-26 00:59:48 0:52:00 0:31:34 0:20:26 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} 2
fail 3506907 2019-01-25 23:48:31 2019-01-26 00:07:49 2019-01-26 00:29:48 0:21:59 smithi master centos 7.5 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml msgr/random.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} 3
Failure Reason:

machine smithi057.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

pass 3506908 2019-01-25 23:48:32 2019-01-26 00:07:50 2019-01-26 00:41:50 0:34:00 0:21:04 0:12:56 smithi master ubuntu 18.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_api_tests.yaml} 2
fail 3506909 2019-01-25 23:48:33 2019-01-26 00:08:03 2019-01-26 00:30:02 0:21:59 smithi master centos 7.5 rados/multimon/{clusters/21.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/mon_recovery.yaml} 2
Failure Reason:

machine smithi057.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3506910 2019-01-25 23:48:34 2019-01-26 00:09:52 2019-01-26 00:29:51 0:19:59 smithi master rhel 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml}
Failure Reason:

machine smithi164.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3506911 2019-01-25 23:48:34 2019-01-26 00:09:52 2019-01-26 01:15:52 1:06:00 0:55:03 0:10:57 smithi master ubuntu 16.04 rados/singleton/{all/test-crash.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
Failure Reason:

Command failed on smithi169 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506912 2019-01-25 23:48:35 2019-01-26 00:09:52 2019-01-26 00:29:52 0:20:00 smithi master rhel 7.5 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_latest.yaml} tasks/insights.yaml} 1
Failure Reason:

machine smithi057.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3506913 2019-01-25 23:48:36 2019-01-26 00:09:52 2019-01-26 01:17:53 1:08:01 0:55:33 0:12:28 smithi master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4M_rand_rw.yaml} 1
Failure Reason:

Command failed on smithi103 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506914 2019-01-25 23:48:37 2019-01-26 00:09:53 2019-01-26 00:29:52 0:19:59 smithi master ubuntu 16.04 rados/singleton-nomsgr/{all/multi-backfill-reject.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
Failure Reason:

machine smithi164.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3506915 2019-01-25 23:48:38 2019-01-26 00:09:52 2019-01-26 01:15:53 1:06:01 0:55:32 0:10:29 smithi master ubuntu 18.04 rados/objectstore/{backends/objectcacher-stress.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed on smithi179 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3506916 2019-01-25 23:48:38 2019-01-26 00:11:41 2019-01-26 00:35:40 0:23:59 0:13:46 0:10:13 smithi master ubuntu 18.04 rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 3506917 2019-01-25 23:48:39 2019-01-26 00:11:45 2019-01-26 00:45:45 0:34:00 0:23:00 0:11:00 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} 2
fail 3506918 2019-01-25 23:48:40 2019-01-26 00:13:43 2019-01-26 01:19:43 1:06:00 0:55:46 0:10:14 smithi master ubuntu 16.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=clay-k=4-m=2.yaml} 2
Failure Reason:

Command failed on smithi083 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506919 2019-01-25 23:48:41 2019-01-26 00:13:45 2019-01-26 01:19:45 1:06:00 0:55:06 0:10:54 smithi master ubuntu 16.04 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
Failure Reason:

Command failed on smithi067 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3506920 2019-01-25 23:48:42 2019-01-26 00:13:46 2019-01-26 00:45:46 0:32:00 0:13:35 0:18:25 smithi master centos 7.5 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/rados_cls_all.yaml} 2
dead 3506921 2019-01-25 23:48:42 2019-01-26 00:13:48 2019-01-26 12:16:15 12:02:27 smithi master centos 7.5 rados/singleton/{all/thrash-backfill-full.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 2
fail 3506922 2019-01-25 23:48:43 2019-01-26 00:15:52 2019-01-26 00:29:51 0:13:59 smithi master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4M_rand_write.yaml}
Failure Reason:

machine smithi164.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3506923 2019-01-25 23:48:44 2019-01-26 00:15:52 2019-01-26 01:25:52 1:10:00 0:56:57 0:13:03 smithi master ubuntu 16.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
Failure Reason:

Command failed on smithi162 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506924 2019-01-25 23:48:45 2019-01-26 00:15:52 2019-01-26 00:29:51 0:13:59 smithi master ubuntu 16.04 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/module_selftest.yaml} 1
Failure Reason:

machine smithi057.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3506925 2019-01-25 23:48:46 2019-01-26 00:15:52 2019-01-26 00:29:51 0:13:59 smithi master ubuntu 18.04 rados/singleton/{all/thrash-eio.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

machine smithi164.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3506926 2019-01-25 23:48:46 2019-01-26 00:17:48 2019-01-26 01:21:48 1:04:00 0:55:27 0:08:33 smithi master ubuntu 18.04 rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed on smithi088 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3506927 2019-01-25 23:48:47 2019-01-26 00:17:48 2019-01-26 00:55:48 0:38:00 0:24:28 0:13:32 smithi master ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml msgr/simple.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 4
fail 3506928 2019-01-25 23:48:48 2019-01-26 00:19:48 2019-01-26 00:43:48 0:24:00 0:17:23 0:06:37 smithi master rhel 7.5 rados/standalone/{supported-random-distro$/{rhel_latest.yaml} workloads/scrub.yaml} 1
Failure Reason:

Command failed (workunit test scrub/osd-recovery-scrub.sh) on smithi093 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=766a80f7f265c10db5be1a845019d45da54d1eff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-recovery-scrub.sh'

pass 3506929 2019-01-25 23:48:49 2019-01-26 00:19:48 2019-01-26 00:39:48 0:20:00 0:08:11 0:11:49 smithi master ubuntu 16.04 rados/monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/sync-many.yaml workloads/rados_5925.yaml} 2
fail 3506930 2019-01-25 23:48:50 2019-01-26 00:19:49 2019-01-26 02:23:49 2:04:00 1:51:42 0:12:18 smithi master ubuntu 16.04 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
Failure Reason:

"2019-01-26 01:26:44.514134 mon.c (mon.0) 22 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log

fail 3506931 2019-01-25 23:48:50 2019-01-26 00:19:49 2019-01-26 01:39:49 1:20:00 0:59:11 0:20:49 smithi master centos 7.5 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:

Command failed on smithi018 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506932 2019-01-25 23:48:51 2019-01-26 00:19:49 2019-01-26 01:39:49 1:20:00 0:59:03 0:20:57 smithi master centos 7.5 rados/singleton/{all/thrash-rados/{thrash-rados.yaml thrashosds-health.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 2
Failure Reason:

Command failed on smithi033 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506933 2019-01-25 23:48:52 2019-01-26 00:19:49 2019-01-26 00:29:48 0:09:59 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache.yaml}
Failure Reason:

machine smithi164.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3506934 2019-01-25 23:48:53 2019-01-26 00:19:55 2019-01-26 01:29:55 1:10:00 0:58:30 0:11:30 smithi master centos 7.5 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_latest.yaml} workloads/radosbench_4K_rand_read.yaml} 1
Failure Reason:

Command failed on smithi013 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506935 2019-01-25 23:48:53 2019-01-26 00:21:40 2019-01-26 01:29:40 1:08:00 0:56:24 0:11:36 smithi master ubuntu 18.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_python.yaml} 2
Failure Reason:

Command failed on smithi100 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3506936 2019-01-25 23:48:54 2019-01-26 00:21:40 2019-01-26 05:11:44 4:50:04 4:29:01 0:21:03 smithi master centos 7.5 rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{centos_latest.yaml}} 1
fail 3506937 2019-01-25 23:48:55 2019-01-26 00:21:44 2019-01-26 01:35:44 1:14:00 1:01:36 0:12:24 smithi master ubuntu 18.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} 2
Failure Reason:

Command failed on smithi194 with status 1: 'sudo ceph --cluster ceph osd create 134731ca-1414-4897-8ead-94ebd34c462b'

fail 3506938 2019-01-25 23:48:56 2019-01-26 00:23:56 2019-01-26 01:37:56 1:14:00 1:02:22 0:11:38 smithi master rhel 7.5 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
Failure Reason:

Command failed on smithi077 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506939 2019-01-25 23:48:57 2019-01-26 00:23:56 2019-01-26 01:31:56 1:08:00 0:55:55 0:12:05 smithi master rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} 2
Failure Reason:

Command failed on smithi110 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506940 2019-01-25 23:48:57 2019-01-26 00:23:56 2019-01-26 00:49:55 0:25:59 0:10:54 0:15:05 smithi master ubuntu 16.04 rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 2
Failure Reason:

Command failed on smithi172 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3506941 2019-01-25 23:48:58 2019-01-26 00:23:56 2019-01-26 01:05:56 0:42:00 0:23:32 0:18:28 smithi master centos 7.5 rados/multimon/{clusters/21.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/mon_recovery.yaml} 3
fail 3506942 2019-01-25 23:48:59 2019-01-26 00:23:56 2019-01-26 01:33:56 1:10:00 0:55:48 0:14:12 smithi master ubuntu 16.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} 2
Failure Reason:

Command failed on smithi092 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506943 2019-01-25 23:49:00 2019-01-26 00:23:56 2019-01-26 01:35:56 1:12:00 1:00:43 0:11:17 smithi master rhel 7.5 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{rhel_latest.yaml} tasks/orchestrator_cli.yaml} 2
Failure Reason:

Command failed on smithi118 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3506944 2019-01-25 23:49:00 2019-01-26 00:23:56 2019-01-26 01:07:56 0:44:00 0:28:42 0:15:18 smithi master centos 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{centos_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
pass 3506945 2019-01-25 23:49:01 2019-01-26 00:25:52 2019-01-26 00:45:52 0:20:00 0:11:28 0:08:32 smithi master rhel 7.5 rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
fail 3506946 2019-01-25 23:49:02 2019-01-26 00:25:52 2019-01-26 00:49:52 0:24:00 0:15:20 0:08:40 smithi master rhel 7.5 rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed on smithi097 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506947 2019-01-25 23:49:03 2019-01-26 00:25:52 2019-01-26 01:31:53 1:06:01 0:53:08 0:12:53 smithi master centos 7.5 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_latest.yaml} workloads/radosbench_4K_seq_read.yaml} 1
Failure Reason:

Command failed on smithi063 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506948 2019-01-25 23:49:03 2019-01-26 00:25:52 2019-01-26 00:39:52 0:14:00 0:06:59 0:07:01 smithi master rhel 7.5 rados/singleton/{all/admin-socket.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

{'smithi141.front.sepia.ceph.com': {'msg': "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'interface'

The error appears to have been in '/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/roles/testnode/tasks/resolvconf.yml': line 9, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: Set interface
  ^ here

exception type: <class 'ansible.errors.AnsibleUndefinedVariable'>
exception: 'dict object' has no attribute 'interface'"}}

pass 3506949 2019-01-25 23:49:04 2019-01-26 00:28:07 2019-01-26 01:06:07 0:38:00 0:31:15 0:06:45 smithi master rhel 7.5 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
fail 3506950 2019-01-25 23:49:05 2019-01-26 00:28:07 2019-01-26 01:38:08 1:10:01 0:59:04 0:10:57 smithi master centos 7.5 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/rados_stress_watch.yaml} 2
Failure Reason:

Command failed on smithi165 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3506951 2019-01-25 23:49:06 2019-01-26 00:29:48 2019-01-26 01:03:47 0:33:59 0:22:59 0:11:00 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
pass 3506952 2019-01-25 23:49:06 2019-01-26 00:29:48 2019-01-26 00:49:47 0:19:59 0:10:29 0:09:30 smithi master ubuntu 18.04 rados/singleton/{all/divergent_priors.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
fail 3506953 2019-01-25 23:49:07 2019-01-26 00:29:48 2019-01-26 01:37:48 1:08:00 0:58:06 0:09:54 smithi master centos 7.5 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_latest.yaml} workloads/radosbench_4M_rand_read.yaml} 1
Failure Reason:

Command failed on smithi095 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3506954 2019-01-25 23:49:08 2019-01-26 00:29:48 2019-01-26 01:43:48 1:14:00 1:01:32 0:12:28 smithi master ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} 4
fail 3506955 2019-01-25 23:49:09 2019-01-26 00:29:48 2019-01-26 01:45:48 1:16:00 1:03:55 0:12:05 smithi master centos 7.5 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_latest.yaml} tasks/progress.yaml} 2
Failure Reason:

Command failed on smithi085 with status 1: 'sudo ceph --cluster ceph osd create ad895f9b-0c26-4e5f-9a21-a44670ad116c'

fail 3506956 2019-01-25 23:49:10 2019-01-26 00:29:48 2019-01-26 01:35:48 1:06:00 1:00:28 0:05:32 smithi master rhel 7.5 rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed on smithi160 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506957 2019-01-25 23:49:10 2019-01-26 00:29:48 2019-01-26 01:39:49 1:10:01 0:58:03 0:11:58 smithi master centos 7.5 rados/rest/{mgr-restful.yaml supported-random-distro$/{centos_latest.yaml}} 1
Failure Reason:

Command failed on smithi154 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506958 2019-01-25 23:49:11 2019-01-26 00:29:49 2019-01-26 01:47:49 1:18:00 1:03:53 0:14:07 smithi master centos rados/singleton-flat/valgrind-leaks.yaml 1
Failure Reason:

Command failed on smithi082 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506959 2019-01-25 23:49:12 2019-01-26 00:29:49 2019-01-26 01:45:49 1:16:00 0:58:06 0:17:54 smithi master centos 7.5 rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 1
Failure Reason:

Command failed on smithi174 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506960 2019-01-25 23:49:13 2019-01-26 00:29:49 2019-01-26 00:59:49 0:30:00 0:12:56 0:17:04 smithi master ubuntu 18.04 rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/crush.yaml} 1
Failure Reason:

Command failed (workunit test crush/crush-choose-args.sh) on smithi148 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=766a80f7f265c10db5be1a845019d45da54d1eff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/crush/crush-choose-args.sh'

dead 3506961 2019-01-25 23:49:14 2019-01-26 00:29:50 2019-01-26 01:17:49 0:47:59 smithi master rhel 7.5 rados/upgrade/luminous-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} supported-random-distro$/{rhel_latest.yaml} thrashosds-health.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

pass 3506962 2019-01-25 23:49:14 2019-01-26 00:29:50 2019-01-26 01:15:50 0:46:00 0:20:59 0:25:01 smithi master ubuntu 16.04 rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/sync.yaml workloads/rados_api_tests.yaml} 2
dead 3506963 2019-01-25 23:49:15 2019-01-26 00:29:50 2019-01-26 01:13:50 0:44:00 smithi master centos 7.5 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

fail 3506964 2019-01-25 23:49:16 2019-01-26 00:29:50 2019-01-26 01:51:50 1:22:00 0:55:59 0:26:01 smithi master ubuntu 18.04 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:

Command failed on smithi102 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506965 2019-01-25 23:49:17 2019-01-26 00:29:50 2019-01-26 01:43:51 1:14:01 0:55:01 0:19:00 smithi master ubuntu 16.04 rados/singleton/{all/divergent_priors2.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
Failure Reason:

Command failed on smithi168 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3506966 2019-01-25 23:49:18 2019-01-26 00:29:52 2019-01-26 01:57:53 1:28:01 1:07:20 0:20:41 smithi master ubuntu 16.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2
pass 3506967 2019-01-25 23:49:18 2019-01-26 00:29:52 2019-01-26 00:55:52 0:26:00 0:07:13 0:18:47 smithi master ubuntu 16.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/rados_striper.yaml} 2
fail 3506968 2019-01-25 23:49:19 2019-01-26 00:29:53 2019-01-26 01:43:53 1:14:00 0:55:32 0:18:28 smithi master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4M_seq_read.yaml} 1
Failure Reason:

Command failed on smithi079 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506969 2019-01-25 23:49:20 2019-01-26 00:29:52 2019-01-26 01:45:53 1:16:01 1:00:07 0:15:54 smithi master rhel 7.5 rados/singleton/{all/dump-stuck.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed on smithi086 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506970 2019-01-25 23:49:21 2019-01-26 00:29:53 2019-01-26 01:59:53 1:30:00 0:59:05 0:30:55 smithi master centos 7.5 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} 2
Failure Reason:

Command failed on smithi008 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506971 2019-01-25 23:49:21 2019-01-26 00:29:53 2019-01-26 01:07:53 0:38:00 0:10:46 0:27:14 smithi master ubuntu 16.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} 2
Failure Reason:

Command failed on smithi202 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506972 2019-01-25 23:49:22 2019-01-26 00:29:56 2019-01-26 01:39:57 1:10:01 0:56:21 0:13:40 smithi master ubuntu 18.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} 2
Failure Reason:

Command failed on smithi041 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

dead 3506973 2019-01-25 23:49:23 2019-01-26 00:30:03 2019-01-26 01:18:03 0:48:00 smithi master rhel 7.5 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

fail 3506974 2019-01-25 23:49:24 2019-01-26 00:31:39 2019-01-26 02:57:41 2:26:02 2:02:55 0:23:07 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

"2019-01-26 01:12:22.869844 mon.a (mon.0) 16 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log

pass 3506975 2019-01-25 23:49:24 2019-01-26 00:33:51 2019-01-26 01:23:51 0:50:00 0:22:40 0:27:20 smithi master rhel 7.5 rados/multimon/{clusters/21.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/mon_recovery.yaml} 3
dead 3506976 2019-01-25 23:49:25 2019-01-26 00:33:54 2019-01-26 01:15:54 0:42:00 smithi master ubuntu 16.04 rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

fail 3506977 2019-01-25 23:49:26 2019-01-26 00:33:57 2019-01-26 01:41:57 1:08:00 1:00:41 0:07:19 smithi master rhel 7.5 rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed on smithi066 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506978 2019-01-25 23:49:27 2019-01-26 00:35:55 2019-01-26 02:03:55 1:28:00 0:59:56 0:28:04 smithi master centos 7.5 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_latest.yaml} tasks/prometheus.yaml} 2
Failure Reason:

Command failed on smithi014 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506979 2019-01-25 23:49:28 2019-01-26 00:38:00 2019-01-26 01:48:00 1:10:00 0:55:03 0:14:57 smithi master ubuntu 16.04 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/radosbench_4M_write.yaml} 1
Failure Reason:

Command failed on smithi093 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3506980 2019-01-25 23:49:28 2019-01-26 00:39:44 2019-01-26 00:55:44 0:16:00 0:06:09 0:09:51 smithi master ubuntu 18.04 rados/singleton/{all/erasure-code-nonregression.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
dead 3506981 2019-01-25 23:49:29 2019-01-26 00:39:49 2019-01-26 01:11:48 0:31:59 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

pass 3506982 2019-01-25 23:49:30 2019-01-26 00:39:54 2019-01-26 01:21:54 0:42:00 0:30:29 0:11:31 smithi master rhel 7.5 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
pass 3506983 2019-01-25 23:49:31 2019-01-26 00:39:57 2019-01-26 01:19:57 0:40:00 0:28:43 0:11:17 smithi master ubuntu 18.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_workunit_loadgen_big.yaml} 2
dead 3506984 2019-01-25 23:49:31 2019-01-26 00:41:51 2019-01-26 01:15:51 0:34:00 smithi master ubuntu 16.04 rados/objectstore/{backends/ceph_objectstore_tool.yaml supported-random-distro$/{ubuntu_16.04.yaml}}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

dead 3506985 2019-01-25 23:49:32 2019-01-26 00:41:52 2019-01-26 01:21:51 0:39:59 smithi master centos 7.5 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml msgr/async.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/rbd_cls.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

dead 3506986 2019-01-25 23:49:33 2019-01-26 00:41:56 2019-01-26 01:19:55 0:37:59 smithi master ubuntu 18.04 rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

pass 3506987 2019-01-25 23:49:34 2019-01-26 00:44:00 2019-01-26 01:34:00 0:50:00 0:42:46 0:07:14 smithi master rhel 7.5 rados/monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/force-sync-many.yaml workloads/rados_mon_osdmap_prune.yaml} 2
fail 3506988 2019-01-25 23:49:35 2019-01-26 00:44:00 2019-01-26 01:56:00 1:12:00 0:56:51 0:15:09 smithi master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
Failure Reason:

Command failed on smithi156 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

dead 3506989 2019-01-25 23:49:35 2019-01-26 00:45:56 2019-01-26 01:17:56 0:32:00 smithi master ubuntu 18.04 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

fail 3506990 2019-01-25 23:49:36 2019-01-26 00:45:56 2019-01-26 01:51:57 1:06:01 0:53:05 0:12:56 smithi master centos 7.5 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_latest.yaml} workloads/sample_fio.yaml} 1
Failure Reason:

Command failed on smithi097 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

dead 3506991 2019-01-25 23:49:37 2019-01-26 00:45:57 2019-01-26 01:13:56 0:27:59 smithi master ubuntu 16.04 rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/erasure-code.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

pass 3506992 2019-01-25 23:49:38 2019-01-26 00:45:57 2019-01-26 01:17:56 0:31:59 0:11:35 0:20:24 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} 2
dead 3506993 2019-01-25 23:49:38 2019-01-26 00:47:59 2019-01-26 01:11:58 0:23:59 smithi master ubuntu 16.04 rados/singleton/{all/lost-unfound.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

dead 3506994 2019-01-25 23:49:39 2019-01-26 00:49:51 2019-01-26 01:15:51 0:26:00 smithi master rhel 7.5 rados/singleton-nomsgr/{all/ceph-kvstore-tool.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

dead 3506995 2019-01-25 23:49:40 2019-01-26 00:49:52 2019-01-26 01:15:51 0:25:59 smithi master centos 7.5 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

fail 3506996 2019-01-25 23:49:41 2019-01-26 00:49:53 2019-01-26 01:57:53 1:08:00 0:55:57 0:12:03 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/workunits.yaml} 2
Failure Reason:

Command failed on smithi064 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3506997 2019-01-25 23:49:41 2019-01-26 00:49:57 2019-01-26 01:57:57 1:08:00 1:00:22 0:07:38 smithi master rhel 7.5 rados/singleton/{all/max-pg-per-osd.from-mon.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed on smithi071 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

dead 3506998 2019-01-25 23:49:42 2019-01-26 00:51:37 2019-01-26 01:17:37 0:26:00 smithi master rhel 7.5 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/rados_workunit_loadgen_mix.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

pass 3506999 2019-01-25 23:49:43 2019-01-26 00:54:06 2019-01-26 01:20:06 0:26:00 0:13:53 0:12:07 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} 2
dead 3507000 2019-01-25 23:49:44 2019-01-26 00:55:55 2019-01-26 01:17:54 0:21:59 smithi master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/sample_radosbench.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

dead 3507001 2019-01-25 23:49:45 2019-01-26 00:55:55 2019-01-26 01:23:54 0:27:59 smithi master ubuntu 16.04 rados/singleton/{all/max-pg-per-osd.from-primary.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

pass 3507002 2019-01-25 23:49:45 2019-01-26 00:55:55 2019-01-26 01:35:55 0:40:00 0:23:20 0:16:40 smithi master centos 7.5 rados/multimon/{clusters/21.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/mon_recovery.yaml} 3
dead 3507003 2019-01-25 23:49:46 2019-01-26 00:55:56 2019-01-26 01:19:56 0:24:00 smithi master ubuntu 16.04 rados/objectstore/{backends/filejournal.yaml supported-random-distro$/{ubuntu_16.04.yaml}}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

dead 3507004 2019-01-25 23:49:47 2019-01-26 00:59:50 2019-01-26 01:19:50 0:20:00 smithi master centos 7.5 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

dead 3507005 2019-01-25 23:49:48 2019-01-26 00:59:50 2019-01-26 01:23:50 0:24:00 smithi master ubuntu 16.04 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

dead 3507006 2019-01-25 23:49:49 2019-01-26 00:59:51 2019-01-26 01:19:50 0:19:59 smithi master rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

pass 3507007 2019-01-25 23:49:49 2019-01-26 01:01:58 2019-01-26 01:19:57 0:17:59 0:08:36 0:09:23 smithi master centos 7.5 rados/singleton-nomsgr/{all/ceph-post-file.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 1
pass 3507008 2019-01-25 23:49:50 2019-01-26 01:01:58 2019-01-26 01:33:57 0:31:59 0:25:57 0:06:02 smithi master rhel 7.5 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
pass 3507009 2019-01-25 23:49:51 2019-01-26 01:03:59 2019-01-26 01:25:58 0:21:59 0:15:21 0:06:38 smithi master rhel 7.5 rados/singleton/{all/max-pg-per-osd.from-replica.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
dead 3507010 2019-01-25 23:49:52 2019-01-26 01:04:00 2019-01-26 01:21:59 0:17:59 smithi master centos 7.5 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml msgr/random.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

fail 3507011 2019-01-25 23:49:52 2019-01-26 01:05:48 2019-01-26 02:15:48 1:10:00 0:58:29 0:11:31 smithi master centos 7.5 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_latest.yaml} tasks/crash.yaml} 2
Failure Reason:

Command failed on smithi068 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

dead 3507012 2019-01-25 23:49:53 2019-01-26 01:05:57 2019-01-26 01:31:56 0:25:59 smithi master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_read_write.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

dead 3507013 2019-01-25 23:49:54 2019-01-26 01:06:08 2019-01-26 01:26:08 0:20:00 smithi master centos 7.5 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

dead 3507014 2019-01-25 23:49:55 2019-01-26 01:07:58 2019-01-26 01:27:57 0:19:59 smithi master ubuntu 16.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/rados_workunit_loadgen_mostlyread.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

pass 3507015 2019-01-25 23:49:56 2019-01-26 01:07:58 2019-01-26 01:29:57 0:21:59 0:14:39 0:07:20 smithi master rhel 7.5 rados/singleton/{all/mon-auth-caps.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
fail 3507016 2019-01-25 23:49:56 2019-01-26 01:07:58 2019-01-26 02:13:58 1:06:00 0:56:03 0:09:57 smithi master ubuntu 16.04 rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/many.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

Command failed on smithi099 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

dead 3507017 2019-01-25 23:49:57 2019-01-26 01:09:59 2019-01-26 01:29:59 0:20:00 smithi master centos 7.5 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

dead 3507018 2019-01-25 23:49:58 2019-01-26 01:10:00 2019-01-26 01:27:59 0:17:59 smithi master ubuntu 18.04 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

dead 3507019 2019-01-25 23:49:59 2019-01-26 01:12:00 2019-01-26 01:29:59 0:17:59 smithi master ubuntu 16.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

dead 3507020 2019-01-25 23:49:59 2019-01-26 01:12:00 2019-01-26 01:29:59 0:17:59 smithi master rhel 7.5 rados/standalone/{supported-random-distro$/{rhel_latest.yaml} workloads/misc.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

dead 3507021 2019-01-25 23:50:00 2019-01-26 01:14:01 2019-01-26 01:34:01 0:20:00 smithi master centos 7.5 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

dead 3507022 2019-01-25 23:50:01 2019-01-26 01:14:02 2019-01-26 01:32:01 0:17:59 smithi master centos 7.5 rados/singleton/{all/mon-config-key-caps.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

fail 3507023 2019-01-25 23:50:02 2019-01-26 01:15:46 2019-01-26 02:19:46 1:04:00 0:55:54 0:08:06 smithi master ubuntu 16.04 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/cosbench_64K_write.yaml} 1
Failure Reason:

Command failed on smithi038 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3507024 2019-01-25 23:50:02 2019-01-26 01:15:46 2019-01-26 02:23:46 1:08:00 0:58:05 0:09:55 smithi master centos 7.5 rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 1
Failure Reason:

Command failed on smithi175 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3507025 2019-01-25 23:50:03 2019-01-26 01:15:51 2019-01-26 01:37:50 0:21:59 smithi master rhel 7.5 rados/singleton/{all/mon-config-keys.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}}
Failure Reason:

machine smithi057.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3507026 2019-01-25 23:50:04 2019-01-26 01:15:52 2019-01-26 01:37:51 0:21:59 smithi master ubuntu 16.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 1
Failure Reason:

machine smithi151.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

dead 3507027 2019-01-25 23:50:05 2019-01-26 01:15:52 2019-01-26 01:35:51 0:19:59 smithi master rhel 7.5 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_latest.yaml} tasks/dashboard.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

fail 3507028 2019-01-25 23:50:05 2019-01-26 01:15:52 2019-01-26 02:21:52 1:06:00 1:00:14 0:05:46 smithi master rhel 7.5 rados/objectstore/{backends/filestore-idempotent-aio-journal.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed on smithi169 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3507029 2019-01-25 23:50:06 2019-01-26 01:15:53 2019-01-26 02:27:54 1:12:01 1:01:24 0:10:37 smithi master ubuntu 18.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/readwrite.yaml} 2
Failure Reason:

Command failed on smithi179 with status 1: 'sudo ceph --cluster ceph osd create aeac0457-ec0f-4006-a36e-43794c48e2cd'

pass 3507030 2019-01-25 23:50:07 2019-01-26 01:15:54 2019-01-26 01:35:53 0:19:59 0:12:39 0:07:20 smithi master rhel 7.5 rados/singleton/{all/mon-config.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
fail 3507031 2019-01-25 23:50:08 2019-01-26 01:15:55 2019-01-26 02:19:55 1:04:00 0:54:53 0:09:07 smithi master ubuntu 16.04 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/fio_4K_rand_read.yaml} 1
Failure Reason:

Command failed on smithi184 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3507032 2019-01-25 23:50:09 2019-01-26 01:17:48 2019-01-26 01:57:48 0:40:00 0:27:45 0:12:15 smithi master centos 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml supported-random-distro$/{centos_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
fail 3507033 2019-01-25 23:50:09 2019-01-26 01:17:48 2019-01-26 01:37:48 0:20:00 smithi master ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/few.yaml msgr/simple.yaml rados.yaml rocksdb.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} 2
Failure Reason:

machine smithi151.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

pass 3507034 2019-01-25 23:50:10 2019-01-26 01:17:48 2019-01-26 01:39:48 0:22:00 0:12:37 0:09:23 smithi master ubuntu 18.04 rados/singleton/{all/osd-backfill.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
fail 3507035 2019-01-25 23:50:11 2019-01-26 01:17:49 2019-01-26 01:39:48 0:21:59 smithi master ubuntu 18.04 rados/multimon/{clusters/21.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_recovery.yaml} 2
Failure Reason:

machine smithi164.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3507036 2019-01-25 23:50:12 2019-01-26 01:17:51 2019-01-26 02:25:51 1:08:00 1:00:14 0:07:46 smithi master rhel 7.5 rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed on smithi080 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3507037 2019-01-25 23:50:12 2019-01-26 01:17:51 2019-01-26 02:27:52 1:10:01 0:59:02 0:10:59 smithi master centos 7.5 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} 2
Failure Reason:

Command failed on smithi005 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3507038 2019-01-25 23:50:13 2019-01-26 01:17:54 2019-01-26 02:27:54 1:10:00 0:56:44 0:13:16 smithi master ubuntu 18.04 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
Failure Reason:

Command failed on smithi103 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3507039 2019-01-25 23:50:14 2019-01-26 01:17:55 2019-01-26 01:53:55 0:36:00 0:24:00 0:12:00 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} 2
fail 3507040 2019-01-25 23:50:14 2019-01-26 01:17:57 2019-01-26 01:41:56 0:23:59 0:15:18 0:08:41 smithi master ubuntu 16.04 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
Failure Reason:

Command failed on smithi196 with status 1: 'sudo ceph --cluster ceph osd create 2f7d3b6b-d8dc-4500-a219-35e7d7b71cdb'

pass 3507041 2019-01-25 23:50:15 2019-01-26 01:17:57 2019-01-26 01:39:57 0:22:00 0:13:56 0:08:04 smithi master ubuntu 16.04 rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
fail 3507042 2019-01-25 23:50:16 2019-01-26 01:18:04 2019-01-26 02:26:04 1:08:00 0:58:02 0:09:58 smithi master centos 7.5 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_latest.yaml} workloads/fio_4K_rand_rw.yaml} 1
Failure Reason:

Command failed on smithi105 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3507043 2019-01-25 23:50:17 2019-01-26 01:19:54 2019-01-26 01:37:54 0:18:00 smithi master ubuntu 16.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-many-deletes.yaml} 1
Failure Reason:

machine smithi151.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3507044 2019-01-25 23:50:18 2019-01-26 01:19:54 2019-01-26 01:39:54 0:20:00 smithi master rhel 7.5 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/repair_test.yaml} 1
Failure Reason:

machine smithi164.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

pass 3507045 2019-01-25 23:50:18 2019-01-26 01:19:54 2019-01-26 01:55:54 0:36:00 0:24:01 0:11:59 smithi master centos 7.5 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-stupid.yaml rados.yaml rocksdb.yaml supported-random-distro$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
fail 3507046 2019-01-25 23:50:19 2019-01-26 01:19:54 2019-01-26 01:37:54 0:18:00 smithi master rhel 7.5 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_latest.yaml} tasks/failover.yaml} 1
Failure Reason:

machine smithi057.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3507047 2019-01-25 23:50:20 2019-01-26 01:19:57 2019-01-26 02:37:57 1:18:00 1:11:12 0:06:48 smithi master rhel 7.5 rados/monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/one.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

Command failed on smithi067 with status 1: 'sudo ceph --cluster ceph osd create 0b93a6ca-4775-412e-9284-21140337eae5'

fail 3507048 2019-01-25 23:50:21 2019-01-26 01:19:57 2019-01-26 02:29:57 1:10:00 0:56:55 0:13:05 smithi master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
Failure Reason:

Command failed on smithi200 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3507049 2019-01-25 23:50:21 2019-01-26 01:19:58 2019-01-26 01:39:57 0:19:59 smithi master rhel 7.5 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml}
Failure Reason:

machine smithi164.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3507050 2019-01-25 23:50:22 2019-01-26 01:19:58 2019-01-26 01:45:58 0:26:00 0:17:57 0:08:03 smithi master rhel 7.5 rados/standalone/{supported-random-distro$/{rhel_latest.yaml} workloads/mon-seesaw.yaml} 1
Failure Reason:

Command failed (workunit test mon/mon-seesaw.sh) on smithi155 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=766a80f7f265c10db5be1a845019d45da54d1eff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-seesaw.sh'

fail 3507051 2019-01-25 23:50:23 2019-01-26 01:20:07 2019-01-26 03:16:08 1:56:01 1:45:51 0:10:10 smithi master ubuntu 18.04 rados/singleton/{all/osd-recovery.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

"2019-01-26 01:36:21.359154 mon.a (mon.0) 23 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log

fail 3507052 2019-01-25 23:50:24 2019-01-26 01:21:59 2019-01-26 01:37:59 0:16:00 smithi master centos 7.5 rados/objectstore/{backends/filestore-idempotent.yaml supported-random-distro$/{centos_latest.yaml}}
Failure Reason:

machine smithi057.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3507053 2019-01-25 23:50:24 2019-01-26 01:21:59 2019-01-26 02:28:00 1:06:01 1:00:43 0:05:18 smithi master rhel 7.5 rados/singleton-nomsgr/{all/health-warnings.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed on smithi197 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3507054 2019-01-25 23:50:25 2019-01-26 01:22:00 2019-01-26 01:47:59 0:25:59 0:14:40 0:11:19 smithi master centos 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{centos_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
fail 3507055 2019-01-25 23:50:26 2019-01-26 01:21:59 2019-01-26 02:25:59 1:04:00 0:55:06 0:08:54 smithi master ubuntu 16.04 rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/fio_4M_rand_read.yaml} 1
Failure Reason:

Command failed on smithi088 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3507056 2019-01-25 23:50:27 2019-01-26 01:22:00 2019-01-26 02:30:01 1:08:01 1:00:52 0:07:09 smithi master rhel 7.5 rados/singleton/{all/peer.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed on smithi027 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3507057 2019-01-25 23:50:27 2019-01-26 01:24:01 2019-01-26 01:38:01 0:14:00 smithi master centos 7.5 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/rgw_snaps.yaml}
Failure Reason:

machine smithi057.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3507058 2019-01-25 23:50:28 2019-01-26 01:24:01 2019-01-26 01:40:01 0:16:00 smithi master ubuntu 16.04 rados/singleton/{all/pg-autoscaler.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
Failure Reason:

machine smithi164.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

pass 3507059 2019-01-25 23:50:29 2019-01-26 01:24:01 2019-01-26 02:12:01 0:48:00 0:35:00 0:13:00 smithi master ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/osd-delay.yaml msgr/async-v1only.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 4
fail 3507060 2019-01-25 23:50:30 2019-01-26 01:25:58 2019-01-26 01:37:57 0:11:59 smithi master ubuntu 16.04 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/insights.yaml} 1
Failure Reason:

machine smithi057.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3507061 2019-01-25 23:50:31 2019-01-26 01:25:58 2019-01-26 01:37:58 0:12:00 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 1
Failure Reason:

machine smithi151.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3507062 2019-01-25 23:50:31 2019-01-26 01:26:00 2019-01-26 01:37:59 0:11:59 smithi master rhel 7.5 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/fio_4M_rand_rw.yaml}
Failure Reason:

machine smithi151.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3507063 2019-01-25 23:50:32 2019-01-26 01:26:09 2019-01-26 01:38:08 0:11:59 smithi master ubuntu 18.04 rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}}
Failure Reason:

machine smithi164.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3507064 2019-01-25 23:50:33 2019-01-26 01:28:09 2019-01-26 01:38:08 0:09:59 smithi master rhel 7.5 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} 1
Failure Reason:

machine smithi164.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3507065 2019-01-25 23:50:33 2019-01-26 01:28:09 2019-01-26 01:38:09 0:10:00 smithi master ubuntu 18.04 rados/singleton-nomsgr/{all/large-omap-object-warnings.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}}
Failure Reason:

machine smithi057.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_sage@teuthology

fail 3507066 2019-01-25 23:50:34 2019-01-26 01:29:51 2019-01-26 02:03:51 0:34:00 0:14:22 0:19:38 smithi master centos 7.5 rados/multimon/{clusters/21.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/mon_recovery.yaml} 3
Failure Reason:

Command failed on smithi023 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3507067 2019-01-25 23:50:35 2019-01-26 01:29:51 2019-01-26 01:51:50 0:21:59 0:15:22 0:06:37 smithi master rhel 7.5 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed on smithi193 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'