User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
sage | 2019-02-05 01:44:22 | 2019-02-05 01:45:24 | 2019-02-05 10:50:38 | 9:05:14 | rados | wip-msgr2-peer-addr | smithi | 38cd095 | 32 | 141 | 27 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 3550514 | 2019-02-05 01:44:36 | 2019-02-05 01:45:23 | 2019-02-05 02:37:22 | 0:51:59 | 0:40:46 | 0:11:13 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
Failure Reason: "2019-02-05 02:00:00.394394 osd.7 (osd.7) 1 : cluster [ERR] map e15 had wrong cluster addr (v1:172.21.15.89:6809/11131 != my 172.21.15.89:6809/11131)" in cluster log
dead | 3550515 | 2019-02-05 01:44:37 | 2019-02-05 01:45:23 | 2019-02-05 10:49:31 | 9:04:08 | | | smithi | master | centos | 7.5 | rados/singleton/{all/lost-unfound.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
fail | 3550516 | 2019-02-05 01:44:38 | 2019-02-05 01:45:24 | 2019-02-05 02:53:24 | 1:08:00 | 0:58:27 | 0:09:33 | smithi | master | centos | 7.5 | rados/singleton-nomsgr/{all/ceph-kvstore-tool.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
Failure Reason: Command failed on smithi066 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
dead | 3550517 | 2019-02-05 01:44:39 | 2019-02-05 01:45:34 | 2019-02-05 10:49:42 | 9:04:08 | | | smithi | master | centos | 7.5 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 | |
fail | 3550518 | 2019-02-05 01:44:39 | 2019-02-05 01:45:35 | 2019-02-05 03:27:35 | 1:42:00 | 1:01:23 | 0:40:37 | smithi | master | rhel | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_latest.yaml} tasks/workunits.yaml} | 2 | |
Failure Reason: Command failed on smithi014 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550519 | 2019-02-05 01:44:40 | 2019-02-05 01:45:38 | 2019-02-05 02:55:38 | 1:10:00 | 0:58:01 | 0:11:59 | smithi | master | centos | 7.5 | rados/singleton/{all/max-pg-per-osd.from-mon.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
Failure Reason: Command failed on smithi122 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
pass | 3550520 | 2019-02-05 01:44:41 | 2019-02-05 01:45:54 | 2019-02-05 02:19:53 | 0:33:59 | 0:25:00 | 0:08:59 | smithi | master | rhel | 7.5 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/rados_workunit_loadgen_mix.yaml} | 2 | |
fail | 3550521 | 2019-02-05 01:44:42 | 2019-02-05 01:46:46 | 2019-02-05 02:40:46 | 0:54:00 | 0:40:32 | 0:13:28 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 3550522 | 2019-02-05 01:44:43 | 2019-02-05 01:47:10 | 2019-02-05 02:53:10 | 1:06:00 | 0:55:23 | 0:10:37 | smithi | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/sample_radosbench.yaml} | 1 | |
Failure Reason: Command failed on smithi198 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550523 | 2019-02-05 01:44:43 | 2019-02-05 01:47:30 | 2019-02-05 03:03:31 | 1:16:01 | 0:55:37 | 0:20:24 | smithi | master | ubuntu | 16.04 | rados/multimon/{clusters/3.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/mon_clock_with_skews.yaml} | 2 | |
Failure Reason: Command failed on smithi074 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550524 | 2019-02-05 01:44:44 | 2019-02-05 01:47:35 | 2019-02-05 02:21:34 | 0:33:59 | 0:24:06 | 0:09:53 | smithi | master | centos | 7.5 | rados/singleton/{all/max-pg-per-osd.from-primary.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
fail | 3550525 | 2019-02-05 01:44:45 | 2019-02-05 01:49:46 | 2019-02-05 02:17:45 | 0:27:59 | 0:15:46 | 0:12:13 | smithi | master | rhel | 7.5 | rados/objectstore/{backends/filejournal.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
Failure Reason: Command failed on smithi155 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550526 | 2019-02-05 01:44:46 | 2019-02-05 01:49:46 | 2019-02-05 03:17:46 | 1:28:00 | 0:55:57 | 0:32:03 | smithi | master | ubuntu | 16.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |
Failure Reason: Command failed on smithi164 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550527 | 2019-02-05 01:44:47 | 2019-02-05 01:49:46 | 2019-02-05 07:11:50 | 5:22:04 | 1:50:39 | 3:31:25 | smithi | master | centos | 7.5 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason: "2019-02-05 05:28:52.484546 mon.a (mon.0) 30 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
fail | 3550528 | 2019-02-05 01:44:48 | 2019-02-05 01:49:50 | 2019-02-05 02:41:50 | 0:52:00 | 0:40:10 | 0:11:50 | smithi | master | | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} | 2 | |
Failure Reason: "2019-02-05 02:05:23.644410 osd.5 (osd.5) 1 : cluster [ERR] map e15 had wrong cluster addr (v1:172.21.15.100:6801/13423 != my 172.21.15.100:6801/13423)" in cluster log
pass | 3550529 | 2019-02-05 01:44:49 | 2019-02-05 01:51:56 | 2019-02-05 02:15:56 | 0:24:00 | 0:09:06 | 0:14:54 | smithi | master | centos | 7.5 | rados/singleton-nomsgr/{all/ceph-post-file.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
fail | 3550530 | 2019-02-05 01:44:49 | 2019-02-05 01:51:57 | 2019-02-05 02:35:56 | 0:43:59 | | | smithi | master | centos | 7.5 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml rocksdb.yaml supported-random-distro$/{centos_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason: Command failed on smithi141 with status 1: 'sudo yum install -y kernel'
fail | 3550531 | 2019-02-05 01:44:50 | 2019-02-05 01:53:45 | 2019-02-05 02:25:44 | 0:31:59 | 0:25:28 | 0:06:31 | smithi | master | rhel | 7.5 | rados/singleton/{all/max-pg-per-osd.from-replica.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
fail | 3550532 | 2019-02-05 01:44:51 | 2019-02-05 01:55:26 | 2019-02-05 07:13:30 | 5:18:04 | 0:48:41 | 4:29:23 | smithi | master | centos | 7.5 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml msgr/random.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
Failure Reason: "2019-02-05 06:37:19.779451 osd.11 (osd.11) 1 : cluster [ERR] map e18 had wrong cluster addr (v1:172.21.15.3:6813/34281 != my 172.21.15.3:6813/34281)" in cluster log
fail | 3550533 | 2019-02-05 01:44:52 | 2019-02-05 01:55:34 | 2019-02-05 03:15:34 | 1:20:00 | 1:01:41 | 0:18:19 | smithi | master | rhel | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{rhel_latest.yaml} tasks/crash.yaml} | 2 | |
Failure Reason: Command failed on smithi071 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550534 | 2019-02-05 01:44:53 | 2019-02-05 01:55:59 | 2019-02-05 03:03:59 | 1:08:00 | 1:02:04 | 0:05:56 | smithi | master | rhel | 7.5 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
Failure Reason: Command failed on smithi178 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
pass | 3550535 | 2019-02-05 01:44:53 | 2019-02-05 01:56:02 | 2019-02-05 02:40:02 | 0:44:00 | 0:31:52 | 0:12:08 | smithi | master | rhel | 7.5 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
fail | 3550536 | 2019-02-05 01:44:54 | 2019-02-05 01:57:26 | 2019-02-05 03:03:26 | 1:06:00 | 0:55:51 | 0:10:09 | smithi | master | ubuntu | 16.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/rados_workunit_loadgen_mostlyread.yaml} | 2 | |
Failure Reason: Command failed on smithi069 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
pass | 3550537 | 2019-02-05 01:44:55 | 2019-02-05 01:57:33 | 2019-02-05 02:19:33 | 0:22:00 | 0:10:07 | 0:11:53 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/mon-auth-caps.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
dead | 3550538 | 2019-02-05 01:44:56 | 2019-02-05 01:57:48 | 2019-02-05 10:49:57 | 8:52:09 | | | smithi | master | ubuntu | 16.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/many.yaml workloads/rados_mon_workunits.yaml} | 2 | |
dead | 3550539 | 2019-02-05 01:44:57 | 2019-02-05 01:59:38 | 2019-02-05 10:49:46 | 8:50:08 | 6:32:06 | 2:18:02 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
Failure Reason: psutil.NoSuchProcess process no longer exists (pid=4446)
fail | 3550540 | 2019-02-05 01:44:58 | 2019-02-05 02:01:27 | 2019-02-05 03:09:27 | 1:08:00 | 0:56:15 | 0:11:45 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason: Command failed on smithi118 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550541 | 2019-02-05 01:44:58 | 2019-02-05 02:02:20 | 2019-02-05 03:42:20 | 1:40:00 | 0:56:15 | 0:43:45 | smithi | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason: "2019-02-05 02:49:18.992232 mon.a (mon.1) 12 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
pass | 3550542 | 2019-02-05 01:44:59 | 2019-02-05 02:03:52 | 2019-02-05 02:31:51 | 0:27:59 | 0:17:16 | 0:10:43 | smithi | master | centos | 7.5 | rados/standalone/{supported-random-distro$/{centos_latest.yaml} workloads/misc.yaml} | 1 | |
fail | 3550543 | 2019-02-05 01:45:00 | 2019-02-05 02:04:30 | 2019-02-05 03:46:30 | 1:42:00 | 0:56:20 | 0:45:40 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 | |
Failure Reason: Command failed on smithi110 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550544 | 2019-02-05 01:45:01 | 2019-02-05 02:07:58 | 2019-02-05 03:23:58 | 1:16:00 | 0:55:36 | 0:20:24 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/mon-config-key-caps.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: Command failed on smithi032 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550545 | 2019-02-05 01:45:02 | 2019-02-05 02:11:41 | 2019-02-05 03:25:41 | 1:14:00 | 0:58:04 | 0:15:56 | smithi | master | centos | 7.5 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_latest.yaml} workloads/cosbench_64K_write.yaml} | 1 | |
Failure Reason: Command failed on smithi138 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550546 | 2019-02-05 01:45:02 | 2019-02-05 02:16:07 | 2019-02-05 03:30:08 | 1:14:01 | 0:58:10 | 0:15:51 | smithi | master | centos | 7.5 | rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
Failure Reason: Command failed on smithi155 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
dead | 3550547 | 2019-02-05 01:45:03 | 2019-02-05 02:17:57 | 2019-02-05 10:50:05 | 8:32:08 | | | smithi | master | rhel | 7.5 | rados/singleton/{all/mon-config-keys.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
fail | 3550548 | 2019-02-05 01:45:04 | 2019-02-05 02:19:44 | 2019-02-05 03:43:45 | 1:24:01 | 0:56:16 | 0:27:45 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason: Command failed on smithi003 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
dead | 3550549 | 2019-02-05 01:45:05 | 2019-02-05 02:19:55 | 2019-02-05 10:50:02 | 8:30:07 | | | smithi | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/dashboard.yaml} | 2 | |
fail | 3550550 | 2019-02-05 01:45:06 | 2019-02-05 02:20:02 | 2019-02-05 03:26:02 | 1:06:00 | 0:55:32 | 0:10:28 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/filestore-idempotent-aio-journal.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: Command failed on smithi062 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550551 | 2019-02-05 01:45:06 | 2019-02-05 02:21:46 | 2019-02-05 03:49:46 | 1:28:00 | 1:01:15 | 0:26:45 | smithi | master | rhel | 7.5 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/readwrite.yaml} | 2 | |
Failure Reason: Command failed on smithi204 with status 1: 'sudo ceph --cluster ceph osd crush tunables optimal'
fail | 3550552 | 2019-02-05 01:45:07 | 2019-02-05 03:03:38 | 2019-02-05 03:53:38 | 0:50:00 | 0:40:29 | 0:09:31 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/mon-config.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: "2019-02-05 03:17:33.267711 osd.0 (osd.0) 1 : cluster [ERR] map e9 had wrong cluster addr (v1:172.21.15.178:6810/11219 != my 172.21.15.178:6810/11219)" in cluster log
fail | 3550553 | 2019-02-05 01:45:08 | 2019-02-05 03:03:39 | 2019-02-05 04:11:39 | 1:08:00 | 0:55:01 | 0:12:59 | smithi | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/fio_4K_rand_read.yaml} | 1 | |
Failure Reason: Command failed on smithi099 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
dead | 3550554 | 2019-02-05 01:45:09 | 2019-02-05 03:04:00 | 2019-02-05 10:50:07 | 7:46:07 | | | smithi | master | rhel | 7.5 | rados/multimon/{clusters/6.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/mon_recovery.yaml} | 2 | |
fail | 3550555 | 2019-02-05 01:45:09 | 2019-02-05 03:09:43 | 2019-02-05 04:07:43 | 0:58:00 | 0:41:55 | 0:16:05 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
Failure Reason: "2019-02-05 03:30:23.223470 osd.1 (osd.1) 1 : cluster [ERR] map e14 had wrong cluster addr (v1:172.21.15.71:6810/11289 != my 172.21.15.71:6810/11289)" in cluster log
fail | 3550556 | 2019-02-05 01:45:10 | 2019-02-05 03:15:40 | 2019-02-05 04:39:45 | 1:24:05 | 0:49:17 | 0:34:48 | smithi | master | centos | 7.5 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml msgr/simple.yaml rados.yaml rocksdb.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 3550557 | 2019-02-05 01:45:11 | 2019-02-05 03:15:40 | 2019-02-05 04:07:40 | 0:52:00 | 0:45:04 | 0:06:56 | smithi | master | rhel | 7.5 | rados/singleton/{all/osd-backfill.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 3550558 | 2019-02-05 01:45:12 | 2019-02-05 03:17:57 | 2019-02-05 04:25:57 | 1:08:00 | 1:00:27 | 0:07:33 | smithi | master | rhel | 7.5 | rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
Failure Reason: Command failed on smithi081 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550559 | 2019-02-05 01:45:13 | 2019-02-05 03:19:48 | 2019-02-05 04:31:49 | 1:12:01 | 0:56:52 | 0:15:09 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
Failure Reason: Command failed on smithi153 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550560 | 2019-02-05 01:45:13 | 2019-02-05 03:24:10 | 2019-02-05 06:20:12 | 2:56:02 | 1:04:29 | 1:51:33 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason: Command failed on smithi156 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550561 | 2019-02-05 01:45:14 | 2019-02-05 03:25:53 | 2019-02-05 04:29:53 | 1:04:00 | 0:52:35 | 0:11:25 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
dead | 3550562 | 2019-02-05 01:45:15 | 2019-02-05 03:25:53 | 2019-02-05 10:50:00 | 7:24:07 | | | smithi | master | centos | 7.5 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
pass | 3550563 | 2019-02-05 01:45:16 | 2019-02-05 03:26:03 | 2019-02-05 03:54:03 | 0:28:00 | 0:15:33 | 0:12:27 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 3550564 | 2019-02-05 01:45:17 | 2019-02-05 03:27:47 | 2019-02-05 04:35:47 | 1:08:00 | 1:00:46 | 0:07:14 | smithi | master | rhel | 7.5 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/fio_4K_rand_rw.yaml} | 1 | |
Failure Reason: Command failed on smithi125 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550565 | 2019-02-05 01:45:17 | 2019-02-05 03:27:47 | 2019-02-05 04:41:47 | 1:14:00 | 0:58:49 | 0:15:11 | smithi | master | centos | 7.5 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-many-deletes.yaml} | 2 | |
Failure Reason: Command failed on smithi155 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550566 | 2019-02-05 01:45:18 | 2019-02-05 03:27:47 | 2019-02-05 04:07:46 | 0:39:59 | 0:31:57 | 0:08:02 | smithi | master | rhel | 7.5 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/repair_test.yaml} | 2 | |
Failure Reason: "2019-02-05 03:45:51.555896 osd.6 (osd.6) 1 : cluster [ERR] map e15 had wrong cluster addr (v1:172.21.15.115:6809/37251 != my 172.21.15.115:6809/37251)" in cluster log
fail | 3550567 | 2019-02-05 01:45:19 | 2019-02-05 03:28:07 | 2019-02-05 04:30:07 | 1:02:00 | 0:41:06 | 0:20:54 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-stupid.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 3550568 | 2019-02-05 01:45:20 | 2019-02-05 03:29:36 | 2019-02-05 04:57:37 | 1:28:01 | 0:58:55 | 0:29:06 | smithi | master | centos | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_latest.yaml} tasks/failover.yaml} | 2 | |
Failure Reason: Command failed on smithi129 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550569 | 2019-02-05 01:45:20 | 2019-02-05 03:30:09 | 2019-02-05 04:48:09 | 1:18:00 | 0:55:57 | 0:22:03 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/one.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason: Command failed on smithi041 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550570 | 2019-02-05 01:45:21 | 2019-02-05 03:31:47 | 2019-02-05 05:17:48 | 1:46:01 | 0:56:49 | 0:49:12 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
Failure Reason: Command failed on smithi168 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550571 | 2019-02-05 01:45:22 | 2019-02-05 03:36:12 | 2019-02-05 04:48:12 | 1:12:00 | 0:56:34 | 0:15:26 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason: Command failed on smithi158 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
pass | 3550572 | 2019-02-05 01:45:23 | 2019-02-05 03:39:15 | 2019-02-05 03:57:14 | 0:17:59 | 0:07:32 | 0:10:27 | smithi | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/mon-seesaw.yaml} | 1 | |
dead | 3550573 | 2019-02-05 01:45:24 | 2019-02-05 03:41:52 | 2019-02-05 10:49:58 | 7:08:06 | | | smithi | master | centos | 7.5 | rados/singleton/{all/osd-recovery.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
fail | 3550574 | 2019-02-05 01:45:25 | 2019-02-05 03:41:52 | 2019-02-05 04:53:52 | 1:12:00 | 1:00:37 | 0:11:23 | smithi | master | rhel | 7.5 | rados/objectstore/{backends/filestore-idempotent.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
Failure Reason: Command failed on smithi110 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550575 | 2019-02-05 01:45:25 | 2019-02-05 03:42:15 | 2019-02-05 04:48:15 | 1:06:00 | 0:55:28 | 0:10:32 | smithi | master | ubuntu | 16.04 | rados/singleton-nomsgr/{all/health-warnings.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: Command failed on smithi177 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
pass | 3550576 | 2019-02-05 01:45:26 | 2019-02-05 03:42:22 | 2019-02-05 04:08:21 | 0:25:59 | 0:16:30 | 0:09:29 | smithi | master | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
fail | 3550577 | 2019-02-05 01:45:27 | 2019-02-05 03:43:56 | 2019-02-05 04:49:56 | 1:06:00 | 0:56:06 | 0:09:54 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4M_rand_read.yaml} | 1 | |
Failure Reason:
Command failed on smithi029 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
dead | 3550578 | 2019-02-05 01:45:28 | 2019-02-05 03:43:56 | 2019-02-05 10:50:02 | 7:06:06 | smithi | master | centos | 7.5 | rados/singleton/{all/peer.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |||
fail | 3550579 | 2019-02-05 01:45:29 | 2019-02-05 03:44:19 | 2019-02-05 04:38:19 | 0:54:00 | 0:40:35 | 0:13:25 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rgw_snaps.yaml} | 2 | |
Failure Reason:
"2019-02-05 04:03:38.627371 mon.b (mon.0) 188 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive, 5 pgs peering (PG_AVAILABILITY)" in cluster log |
fail | 3550580 | 2019-02-05 01:45:29 | 2019-02-05 03:44:36 | 2019-02-05 04:34:36 | 0:50:00 | 0:29:14 | 0:20:46 | smithi | master | rhel | 7.5 | rados/singleton/{all/pg-autoscaler.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 2 | |
Failure Reason:
Command failed (workunit test mon/pg_autoscaler.sh) on smithi198 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=38cd095b11db5cbd5536b33a72906b559d57eb32 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/pg_autoscaler.sh' |
pass | 3550581 | 2019-02-05 01:45:30 | 2019-02-05 03:45:50 | 2019-02-05 05:39:51 | 1:54:01 | 0:34:19 | 1:19:42 | smithi | master | centos | 7.5 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml msgr/async-v1only.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |
fail | 3550582 | 2019-02-05 01:45:31 | 2019-02-05 03:45:50 | 2019-02-05 04:59:51 | 1:14:01 | 0:59:03 | 0:14:58 | smithi | master | centos | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_latest.yaml} tasks/insights.yaml} | 2 | |
Failure Reason:
Command failed on smithi002 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 3550583 | 2019-02-05 01:45:32 | 2019-02-05 03:46:32 | 2019-02-05 05:00:32 | 1:14:00 | 0:56:00 | 0:18:00 | smithi | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason:
Command failed on smithi183 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 3550584 | 2019-02-05 01:45:33 | 2019-02-05 03:47:55 | 2019-02-05 04:55:55 | 1:08:00 | 0:55:37 | 0:12:23 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4M_rand_rw.yaml} | 1 | |
Failure Reason:
Command failed on smithi072 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 3550585 | 2019-02-05 01:45:34 | 2019-02-05 03:47:55 | 2019-02-05 04:41:55 | 0:54:00 | 0:40:23 | 0:13:37 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 3550586 | 2019-02-05 01:45:35 | 2019-02-05 03:47:55 | 2019-02-05 04:53:55 | 1:06:00 | 0:40:14 | 0:25:46 | smithi | master | ubuntu | 16.04 | rados/multimon/{clusters/9.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/mon_clock_no_skews.yaml} | 3 | |
Failure Reason:
"2019-02-05 04:17:33.872958 osd.1 (osd.1) 1 : cluster [ERR] map e8 had wrong cluster addr (v1:172.21.15.119:6801/12711 != my 172.21.15.119:6801/12711)" in cluster log |
fail | 3550587 | 2019-02-05 01:45:36 | 2019-02-05 03:50:05 | 2019-02-05 04:22:04 | 0:31:59 | 0:14:19 | 0:17:40 | smithi | master | centos | 7.5 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} | 2 | |
Failure Reason:
"2019-02-05 04:14:31.547019 mon.a (mon.1) 10 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log |
fail | 3550588 | 2019-02-05 01:45:36 | 2019-02-05 03:52:11 | 2019-02-05 05:00:11 | 1:08:00 | 1:00:43 | 0:07:17 | smithi | master | rhel | 7.5 | rados/singleton-nomsgr/{all/large-omap-object-warnings.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
Failure Reason:
Command failed on smithi019 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
dead | 3550589 | 2019-02-05 01:45:37 | 2019-02-05 03:52:11 | 2019-02-05 10:50:17 | 6:58:06 | smithi | master | ubuntu | 16.04 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |||
fail | 3550590 | 2019-02-05 01:45:38 | 2019-02-05 03:53:50 | 2019-02-05 06:07:51 | 2:14:01 | 1:50:32 | 0:23:29 | smithi | master | centos | 7.5 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml supported-random-distro$/{centos_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
Failure Reason:
"2019-02-05 04:25:01.459536 mon.a (mon.0) 18 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log |
pass | 3550591 | 2019-02-05 01:45:39 | 2019-02-05 03:53:50 | 2019-02-05 04:37:50 | 0:44:00 | 0:33:15 | 0:10:45 | smithi | master | ubuntu | 16.04 | rados/singleton/{all/radostool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
pass | 3550592 | 2019-02-05 01:45:40 | 2019-02-05 03:54:04 | 2019-02-05 04:18:03 | 0:23:59 | 0:08:51 | 0:15:08 | smithi | master | centos | 7.5 | rados/objectstore/{backends/fusestore.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
fail | 3550593 | 2019-02-05 01:45:41 | 2019-02-05 03:55:52 | 2019-02-05 04:59:51 | 1:03:59 | 0:20:20 | 0:43:39 | smithi | master | ubuntu | 16.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/sync-many.yaml workloads/pool-create-delete.yaml} | 2 | |
Failure Reason:
"2019-02-05 04:43:27.669745 osd.3 (osd.3) 1 : cluster [ERR] map e12 had wrong cluster addr (v1:172.21.15.98:6810/13404 != my 172.21.15.98:6810/13404)" in cluster log |
dead | 3550594 | 2019-02-05 01:45:41 | 2019-02-05 03:57:27 | 2019-02-05 10:49:33 | 6:52:06 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |||
fail | 3550595 | 2019-02-05 01:45:42 | 2019-02-05 04:00:02 | 2019-02-05 05:40:03 | 1:40:01 | 0:56:17 | 0:43:44 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
Command failed on smithi198 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
dead | 3550596 | 2019-02-05 01:45:43 | 2019-02-05 04:03:48 | 2019-02-05 10:49:54 | 6:46:06 | smithi | master | centos | 7.5 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |||
fail | 3550597 | 2019-02-05 01:45:44 | 2019-02-05 04:03:48 | 2019-02-05 07:19:51 | 3:16:03 | 1:02:42 | 2:13:21 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason:
Command failed on smithi110 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
pass | 3550598 | 2019-02-05 01:45:45 | 2019-02-05 04:07:51 | 2019-02-05 04:57:51 | 0:50:00 | 0:18:42 | 0:31:18 | smithi | master | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} | 2 | |||
pass | 3550599 | 2019-02-05 01:45:45 | 2019-02-05 04:07:51 | 2019-02-05 04:47:51 | 0:40:00 | 0:10:09 | 0:29:51 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/scrub_test.yaml} | 2 | |
fail | 3550600 | 2019-02-05 01:45:46 | 2019-02-05 04:07:51 | 2019-02-05 05:17:51 | 1:10:00 | 0:55:31 | 0:14:29 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4M_rand_write.yaml} | 1 | |
Failure Reason:
Command failed on smithi099 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
pass | 3550601 | 2019-02-05 01:45:47 | 2019-02-05 04:08:22 | 2019-02-05 04:56:22 | 0:48:00 | 0:33:48 | 0:14:12 | smithi | master | rhel | 7.5 | rados/standalone/{supported-random-distro$/{rhel_latest.yaml} workloads/mon.yaml} | 1 | |
fail | 3550602 | 2019-02-05 01:45:48 | 2019-02-05 04:08:40 | 2019-02-05 05:30:41 | 1:22:01 | 0:59:45 | 0:22:16 | smithi | master | centos | 7.5 | rados/singleton/{all/random-eio.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 2 | |
Failure Reason:
Command failed on smithi032 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
dead | 3550603 | 2019-02-05 01:45:49 | 2019-02-05 05:54:06 | 2019-02-05 10:50:10 | 4:56:04 | smithi | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/module_selftest.yaml} | 2 | |||
fail | 3550604 | 2019-02-05 01:45:50 | 2019-02-05 05:54:06 | 2019-02-05 06:52:06 | 0:58:00 | 0:46:19 | 0:11:41 | smithi | master | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore-stupid.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
Failure Reason:
"2019-02-05 06:15:09.795129 osd.0 (osd.0) 1 : cluster [ERR] map e15 had wrong cluster addr (v1:172.21.15.183:6802/38110 != my 172.21.15.183:6802/38110)" in cluster log |
dead | 3550605 | 2019-02-05 01:45:50 | 2019-02-05 05:54:12 | 2019-02-05 10:50:16 | 4:56:04 | smithi | master | ubuntu | 16.04 | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |||
fail | 3550606 | 2019-02-05 01:45:51 | 2019-02-05 05:56:15 | 2019-02-05 07:02:15 | 1:06:00 | 0:55:39 | 0:10:21 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/librados_hello_world.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
Command failed on smithi121 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 3550607 | 2019-02-05 01:45:52 | 2019-02-05 05:56:15 | 2019-02-05 07:04:15 | 1:08:00 | 0:58:20 | 0:09:40 | smithi | master | centos | 7.5 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_latest.yaml} workloads/radosbench_4K_rand_read.yaml} | 1 | |
Failure Reason:
Command failed on smithi159 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
pass | 3550608 | 2019-02-05 01:45:53 | 2019-02-05 05:56:15 | 2019-02-05 07:22:15 | 1:26:00 | 1:07:29 | 0:18:31 | smithi | master | centos | 7.5 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml msgr/async.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |
fail | 3550609 | 2019-02-05 01:45:54 | 2019-02-05 06:10:24 | 2019-02-05 07:04:24 | 0:54:00 | 0:46:15 | 0:07:45 | smithi | master | rhel | 7.5 | rados/singleton/{all/recovery-preemption.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 3550610 | 2019-02-05 01:45:54 | 2019-02-05 06:10:24 | 2019-02-05 07:22:25 | 1:12:01 | 0:59:03 | 0:12:58 | smithi | master | centos | 7.5 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
Command failed on smithi069 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 3550611 | 2019-02-05 01:45:55 | 2019-02-05 06:11:05 | 2019-02-05 07:03:05 | 0:52:00 | 0:45:30 | 0:06:30 | smithi | master | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 3550612 | 2019-02-05 01:45:56 | 2019-02-05 06:12:01 | 2019-02-05 07:22:01 | 1:10:00 | 0:59:03 | 0:10:57 | smithi | master | centos | 7.5 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=clay-k=4-m=2.yaml} | 2 | |
Failure Reason:
Command failed on smithi201 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 3550613 | 2019-02-05 01:45:57 | 2019-02-05 06:12:01 | 2019-02-05 07:18:01 | 1:06:00 | 0:56:03 | 0:09:57 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/orchestrator_cli.yaml} | 2 | |
Failure Reason:
Command failed on smithi175 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
pass | 3550614 | 2019-02-05 01:45:58 | 2019-02-05 06:12:01 | 2019-02-05 06:30:00 | 0:17:59 | 0:06:12 | 0:11:47 | smithi | master | ubuntu | 16.04 | rados/objectstore/{backends/keyvaluedb.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
fail | 3550615 | 2019-02-05 01:45:59 | 2019-02-05 06:12:05 | 2019-02-05 07:04:05 | 0:52:00 | 0:40:28 | 0:11:32 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/resolve_stuck_peering.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 3550616 | 2019-02-05 01:45:59 | 2019-02-05 06:12:43 | 2019-02-05 07:20:43 | 1:08:00 | 0:55:58 | 0:12:02 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4K_seq_read.yaml} | 1 | |
Failure Reason:
Command failed on smithi029 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
pass | 3550617 | 2019-02-05 01:46:00 | 2019-02-05 06:14:24 | 2019-02-05 06:56:23 | 0:41:59 | 0:32:49 | 0:09:10 | smithi | master | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
fail | 3550618 | 2019-02-05 01:46:01 | 2019-02-05 06:14:25 | 2019-02-05 06:34:24 | 0:19:59 | 0:07:51 | 0:12:08 | smithi | master | centos | 7.5 | rados/singleton-nomsgr/{all/msgr.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
Failure Reason:
Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_msgr' |
fail | 3550619 | 2019-02-05 01:46:02 | 2019-02-05 06:16:24 | 2019-02-05 07:10:24 | 0:54:00 | 0:42:16 | 0:11:44 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/sync.yaml workloads/rados_5925.yaml} | 2 | |
Failure Reason:
"2019-02-05 07:03:03.057811 mon.a (mon.0) 349 : cluster [WRN] Health check failed: 5 osds down (OSD_DOWN)" in cluster log |
fail | 3550620 | 2019-02-05 01:46:03 | 2019-02-05 06:17:49 | 2019-02-05 07:25:49 | 1:08:00 | 1:02:21 | 0:05:39 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
Failure Reason:
Command failed on smithi150 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 3550621 | 2019-02-05 01:46:04 | 2019-02-05 06:18:00 | 2019-02-05 07:28:00 | 1:10:00 | 0:58:28 | 0:11:32 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
Command failed on smithi113 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
pass | 3550622 | 2019-02-05 01:46:04 | 2019-02-05 06:18:08 | 2019-02-05 06:38:07 | 0:19:59 | 0:11:10 | 0:08:49 | smithi | master | centos | 7.5 | rados/singleton/{all/test-crash.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
fail | 3550623 | 2019-02-05 01:46:05 | 2019-02-05 06:20:16 | 2019-02-05 07:14:16 | 0:54:00 | 0:42:16 | 0:11:44 | smithi | master | ubuntu | 18.04 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 3550624 | 2019-02-05 01:46:06 | 2019-02-05 06:20:16 | 2019-02-05 06:54:16 | 0:34:00 | 0:26:58 | 0:07:02 | smithi | master | rhel | 7.5 | rados/multimon/{clusters/21.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/mon_clock_with_skews.yaml} | 3 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 3550625 | 2019-02-05 01:46:07 | 2019-02-05 06:20:16 | 2019-02-05 07:18:16 | 0:58:00 | 0:47:14 | 0:10:46 | smithi | master | centos | 7.5 | rados/standalone/{supported-random-distro$/{centos_latest.yaml} workloads/osd.yaml} | 1 | |
Failure Reason:
Command failed (workunit test osd/osd-backfill-space.sh) on smithi156 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=38cd095b11db5cbd5536b33a72906b559d57eb32 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-backfill-space.sh' |
fail | 3550626 | 2019-02-05 01:46:08 | 2019-02-05 06:21:57 | 2019-02-05 07:31:58 | 1:10:01 | 0:58:46 | 0:11:15 | smithi | master | centos | 7.5 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/rados_cls_all.yaml} | 2 | |
Failure Reason:
Command failed on smithi092 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 3550627 | 2019-02-05 01:46:09 | 2019-02-05 06:21:58 | 2019-02-05 07:29:59 | 1:08:01 | 1:01:36 | 0:06:25 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |
Failure Reason:
Command failed on smithi041 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 3550628 | 2019-02-05 01:46:09 | 2019-02-05 06:22:02 | 2019-02-05 08:16:03 | 1:54:01 | 0:58:16 | 0:55:45 | smithi | master | ubuntu | 16.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason:
Command failed on smithi073 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 3550629 | 2019-02-05 01:46:10 | 2019-02-05 06:22:06 | 2019-02-05 07:42:06 | 1:20:00 | 1:08:22 | 0:11:38 | smithi | master | centos | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2 | ||
Failure Reason:
saw valgrind issues |
fail | 3550630 | 2019-02-05 01:46:11 | 2019-02-05 06:24:02 | 2019-02-05 07:48:02 | 1:24:00 | 1:11:02 | 0:12:58 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
Command failed on smithi072 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 3550631 | 2019-02-05 01:46:12 | 2019-02-05 06:24:02 | 2019-02-05 07:32:03 | 1:08:01 | 1:00:54 | 0:07:07 | smithi | master | rhel | 7.5 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/radosbench_4M_rand_read.yaml} | 1 | |
Failure Reason:
Command failed on smithi026 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 3550632 | 2019-02-05 01:46:13 | 2019-02-05 06:24:04 | 2019-02-05 08:16:05 | 1:52:01 | 0:59:19 | 0:52:42 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason:
Command failed on smithi039 with status 1: 'sudo ceph --cluster ceph osd crush tunables jewel' |
fail | 3550633 | 2019-02-05 01:46:13 | 2019-02-05 06:24:11 | 2019-02-05 07:32:11 | 1:08:00 | 1:01:05 | 0:06:55 | smithi | master | rhel | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_latest.yaml} tasks/progress.yaml} | 2 | |
Failure Reason:
Command failed on smithi071 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 3550634 | 2019-02-05 01:46:14 | 2019-02-05 06:24:22 | 2019-02-05 07:34:22 | 1:10:00 | 0:59:31 | 0:10:29 | smithi | master | centos | 7.5 | rados/singleton/{all/thrash-backfill-full.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 2 | |
Failure Reason:
Command failed on smithi093 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 3550635 | 2019-02-05 01:46:15 | 2019-02-05 06:26:30 | 2019-02-05 08:08:30 | 1:42:00 | 0:43:55 | 0:58:05 | smithi | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/few.yaml msgr/random.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 | |
Failure Reason:
"2019-02-05 07:30:58.853789 osd.4 (osd.4) 1 : cluster [ERR] map e18 had wrong cluster addr (v1:172.21.15.158:6801/33971 != my 172.21.15.158:6801/33971)" in cluster log |
fail | 3550636 | 2019-02-05 01:46:16 | 2019-02-05 06:28:46 | 2019-02-05 08:18:47 | 1:50:01 | 0:55:52 | 0:54:09 | smithi | master | ubuntu | 16.04 | rados/singleton-nomsgr/{all/multi-backfill-reject.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 2 | |
Failure Reason:
Command failed on smithi203 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 3550637 | 2019-02-05 01:46:16 | 2019-02-05 06:30:12 | 2019-02-05 07:38:12 | 1:08:00 | 1:00:24 | 0:07:36 | smithi | master | rhel | 7.5 | rados/objectstore/{backends/objectcacher-stress.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
Failure Reason:
Command failed on smithi096 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
dead | 3550638 | 2019-02-05 01:46:17 | 2019-02-05 06:30:12 | 2019-02-05 10:50:15 | 4:20:03 | smithi | master | centos | 7.5 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml rocksdb.yaml supported-random-distro$/{centos_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |||
fail | 3550639 | 2019-02-05 01:46:18 | 2019-02-05 06:30:18 | 2019-02-05 08:02:19 | 1:32:01 | 0:42:43 | 0:49:18 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/thrash-eio.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |
Failure Reason:
"2019-02-05 07:24:05.894537 osd.4 (osd.4) 1 : cluster [ERR] map e15 had wrong cluster addr (v1:172.21.15.90:6809/1435 != my 172.21.15.90:6809/1435)" in cluster log |
fail | 3550640 | 2019-02-05 01:46:19 | 2019-02-05 06:34:37 | 2019-02-05 08:26:38 | 1:52:01 | 0:57:43 | 0:54:18 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} | 2 | |
Failure Reason: Command failed on smithi112 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550641 | 2019-02-05 01:46:19 | 2019-02-05 06:36:04 | 2019-02-05 08:12:04 | 1:36:00 | 0:56:32 | 0:39:28 | smithi | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/radosbench_4M_seq_read.yaml} | 1 | |
Failure Reason: Command failed on smithi044 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550642 | 2019-02-05 01:46:20 | 2019-02-05 06:36:29 | 2019-02-05 07:48:29 | 1:12:00 | 0:21:22 | 0:50:38 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_python.yaml} | 2 | |
Failure Reason: "2019-02-05 07:31:21.259664 osd.4 (osd.4) 1 : cluster [ERR] map e14 had wrong cluster addr (v1:172.21.15.132:6809/1773 != my 172.21.15.132:6809/1773)" in cluster log
fail | 3550643 | 2019-02-05 01:46:21 | 2019-02-05 06:38:19 | 2019-02-05 07:32:19 | 0:54:00 | 0:43:17 | 0:10:43 | smithi | master | centos | 7.5 | rados/singleton/{all/thrash-rados/{thrash-rados.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 3550644 | 2019-02-05 01:46:22 | 2019-02-05 06:38:19 | 2019-02-05 08:08:19 | 1:30:00 | 0:40:06 | 0:49:54 | smithi | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
Failure Reason: "2019-02-05 07:31:13.290748 osd.6 (osd.6) 1 : cluster [ERR] map e14 had wrong cluster addr (v1:172.21.15.81:6805/32384 != my 172.21.15.81:6805/32384)" in cluster log
fail | 3550645 | 2019-02-05 01:46:23 | 2019-02-05 06:43:51 | 2019-02-05 07:51:51 | 1:08:00 | 1:01:08 | 0:06:52 | smithi | master | rhel | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_latest.yaml} tasks/prometheus.yaml} | 2 | |
Failure Reason: Command failed on smithi002 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
pass | 3550646 | 2019-02-05 01:46:23 | 2019-02-05 06:48:01 | 2019-02-05 07:48:01 | 1:00:00 | 0:21:49 | 0:38:11 | smithi | master | ubuntu | 16.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/force-sync-many.yaml workloads/rados_api_tests.yaml} | 2 | |
dead | 3550647 | 2019-02-05 01:46:24 | 2019-02-05 06:52:18 | 2019-02-05 10:50:21 | 3:58:03 | smithi | master | centos | 7.5 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |||
fail | 3550648 | 2019-02-05 01:46:25 | 2019-02-05 06:53:16 | 2019-02-05 08:01:16 | 1:08:00 | 1:01:29 | 0:06:31 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason: Command failed on smithi059 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550649 | 2019-02-05 01:46:26 | 2019-02-05 06:54:27 | 2019-02-05 08:02:28 | 1:08:01 | 0:58:17 | 0:09:44 | smithi | master | centos | 7.5 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_latest.yaml} workloads/radosbench_4M_write.yaml} | 1 | |
Failure Reason: Command failed on smithi032 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
pass | 3550650 | 2019-02-05 01:46:27 | 2019-02-05 06:56:34 | 2019-02-05 07:42:34 | 0:46:00 | 0:19:07 | 0:26:53 | smithi | master | ubuntu | 16.04 | rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 2 | |
fail | 3550651 | 2019-02-05 01:46:27 | 2019-02-05 07:02:26 | 2019-02-05 08:10:27 | 1:08:01 | 0:58:00 | 0:10:01 | smithi | master | centos | 7.5 | rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
Failure Reason: Command failed on smithi149 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550652 | 2019-02-05 01:46:28 | 2019-02-05 07:03:06 | 2019-02-05 07:53:06 | 0:50:00 | 0:44:52 | 0:05:08 | smithi | master | rhel | 7.5 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
pass | 3550653 | 2019-02-05 01:46:29 | 2019-02-05 07:04:16 | 2019-02-05 08:02:16 | 0:58:00 | 0:45:10 | 0:12:50 | smithi | master | ubuntu | 16.04 | rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/scrub.yaml} | 1 | |
dead | 3550654 | 2019-02-05 01:46:30 | 2019-02-05 07:04:16 | 2019-02-05 07:06:15 | 0:01:59 | smithi | master | ubuntu | 18.04 | rados/multimon/{clusters/3.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_recovery.yaml} | — | |||
Failure Reason: Timeout exceeded. <pexpect.pty_spawn.spawn object at 0x7f36ba5d52d0> command: /usr/bin/ipmitool args: ['/usr/bin/ipmitool', '-H', 'smithi006.ipmi.sepia.ceph.com', '-I', 'lanplus', '-U', 'inktank', '-P', 'ApGNXcA7', 'power', 'off'] buffer (last 100 chars): '' before (last 100 chars): '' after: <class 'pexpect.exceptions.TIMEOUT'> match: None match_index: None exitstatus: None flag_eof: False pid: 19522 child_fd: 8 closed: False timeout: 30 delimiter: <class 'pexpect.exceptions.EOF'> logfile: None logfile_read: None logfile_send: None maxread: 2000 ignorecase: False searchwindowsize: None delaybeforesend: 0.05 delayafterclose: 0.1 delayafterterminate: 0.1 searcher: searcher_re: 0: re.compile("Chassis Power Control: Down/Off") 1: EOF
dead | 3550655 | 2019-02-05 01:46:30 | 2019-02-05 07:04:25 | 2019-02-05 10:50:28 | 3:46:03 | smithi | master | rhel | 7.5 | rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |||
fail | 3550656 | 2019-02-05 01:46:31 | 2019-02-05 07:05:52 | 2019-02-05 07:57:52 | 0:52:00 | 0:45:45 | 0:06:15 | smithi | master | rhel | 7.5 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/rados_stress_watch.yaml} | 2 | |
Failure Reason: "2019-02-05 07:22:59.548478 mon.b (mon.0) 197 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive, 5 pgs peering (PG_AVAILABILITY)" in cluster log
fail | 3550657 | 2019-02-05 01:46:32 | 2019-02-05 07:06:03 | 2019-02-05 08:02:03 | 0:56:00 | 0:43:46 | 0:12:14 | smithi | master | centos | 7.5 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{centos_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 3550658 | 2019-02-05 01:46:33 | 2019-02-05 07:06:17 | 2019-02-05 08:10:17 | 1:04:00 | 0:47:40 | 0:16:20 | smithi | master | centos | 7.5 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml msgr/simple.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
dead | 3550659 | 2019-02-05 01:46:33 | 2019-02-05 07:09:55 | 2019-02-05 10:49:58 | 3:40:03 | smithi | master | centos | 7.5 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |||
fail | 3550660 | 2019-02-05 01:46:34 | 2019-02-05 07:10:25 | 2019-02-05 08:22:25 | 1:12:00 | 0:59:06 | 0:12:54 | smithi | master | centos | 7.5 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |
Failure Reason: Command failed on smithi067 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550661 | 2019-02-05 01:46:35 | 2019-02-05 07:12:02 | 2019-02-05 08:26:02 | 1:14:00 | 1:00:09 | 0:13:51 | smithi | master | centos | 7.5 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason: Command failed on smithi153 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550662 | 2019-02-05 01:46:36 | 2019-02-05 07:13:41 | 2019-02-05 08:25:42 | 1:12:01 | 0:56:45 | 0:15:16 | smithi | master | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} | 2 | |||
Failure Reason: Command failed on smithi119 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550663 | 2019-02-05 01:46:36 | 2019-02-05 07:14:17 | 2019-02-05 08:22:17 | 1:08:00 | 1:00:41 | 0:07:19 | smithi | master | rhel | 7.5 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/sample_fio.yaml} | 1 | |
Failure Reason: Command failed on smithi097 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550664 | 2019-02-05 01:46:37 | 2019-02-05 07:18:13 | 2019-02-05 08:24:13 | 1:06:00 | 0:55:33 | 0:10:27 | smithi | master | ubuntu | 16.04 | rados/singleton/{all/admin-socket.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: Command failed on smithi175 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550665 | 2019-02-05 01:46:38 | 2019-02-05 07:18:17 | 2019-02-05 08:30:18 | 1:12:01 | 0:55:54 | 0:16:07 | smithi | master | ubuntu | 16.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 | |
Failure Reason: Command failed on smithi156 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
pass | 3550666 | 2019-02-05 01:46:39 | 2019-02-05 07:20:03 | 2019-02-05 07:44:02 | 0:23:59 | 0:12:03 | 0:11:56 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
fail | 3550667 | 2019-02-05 01:46:40 | 2019-02-05 07:20:44 | 2019-02-05 08:30:44 | 1:10:00 | 0:58:54 | 0:11:06 | smithi | master | centos | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_latest.yaml} tasks/workunits.yaml} | 2 | |
Failure Reason: Command failed on smithi091 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
dead | 3550668 | 2019-02-05 01:46:40 | 2019-02-05 07:22:13 | 2019-02-05 10:50:16 | 3:28:03 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |||
fail | 3550669 | 2019-02-05 01:46:41 | 2019-02-05 07:22:17 | 2019-02-05 08:12:17 | 0:50:00 | 0:39:24 | 0:10:36 | smithi | master | ubuntu | 16.04 | rados/singleton/{all/divergent_priors.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: "2019-02-05 07:35:25.559852 osd.0 (osd.0) 1 : cluster [ERR] map e10 had wrong cluster addr (v1:172.21.15.116:6806/13086 != my 172.21.15.116:6806/13086)" in cluster log
pass | 3550670 | 2019-02-05 01:46:42 | 2019-02-05 07:22:26 | 2019-02-05 07:44:25 | 0:21:59 | 0:10:43 | 0:11:16 | smithi | master | centos | 7.5 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/rados_striper.yaml} | 2 | |
fail | 3550671 | 2019-02-05 01:46:43 | 2019-02-05 07:24:15 | 2019-02-05 08:30:15 | 1:06:00 | 0:55:32 | 0:10:28 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/sample_radosbench.yaml} | 1 | |
Failure Reason: Command failed on smithi076 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550672 | 2019-02-05 01:46:44 | 2019-02-05 07:26:01 | 2019-02-05 08:18:01 | 0:52:00 | 0:43:25 | 0:08:35 | smithi | master | centos | 7.5 | rados/singleton/{all/divergent_priors2.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 3550673 | 2019-02-05 01:46:44 | 2019-02-05 07:28:12 | 2019-02-05 09:24:13 | 1:56:01 | 1:46:23 | 0:09:38 | smithi | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
Failure Reason: "2019-02-05 07:42:05.137958 mon.a (mon.1) 11 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
dead | 3550674 | 2019-02-05 01:46:45 | 2019-02-05 07:30:10 | 2019-02-05 10:50:13 | 3:20:03 | 3:09:41 | 0:10:22 | smithi | master | ubuntu | 16.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/many.yaml workloads/rados_mon_osdmap_prune.yaml} | 2 | |
Failure Reason: SSH connection to smithi099 was lost: "sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
fail | 3550675 | 2019-02-05 01:46:46 | 2019-02-05 07:30:10 | 2019-02-05 08:38:10 | 1:08:00 | 0:55:58 | 0:12:02 | smithi | master | ubuntu | 16.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
Failure Reason: Command failed on smithi174 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550676 | 2019-02-05 01:46:47 | 2019-02-05 07:30:10 | 2019-02-05 08:38:11 | 1:08:01 | 1:00:54 | 0:07:07 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason: Command failed on smithi077 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
pass | 3550677 | 2019-02-05 01:46:47 | 2019-02-05 07:32:09 | 2019-02-05 07:48:09 | 0:16:00 | 0:07:46 | 0:08:14 | smithi | master | ubuntu | 16.04 | rados/singleton/{all/dump-stuck.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
fail | 3550678 | 2019-02-05 01:46:48 | 2019-02-05 07:32:10 | 2019-02-05 08:38:10 | 1:06:00 | 0:56:03 | 0:09:57 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/crash.yaml} | 2 | |
Failure Reason: Command failed on smithi125 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550679 | 2019-02-05 01:46:49 | 2019-02-05 07:32:12 | 2019-02-05 08:36:12 | 1:04:00 | 0:54:57 | 0:09:03 | smithi | master | ubuntu | 16.04 | rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: Command failed on smithi143 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550680 | 2019-02-05 01:46:50 | 2019-02-05 07:32:20 | 2019-02-05 08:40:20 | 1:08:00 | 1:00:29 | 0:07:31 | smithi | master | rhel | 7.5 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
Failure Reason: Command failed on smithi168 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550681 | 2019-02-05 01:46:51 | 2019-02-05 07:34:34 | 2019-02-05 08:38:34 | 1:04:00 | 0:55:03 | 0:08:57 | smithi | master | ubuntu | 16.04 | rados/rest/{mgr-restful.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: Command failed on smithi104 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
pass | 3550682 | 2019-02-05 01:46:51 | 2019-02-05 07:34:51 | 2019-02-05 08:18:51 | 0:44:00 | 0:33:40 | 0:10:20 | smithi | master | centos | 7.5 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
fail | 3550683 | 2019-02-05 01:46:52 | 2019-02-05 07:36:34 | 2019-02-05 08:04:33 | 0:27:59 | 0:18:58 | 0:09:01 | smithi | master | centos | rados/singleton-flat/valgrind-leaks.yaml | 1 | ||
Failure Reason: Command failed on smithi057 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550684 | 2019-02-05 01:46:53 | 2019-02-05 07:38:24 | 2019-02-05 08:44:24 | 1:06:00 | 0:55:26 | 0:10:34 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: Command failed on smithi096 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
pass | 3550685 | 2019-02-05 01:46:54 | 2019-02-05 07:42:18 | 2019-02-05 08:02:17 | 0:19:59 | 0:13:24 | 0:06:35 | smithi | master | rhel | 7.5 | rados/standalone/{supported-random-distro$/{rhel_latest.yaml} workloads/crush.yaml} | 1 | |
dead | 3550686 | 2019-02-05 01:46:54 | 2019-02-05 07:42:35 | 2019-02-05 10:50:38 | 3:08:03 | smithi | master | rhel | 7.5 | rados/upgrade/luminous-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} supported-random-distro$/{rhel_latest.yaml} thrashosds-health.yaml} | 4 | |||
fail | 3550687 | 2019-02-05 01:46:55 | 2019-02-05 07:44:14 | 2019-02-05 08:50:14 | 1:06:00 | 0:55:52 | 0:10:08 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason: Command failed on smithi201 with status 1: 'sudo ceph --cluster ceph osd crush tunables jewel'
dead | 3550688 | 2019-02-05 01:46:56 | 2019-02-05 07:44:26 | 2019-02-05 10:50:29 | 3:06:03 | smithi | master | ubuntu | 16.04 | rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |||
pass | 3550689 | 2019-02-05 01:46:57 | 2019-02-05 07:48:12 | 2019-02-05 08:12:12 | 0:24:00 | 0:11:35 | 0:12:25 | smithi | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml msgr/async-v1only.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 4 | |
fail | 3550690 | 2019-02-05 01:46:58 | 2019-02-05 07:48:12 | 2019-02-05 08:56:12 | 1:08:00 | 0:55:52 | 0:12:08 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_workunit_loadgen_big.yaml} | 2 | |
Failure Reason: Command failed on smithi103 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
dead | 3550691 | 2019-02-05 01:46:58 | 2019-02-05 07:48:12 | 2019-02-05 10:50:15 | 3:02:03 | smithi | master | centos | 7.5 | rados/multimon/{clusters/6.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/mon_clock_no_skews.yaml} | 2 | |||
fail | 3550692 | 2019-02-05 01:46:59 | 2019-02-05 07:48:31 | 2019-02-05 08:56:31 | 1:08:00 | 1:01:24 | 0:06:36 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 | |
Failure Reason: Command failed on smithi136 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
pass | 3550693 | 2019-02-05 01:47:00 | 2019-02-05 07:52:02 | 2019-02-05 08:08:01 | 0:15:59 | 0:11:05 | 0:04:54 | smithi | master | rhel | 7.5 | rados/singleton/{all/erasure-code-nonregression.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
fail | 3550694 | 2019-02-05 01:47:01 | 2019-02-05 07:53:17 | 2019-02-05 08:45:17 | 0:52:00 | 0:46:00 | 0:06:00 | smithi | master | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason: "2019-02-05 08:09:04.588028 osd.5 (osd.5) 1 : cluster [ERR] map e14 had wrong cluster addr (v1:172.21.15.130:6806/5889 != my 172.21.15.130:6806/5889)" in cluster log
fail | 3550695 | 2019-02-05 01:47:01 | 2019-02-05 07:58:03 | 2019-02-05 09:02:03 | 1:04:00 | 0:55:07 | 0:08:53 | smithi | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/cosbench_64K_write.yaml} | 1 | |
Failure Reason: Command failed on smithi041 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550696 | 2019-02-05 01:47:02 | 2019-02-05 08:01:17 | 2019-02-05 09:09:18 | 1:08:01 | 0:56:15 | 0:11:46 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
Failure Reason: Command failed on smithi144 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550697 | 2019-02-05 01:47:03 | 2019-02-05 08:02:14 | 2019-02-05 09:12:15 | 1:10:01 | 1:02:15 | 0:07:46 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason: Command failed on smithi129 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550698 | 2019-02-05 01:47:04 | 2019-02-05 08:02:18 | 2019-02-05 09:06:18 | 1:04:00 | 0:52:17 | 0:11:43 | smithi | master | centos | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | ||
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 3550699 | 2019-02-05 01:47:05 | 2019-02-05 08:02:18 | 2019-02-05 08:58:18 | 0:56:00 | 0:42:50 | 0:13:10 | smithi | master | centos | 7.5 | rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 3550700 | 2019-02-05 01:47:05 | 2019-02-05 08:02:20 | 2019-02-05 09:10:20 | 1:08:00 | 1:00:54 | 0:07:06 | smithi | master | rhel | 7.5 | rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
Failure Reason: Command failed on smithi032 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550701 | 2019-02-05 01:47:06 | 2019-02-05 08:02:29 | 2019-02-05 09:08:29 | 1:06:00 | 0:55:30 | 0:10:30 | smithi | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason: Command failed on smithi165 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550702 | 2019-02-05 01:47:07 | 2019-02-05 08:04:37 | 2019-02-05 08:58:37 | 0:54:00 | 0:46:43 | 0:07:17 | smithi | master | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
dead | 3550703 | 2019-02-05 01:47:08 | 2019-02-05 08:08:13 | 2019-02-05 10:50:15 | 2:42:02 | smithi | master | centos | 7.5 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/rados_workunit_loadgen_mix.yaml} | 2 | |||
fail | 3550704 | 2019-02-05 01:47:09 | 2019-02-05 08:08:20 | 2019-02-05 09:00:20 | 0:52:00 | 0:45:04 | 0:06:56 | smithi | master | rhel | 7.5 | rados/singleton/{all/lost-unfound.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 3550705 | 2019-02-05 01:47:09 | 2019-02-05 08:08:31 | 2019-02-05 09:14:32 | 1:06:01 | 1:00:36 | 0:05:25 | smithi | master | rhel | 7.5 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/fio_4K_rand_read.yaml} | 1 | |
Failure Reason: Command failed on smithi181 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 3550706 | 2019-02-05 01:47:10 | 2019-02-05 08:10:28 | 2019-02-05 09:14:28 | 1:04:00 | 0:55:04 | 0:08:56 | smithi | master | ubuntu | 16.04 | rados/objectstore/{backends/ceph_objectstore_tool.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: Command failed on smithi149 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
dead | 3550707 | 2019-02-05 01:47:11 | 2019-02-05 08:10:28 | 2019-02-05 10:50:30 | 2:40:02 | smithi | master | centos | 7.5 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/one.yaml workloads/rados_mon_workunits.yaml} | 2 | |||
fail | 3550708 | 2019-02-05 01:47:12 | 2019-02-05 08:12:16 | 2019-02-05 10:12:17 | 2:00:01 | 1:52:22 | 0:07:39 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
Failure Reason: "2019-02-05 08:29:29.722140 mon.a (mon.1) 11 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
fail | 3550709 | 2019-02-05 01:47:13 | 2019-02-05 08:12:16 | 2019-02-05 10:10:17 | 1:58:01 | 1:51:37 | 0:06:24 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason: "2019-02-05 08:28:47.852289 mon.b (mon.1) 13 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
pass | 3550710 | 2019-02-05 01:47:13 | 2019-02-05 08:12:18 | 2019-02-05 08:32:18 | 0:20:00 | 0:10:09 | 0:09:51 | smithi | master | centos | 7.5 | rados/singleton/{all/max-pg-per-osd.from-mon.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
pass | 3550711 | 2019-02-05 01:47:14 | 2019-02-05 08:16:14 | 2019-02-05 08:52:14 | 0:36:00 | 0:23:53 | 0:12:07 | smithi | master | centos | 7.5 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml rocksdb.yaml supported-random-distro$/{centos_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
pass | 3550712 | 2019-02-05 01:47:15 | 2019-02-05 08:16:14 | 2019-02-05 08:52:14 | 0:36:00 | 0:22:53 | 0:13:07 | smithi | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/few.yaml msgr/async.yaml rados.yaml rocksdb.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |
pass | 3550713 | 2019-02-05 01:47:16 | 2019-02-05 08:18:12 | 2019-02-05 08:58:12 | 0:40:00 | 0:31:43 | 0:08:17 | smithi | master | ubuntu | 16.04 | rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/erasure-code.yaml} | 1 |