User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
kchai | 2020-07-31 01:42:48 | 2020-07-31 11:11:38 | 2020-07-31 15:41:23 | 4:29:45 | rados | wip-kefu-testing-2020-07-30-2107 | smithi | 61c2eda | 7 | 4 |

Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 5271962 | 2020-07-31 01:43:00 | 2020-07-31 11:11:38 | 2020-07-31 11:33:37 | 0:21:59 | 0:12:39 | 0:09:20 | smithi | master | rhel | 8.1 | rados/cephadm/smoke-roleless/{distro/rhel_latest start} | 2 | |
pass | 5271963 | 2020-07-31 01:43:00 | 2020-07-31 11:11:38 | 2020-07-31 12:05:38 | 0:54:00 | 0:33:21 | 0:20:39 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_7.6} msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/cache-snaps} | 3 | |
fail | 5271964 | 2020-07-31 01:43:01 | 2020-07-31 11:13:29 | 2020-07-31 11:31:28 | 0:17:59 | 0:04:59 | 0:13:00 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/dedup_tier} | 2 | |
Failure Reason: Command failed on smithi203 with status 1: 'sudo yum -y install ceph-mgr-diskprediction-local'
pass | 5271965 | 2020-07-31 01:43:02 | 2020-07-31 11:13:29 | 2020-07-31 11:39:28 | 0:25:59 | 0:12:42 | 0:13:17 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph objectstore/bluestore-basic-min-osd-mem-target openstack settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} | 1 | |
pass | 5271966 | 2020-07-31 01:43:03 | 2020-07-31 11:13:30 | 2020-07-31 11:53:30 | 0:40:00 | 0:25:33 | 0:14:27 | smithi | master | rhel | 8.0 | rados/cephadm/with-work/{distro/rhel_8.0 fixed-2 mode/root msgr/async-v1only start tasks/rados_python} | 2 | |
pass | 5271967 | 2020-07-31 01:43:04 | 2020-07-31 11:13:30 | 2020-07-31 14:43:35 | 3:30:05 | 3:17:58 | 0:12:07 | smithi | master | centos | 8.1 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados tasks/rados_cls_all validater/valgrind} | 2 | |
pass | 5271968 | 2020-07-31 01:43:04 | 2020-07-31 11:13:45 | 2020-07-31 11:33:44 | 0:19:59 | 0:12:56 | 0:07:03 | smithi | master | rhel | 8.1 | rados/cephadm/smoke-roleless/{distro/rhel_latest start} | 2 | |
fail | 5271969 | 2020-07-31 01:43:05 | 2020-07-31 11:15:05 | 2020-07-31 11:35:04 | 0:19:59 | 0:13:12 | 0:06:47 | smithi | master | centos | 8.1 | rados/singleton/{all/thrash_cache_writeback_proxy_none msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 2 | |
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op copy_from 50 --op write_excl 50 --pool base'
fail | 5271970 | 2020-07-31 01:43:06 | 2020-07-31 11:15:12 | 2020-07-31 12:03:12 | 0:48:00 | 0:24:36 | 0:23:24 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/few rados thrashers/default thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 5271971 | 2020-07-31 01:43:07 | 2020-07-31 11:16:01 | 2020-07-31 12:24:02 | 1:08:01 | 1:01:05 | 0:06:56 | smithi | master | centos | 8.1 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados tasks/rados_api_tests validater/valgrind} | 2 | |
fail | 5271972 | 2020-07-31 01:43:08 | 2020-07-31 11:17:16 | 2020-07-31 15:41:23 | 4:24:07 | 3:56:35 | 0:27:32 | smithi | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} | 4 | |
Failure Reason: "2020-07-31T14:08:20.993682+0000 osd.10 (osd.10) 100 : cluster [ERR] 5.0 deep-scrub : stat mismatch, got 1/4 objects, 0/0 clones, 0/0 dirty, 0/0 omap, 0/0 pinned, 1/4 hit_set_archive, 0/0 whiteouts, 1189/6549 bytes, 0/0 manifest objects, 1189/6549 hit_set_archive bytes." in cluster log