User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---
sage | 2019-01-28 12:58:43 | 2019-01-28 13:00:23 | 2019-01-28 16:06:45 | 3:06:22 | rados | wip-sage-testing-2019-01-28-0218 | smithi | f91b4e1 | 2 | 5 | 3 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---
fail | 3520226 | 2019-01-28 12:58:53 | 2019-01-28 13:00:23 | 2019-01-28 13:20:22 | 0:19:59 | 0:11:20 | 0:08:39 | smithi | master | rhel | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_latest.yaml} tasks/progress.yaml} | 2 |
fail | 3520227 | 2019-01-28 12:58:54 | 2019-01-28 13:00:56 | 2019-01-28 13:44:55 | 0:43:59 | 0:32:07 | 0:11:52 | smithi | master | ubuntu | 16.04 | rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 |
fail | 3520228 | 2019-01-28 12:58:54 | 2019-01-28 13:02:56 | 2019-01-28 13:48:56 | 0:46:00 | 0:12:15 | 0:33:45 | smithi | master | centos | 7.5 | rados/rest/{mgr-restful.yaml supported-random-distro$/{centos_latest.yaml}} | 1 |
fail | 3520229 | 2019-01-28 12:58:55 | 2019-01-28 13:02:56 | 2019-01-28 14:26:56 | 1:24:00 | 0:49:39 | 0:34:21 | smithi | master | centos | | rados/singleton-flat/valgrind-leaks.yaml | 1 |
fail | 3520230 | 2019-01-28 12:58:56 | 2019-01-28 13:03:01 | 2019-01-28 13:55:01 | 0:52:00 | 0:22:05 | 0:29:55 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 |
dead | 3520231 | 2019-01-28 12:58:57 | 2019-01-28 13:03:10 | 2019-01-28 16:05:12 | 3:02:02 | 2:34:13 | 0:27:49 | smithi | master | ubuntu | 16.04 | rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/crush.yaml} | 1 |
dead | 3520232 | 2019-01-28 12:58:57 | 2019-01-28 13:04:43 | 2019-01-28 16:06:45 | 3:02:02 | | | smithi | master | ubuntu | 18.04 | rados/upgrade/luminous-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashosds-health.yaml} | 4 |
dead | 3520233 | 2019-01-28 12:58:58 | 2019-01-28 13:04:50 | 2019-01-28 16:04:52 | 3:00:02 | | | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_stress_watch.yaml} | 2 |
pass | 3520234 | 2019-01-28 12:58:59 | 2019-01-28 13:04:52 | 2019-01-28 14:10:52 | 1:06:00 | 0:18:22 | 0:47:38 | smithi | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 |
pass | 3520235 | 2019-01-28 12:59:00 | 2019-01-28 13:04:52 | 2019-01-28 14:04:52 | 1:00:00 | 0:13:12 | 0:46:48 | smithi | master | rhel | 7.5 | rados/singleton/{all/mon-config.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 |

Failure Reasons:

- 3520226: "2019-01-28 13:17:48.952802 mon.a (mon.0) 121 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log
- 3520227: Command failed (workunit test rados/test_alloc_hint.sh) on smithi169 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f91b4e1a8594acd3fc9e56382bc90770978723ec TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_alloc_hint.sh'
- 3520228: "2019-01-28 13:43:28.181023 mon.a (mon.0) 132 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
- 3520229: "2019-01-28 13:49:22.774498 mon.a (mon.0) 68 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
- 3520230: "2019-01-28 13:35:33.298729 mon.a (mon.0) 97 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
- 3520231: psutil.NoSuchProcess process no longer exists (pid=1012)