User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2020-02-26 02:45:45 | 2020-02-26 02:49:21 | 2020-02-26 13:25:47 | 10:36:26 | rados | nautilus | smithi | d7e0d07 | 4 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 4802524 | 2020-02-26 02:45:56 | 2020-02-26 02:49:18 | 2020-02-26 04:07:21 | 1:18:03 | 1:09:14 | 0:08:49 | smithi | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/scrub.yaml} | 1 |

Failure Reason:

```
Command failed (workunit test scrub/osd-scrub-snaps.sh) on smithi191 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d7e0d07101f75950ae482d2aeb08d3eeab024ade TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-snaps.sh'
```
pass | 4802525 | 2020-02-26 02:45:57 | 2020-02-26 02:49:21 | 2020-02-26 13:25:47 | 10:36:26 | 2:41:43 | 7:54:43 | smithi | master | ubuntu | 18.04 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashosds-health.yaml} | 4 |
fail | 4802526 | 2020-02-26 02:45:59 | 2020-02-26 02:49:21 | 2020-02-26 03:11:23 | 0:22:02 | 0:08:00 | 0:14:02 | smithi | master | ubuntu | 16.04 | rados/singleton-nomsgr/{all/balancer.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 |

Failure Reason:

```
"2020-02-26 03:08:56.583241 mon.a (mon.0) 108 : cluster [WRN] Health check failed: Reduced data availability: 3 pgs inactive, 3 pgs peering (PG_AVAILABILITY)" in cluster log
```
pass | 4802527 | 2020-02-26 02:46:00 | 2020-02-26 02:50:23 | 2020-02-26 11:42:43 | 8:52:20 | 0:27:51 | 8:24:29 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml msgr/random.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 |
pass | 4802528 | 2020-02-26 02:46:02 | 2020-02-26 02:53:33 | 2020-02-26 03:13:35 | 0:20:02 | 0:10:43 | 0:09:19 | smithi | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/fio_4M_rand_write.yaml} | 1 |
pass | 4802529 | 2020-02-26 02:46:03 | 2020-02-26 02:54:43 | 2020-02-26 06:00:47 | 3:06:04 | 2:54:37 | 0:11:27 | smithi | master | ubuntu | 16.04 | rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/osd.yaml} | 1 |