User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2017-09-10 05:00:16 | 2017-09-10 05:04:45 | 2017-09-10 10:05:50 | 5:01:05 | smoke | master | vps | f34821c | 12 | 16 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 1616185 | | 2017-09-10 05:01:44 | 2017-09-10 05:04:45 | 2017-09-10 05:20:41 | 0:15:56 | 0:11:21 | 0:04:35 | vps | master | ubuntu | 16.04 | smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} | 1 | Command failed on vpm027 with status 32: 'sudo umount /dev/vdb1' |
pass | 1616186 | | 2017-09-10 05:01:45 | 2017-09-10 05:15:23 | 2017-09-10 05:53:20 | 0:37:57 | 0:28:24 | 0:09:33 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} | 3 | |
pass | 1616189 | | 2017-09-10 05:01:46 | 2017-09-10 05:17:53 | 2017-09-10 07:59:54 | 2:42:01 | 0:45:34 | 1:56:27 | vps | master | centos | 7.3 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 | |
fail | 1616191 | | 2017-09-10 05:01:46 | 2017-09-10 05:19:43 | 2017-09-10 05:43:42 | 0:23:59 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} | 3 | Could not reconnect to ubuntu@vpm199.front.sepia.ceph.com |
pass | 1616192 | | 2017-09-10 05:01:47 | 2017-09-10 05:21:04 | 2017-09-10 07:56:59 | 2:35:55 | 0:38:19 | 1:57:36 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} | 3 | |
pass | 1616194 | | 2017-09-10 05:01:47 | 2017-09-10 05:21:43 | 2017-09-10 08:55:46 | 3:34:03 | 2:20:34 | 1:13:29 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} | 3 | |
pass | 1616197 | | 2017-09-10 05:01:48 | 2017-09-10 05:21:43 | 2017-09-10 05:51:42 | 0:29:59 | 0:17:53 | 0:12:06 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} | 3 | |
fail | 1616199 | | 2017-09-10 05:01:49 | 2017-09-10 05:21:46 | 2017-09-10 06:27:45 | 1:05:59 | 0:48:40 | 0:17:19 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} | 3 | "2017-09-10 06:17:23.664474 mon.a mon.0 172.21.2.47:6789/0 127 : cluster [WRN] daemon mds.a is not responding, replacing it as rank 0 with standby daemon mds.a-s" in cluster log |
fail | 1616201 | | 2017-09-10 05:01:49 | 2017-09-10 05:27:44 | 2017-09-10 06:49:41 | 1:21:57 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} | 3 | Could not reconnect to ubuntu@vpm085.front.sepia.ceph.com |
pass | 1616203 | | 2017-09-10 05:01:50 | 2017-09-10 05:31:21 | 2017-09-10 08:45:25 | 3:14:04 | 2:32:03 | 0:42:01 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
pass | 1616205 | | 2017-09-10 05:01:50 | 2017-09-10 05:31:47 | 2017-09-10 06:19:47 | 0:48:00 | 0:21:40 | 0:26:20 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} | 3 | |
fail | 1616207 | | 2017-09-10 05:01:51 | 2017-09-10 05:33:43 | 2017-09-10 08:59:47 | 3:26:04 | 0:31:38 | 2:54:26 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} | 3 | "2017-09-10 08:42:06.378443 mon.a mon.0 172.21.2.19:6789/0 165 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log |
fail | 1616209 | | 2017-09-10 05:01:52 | 2017-09-10 05:33:47 | 2017-09-10 08:37:52 | 3:04:05 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} | 3 | Could not reconnect to ubuntu@vpm015.front.sepia.ceph.com |
fail | 1616211 | | 2017-09-10 05:01:52 | 2017-09-10 05:43:45 | 2017-09-10 08:53:48 | 3:10:03 | 0:49:44 | 2:20:19 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} | 3 | "2017-09-10 08:16:12.728781 mon.a mon.0 172.21.2.67:6789/0 119 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log |
fail | 1616213 | | 2017-09-10 05:01:53 | 2017-09-10 05:51:50 | 2017-09-10 10:05:50 | 4:14:00 | 0:56:41 | 3:17:19 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} | 3 | "2017-09-10 09:23:42.202975 mon.a mon.0 172.21.2.9:6789/0 125 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log |
fail | 1616214 | | 2017-09-10 05:01:53 | 2017-09-10 05:53:27 | 2017-09-10 08:09:31 | 2:16:04 | | | vps | master | ubuntu | 16.04 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 | Could not reconnect to ubuntu@vpm091.front.sepia.ceph.com |
fail | 1616216 | | 2017-09-10 05:01:54 | 2017-09-10 06:07:41 | 2017-09-10 06:49:41 | 0:42:00 | 0:21:17 | 0:20:43 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} | 3 | Command failed (workunit test cls/test_cls_sdk.sh) on vpm021 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh' |
fail | 1616218 | | 2017-09-10 05:01:55 | 2017-09-10 06:19:51 | 2017-09-10 08:21:51 | 2:02:00 | 0:41:52 | 1:20:08 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} | 3 | "2017-09-10 07:53:33.393440 mon.b mon.0 172.21.2.27:6789/0 133 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log |
fail | 1616220 | | 2017-09-10 05:01:55 | 2017-09-10 06:27:51 | 2017-09-10 06:47:50 | 0:19:59 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} | 3 | Could not reconnect to ubuntu@vpm129.front.sepia.ceph.com |
pass | 1616222 | | 2017-09-10 05:01:56 | 2017-09-10 06:31:38 | 2017-09-10 08:25:37 | 1:53:59 | 0:39:57 | 1:14:02 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} | 3 | |
fail | 1616224 | | 2017-09-10 05:01:57 | 2017-09-10 06:37:40 | 2017-09-10 07:35:41 | 0:58:01 | 0:29:30 | 0:28:31 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} | 3 | "2017-09-10 07:21:09.378849 mon.a mon.0 172.21.2.1:6789/0 194 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log |
pass | 1616226 | | 2017-09-10 05:01:57 | 2017-09-10 06:44:01 | 2017-09-10 07:15:56 | 0:31:55 | 0:19:24 | 0:12:31 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} | 3 | |
fail | 1616228 | | 2017-09-10 05:01:58 | 2017-09-10 06:47:59 | 2017-09-10 07:55:54 | 1:07:55 | 0:27:33 | 0:40:22 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} | 3 | "2017-09-10 07:41:45.009525 mon.b mon.0 172.21.2.35:6789/0 127 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log |
pass | 1616230 | | 2017-09-10 05:01:58 | 2017-09-10 06:49:50 | 2017-09-10 08:53:48 | 2:03:58 | 0:32:33 | 1:31:25 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} | 3 | |
pass | 1616232 | | 2017-09-10 05:01:59 | 2017-09-10 06:49:53 | 2017-09-10 07:33:44 | 0:43:51 | 0:34:08 | 0:09:43 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} | 3 | |
fail | 1616234 | | 2017-09-10 05:02:00 | 2017-09-10 06:53:39 | 2017-09-10 09:11:41 | 2:18:02 | 0:28:34 | 1:49:28 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} | 3 | Command failed (s3 tests against rgw) on vpm097 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle'" |
fail | 1616236 | | 2017-09-10 05:02:00 | 2017-09-10 06:53:46 | 2017-09-10 07:55:44 | 1:01:58 | 0:40:45 | 0:21:13 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} | 3 | Command failed (s3 tests against rgw) on vpm005 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle'" |
pass | 1616238 | | 2017-09-10 05:02:01 | 2017-09-10 06:56:05 | 2017-09-10 07:53:42 | 0:57:37 | 0:25:23 | 0:32:14 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} | 3 | |