User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2017-09-07 05:00:19 | 2017-09-07 06:20:07 | 2017-09-07 12:24:49 | 6:04:42 | smoke | master | vps | 28c7813 | 12 | 15 | 1 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 1603856 | | 2017-09-07 05:04:21 | 2017-09-07 05:11:21 | 2017-09-07 05:29:21 | 0:18:00 | 0:13:51 | 0:04:09 | vps | master | ubuntu | 16.04 | smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} | 1 |
Failure Reason: 'check health' reached maximum tries (6) after waiting for 60 seconds
pass | 1603857 | | 2017-09-07 05:04:21 | 2017-09-07 05:13:01 | 2017-09-07 05:46:58 | 0:33:57 | 0:23:36 | 0:10:21 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} | 3 |
dead | 1603858 | | 2017-09-07 05:04:22 | 2017-09-07 05:16:43 | 2017-09-07 12:24:49 | 7:08:06 | 0:33:30 | 6:34:36 | vps | master | centos | 7.3 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 |
Failure Reason: Could not reconnect to ubuntu@vpm195.front.sepia.ceph.com
pass | 1603859 | | 2017-09-07 05:04:23 | 2017-09-07 05:16:48 | 2017-09-07 05:56:46 | 0:39:58 | 0:20:11 | 0:19:47 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} | 3 |
pass | 1603860 | | 2017-09-07 05:04:23 | 2017-09-07 05:16:48 | 2017-09-07 07:00:47 | 1:43:59 | 1:01:56 | 0:42:03 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} | 3 |
pass | 1603861 | | 2017-09-07 05:04:24 | 2017-09-07 05:16:47 | 2017-09-07 09:16:51 | 4:00:04 | 2:01:37 | 1:58:27 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} | 3 |
pass | 1603862 | | 2017-09-07 05:04:25 | 2017-09-07 05:22:47 | 2017-09-07 07:10:47 | 1:48:00 | 0:18:16 | 1:29:44 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} | 3 |
pass | 1603863 | | 2017-09-07 05:04:25 | 2017-09-07 05:24:57 | 2017-09-07 06:30:57 | 1:06:00 | 0:48:34 | 0:17:26 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} | 3 |
pass | 1603864 | | 2017-09-07 05:04:26 | 2017-09-07 05:29:30 | 2017-09-07 06:55:24 | 1:25:54 | 0:19:22 | 1:06:32 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} | 3 |
pass | 1603865 | | 2017-09-07 05:04:27 | 2017-09-07 05:32:56 | 2017-09-07 06:54:57 | 1:22:01 | 0:52:57 | 0:29:04 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} | 3 |
fail | 1603866 | | 2017-09-07 05:04:28 | 2017-09-07 05:37:04 | 2017-09-07 06:54:46 | 1:17:42 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm013.front.sepia.ceph.com
fail | 1603867 | | 2017-09-07 05:04:28 | 2017-09-07 05:47:06 | 2017-09-07 08:19:08 | 2:32:02 | 0:36:34 | 1:55:28 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} | 3 |
Failure Reason: "2017-09-07 07:57:16.059695 mon.a mon.0 172.21.2.89:6789/0 156 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
fail | 1603868 | | 2017-09-07 05:04:29 | 2017-09-07 05:57:04 | 2017-09-07 07:03:04 | 1:06:00 | 0:35:39 | 0:30:21 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} | 3 |
Failure Reason: "2017-09-07 06:44:54.299729 mon.b mon.0 172.21.2.123:6789/0 146 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
fail | 1603869 | | 2017-09-07 05:04:30 | 2017-09-07 05:57:04 | 2017-09-07 06:41:03 | 0:43:59 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm083.front.sepia.ceph.com
fail | 1603870 | | 2017-09-07 05:04:30 | 2017-09-07 06:04:44 | 2017-09-07 08:34:46 | 2:30:02 | 1:24:27 | 1:05:35 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} | 3 |
Failure Reason: "2017-09-07 08:06:08.656133 mon.b mon.0 172.21.2.125:6789/0 603 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log
fail | 1603871 | | 2017-09-07 05:04:31 | 2017-09-07 06:20:07 | 2017-09-07 12:06:13 | 5:46:06 | 0:29:09 | 5:16:57 | vps | master | ubuntu | 16.04 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 |
Failure Reason: Command failed (workunit test rados/load-gen-mix.sh) on vpm041 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'
fail | 1603872 | | 2017-09-07 05:04:32 | 2017-09-07 06:23:02 | 2017-09-07 07:03:00 | 0:39:58 | 0:19:06 | 0:20:52 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} | 3 |
Failure Reason: Command failed (workunit test cls/test_cls_sdk.sh) on vpm069 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'
fail | 1603873 | | 2017-09-07 05:04:32 | 2017-09-07 06:29:19 | 2017-09-07 07:49:20 | 1:20:01 | 0:31:55 | 0:48:06 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} | 3 |
Failure Reason: "2017-09-07 07:30:07.818898 mon.b mon.0 172.21.2.65:6789/0 106 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
fail | 1603874 | | 2017-09-07 05:04:33 | 2017-09-07 06:31:03 | 2017-09-07 07:31:04 | 1:00:01 | 0:23:26 | 0:36:35 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} | 3 |
Failure Reason: "2017-09-07 07:20:51.670971 mon.a mon.0 172.21.2.129:6789/0 166 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)" in cluster log
pass | 1603875 | | 2017-09-07 05:04:34 | 2017-09-07 06:39:03 | 2017-09-07 08:33:03 | 1:54:00 | 0:32:40 | 1:21:20 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} | 3 |
fail | 1603876 | | 2017-09-07 05:04:35 | 2017-09-07 06:39:03 | 2017-09-07 07:27:03 | 0:48:00 | 0:25:33 | 0:22:27 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} | 3 |
Failure Reason: "2017-09-07 07:15:47.892662 mon.a mon.0 172.21.2.47:6789/0 202 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log
pass | 1603877 | | 2017-09-07 05:04:35 | 2017-09-07 06:40:48 | 2017-09-07 07:50:48 | 1:10:00 | 0:19:55 | 0:50:05 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} | 3 |
fail | 1603878 | | 2017-09-07 05:04:36 | 2017-09-07 06:41:56 | 2017-09-07 07:33:57 | 0:52:01 | 0:24:28 | 0:27:33 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} | 3 |
Failure Reason: "2017-09-07 07:22:27.883408 mon.a mon.0 172.21.2.89:6789/0 127 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
fail | 1603879 | | 2017-09-07 05:04:37 | 2017-09-07 06:54:08 | 2017-09-07 07:14:06 | 0:19:58 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm169.front.sepia.ceph.com
pass | 1603880 | | 2017-09-07 05:04:37 | 2017-09-07 06:54:51 | 2017-09-07 08:12:51 | 1:18:00 | 0:33:30 | 0:44:30 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} | 3 |
fail | 1603881 | | 2017-09-07 05:04:38 | 2017-09-07 06:54:53 | 2017-09-07 07:44:53 | 0:50:00 | 0:25:04 | 0:24:56 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} | 3 |
Failure Reason: Command failed (s3 tests against rgw) on vpm013 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle'"
fail | 1603882 | | 2017-09-07 05:04:39 | 2017-09-07 06:55:06 | 2017-09-07 10:05:06 | 3:10:00 | 0:27:03 | 2:42:57 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} | 3 |
Failure Reason: Command failed (s3 tests against rgw) on vpm133 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle'"
pass | 1603883 | | 2017-09-07 05:04:40 | 2017-09-07 06:55:43 | 2017-09-07 07:35:38 | 0:39:55 | 0:22:56 | 0:16:59 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} | 3 |