User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead
---|---|---|---|---|---|---|---|---|---|---|---
teuthology | 2017-07-28 05:00:25 | 2017-07-28 05:02:46 | 2017-07-28 17:14:14 | 12:11:28 | smoke | master | vps | fb03938 | 11 | 16 | 1
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
fail | 1455767 | | 2017-07-28 05:02:03 | 2017-07-28 05:02:46 | 2017-07-28 06:46:47 | 1:44:01 | 1:40:38 | 0:03:23 | vps | master | ubuntu | 16.04 | smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} | 1 | 'check health' reached maximum tries (6) after waiting for 60 seconds
pass | 1455770 | | 2017-07-28 05:02:04 | 2017-07-28 05:04:00 | 2017-07-28 08:08:02 | 3:04:02 | 2:00:53 | 1:03:09 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} | 3 |
dead | 1455773 | | 2017-07-28 05:02:04 | 2017-07-28 05:11:59 | 2017-07-28 17:14:14 | 12:02:15 | | | vps | master | centos | 7.3 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | — |
pass | 1455776 | | 2017-07-28 05:02:05 | 2017-07-28 05:15:36 | 2017-07-28 11:29:43 | 6:14:07 | 2:11:48 | 4:02:19 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} | 3 |
pass | 1455779 | | 2017-07-28 05:02:06 | 2017-07-28 05:31:20 | 2017-07-28 09:37:25 | 4:06:05 | 2:52:16 | 1:13:49 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} | 3 |
pass | 1455782 | | 2017-07-28 05:02:06 | 2017-07-28 05:35:57 | 2017-07-28 08:50:01 | 3:14:04 | 2:01:22 | 1:12:42 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} | 3 |
pass | 1455785 | | 2017-07-28 05:02:07 | 2017-07-28 05:51:58 | 2017-07-28 10:36:03 | 4:44:05 | 1:47:35 | 2:56:30 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} | 3 |
pass | 1455788 | | 2017-07-28 05:02:08 | 2017-07-28 05:55:48 | 2017-07-28 08:51:52 | 2:56:04 | 2:27:42 | 0:28:22 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} | 3 |
fail | 1455791 | | 2017-07-28 05:02:08 | 2017-07-28 06:02:20 | 2017-07-28 07:10:48 | 1:08:28 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} | 3 | Could not reconnect to ubuntu@vpm023.front.sepia.ceph.com
fail | 1455794 | | 2017-07-28 05:02:09 | 2017-07-28 06:03:36 | 2017-07-28 07:01:36 | 0:58:00 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} | 3 | Could not reconnect to ubuntu@vpm181.front.sepia.ceph.com
pass | 1455797 | | 2017-07-28 05:02:10 | 2017-07-28 06:19:36 | 2017-07-28 14:37:46 | 8:18:10 | 2:06:35 | 6:11:35 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} | 3 |
fail | 1455800 | | 2017-07-28 05:02:10 | 2017-07-28 06:31:26 | 2017-07-28 08:17:28 | 1:46:02 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} | 3 | Could not reconnect to ubuntu@vpm139.front.sepia.ceph.com
fail | 1455803 | | 2017-07-28 05:02:11 | 2017-07-28 06:35:58 | 2017-07-28 08:44:00 | 2:08:02 | 1:37:07 | 0:30:55 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} | 3 | {'vpm133.front.sepia.ceph.com': {'msg': 'One or more items failed', 'failed': True, 'changed': False}}
fail | 1455808 | | 2017-07-28 05:02:12 | 2017-07-28 06:41:46 | 2017-07-28 06:53:45 | 0:11:59 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} | 3 | Could not reconnect to ubuntu@vpm127.front.sepia.ceph.com
fail | 1455811 | | 2017-07-28 05:02:13 | 2017-07-28 06:44:21 | 2017-07-28 12:18:28 | 5:34:07 | 2:23:52 | 3:10:15 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} | 3 | ceph-objectstore-tool: exp list-pgs failure with status 1
fail | 1455812 | | 2017-07-28 05:02:15 | 2017-07-28 06:46:02 | 2017-07-28 09:44:05 | 2:58:03 | 2:37:42 | 0:20:21 | vps | master | ubuntu | 16.04 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 | Command failed (workunit test rados/load-gen-mix.sh) on vpm101 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'
fail | 1455817 | | 2017-07-28 05:02:17 | 2017-07-28 06:46:57 | 2017-07-28 09:14:59 | 2:28:02 | 1:58:31 | 0:29:31 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} | 3 | Command failed (workunit test cls/test_cls_sdk.sh) on vpm083 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'
fail | 1455820 | | 2017-07-28 05:02:18 | 2017-07-28 06:52:00 | 2017-07-28 07:03:59 | 0:11:59 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} | 3 | Could not reconnect to ubuntu@vpm095.front.sepia.ceph.com
fail | 1455823 | | 2017-07-28 05:02:19 | 2017-07-28 06:53:25 | 2017-07-28 07:03:24 | 0:09:59 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} | 3 | Could not reconnect to ubuntu@vpm039.front.sepia.ceph.com
pass | 1455826 | | 2017-07-28 05:02:19 | 2017-07-28 06:53:46 | 2017-07-28 09:53:49 | 3:00:03 | 2:13:29 | 0:46:34 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} | 3 |
fail | 1455829 | | 2017-07-28 05:02:20 | 2017-07-28 06:57:52 | 2017-07-28 10:37:56 | 3:40:04 | 2:20:58 | 1:19:06 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} | 3 | "2017-07-28 10:21:31.223029 mon.b mon.0 172.21.2.67:6789/0 176 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log
pass | 1455832 | | 2017-07-28 05:02:21 | 2017-07-28 07:01:38 | 2017-07-28 09:13:40 | 2:12:02 | 2:04:50 | 0:07:12 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} | 3 |
fail | 1455835 | | 2017-07-28 05:02:21 | 2017-07-28 07:03:34 | 2017-07-28 09:49:35 | 2:46:01 | 2:05:25 | 0:40:36 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} | 3 | "2017-07-28 09:38:27.992816 mon.a mon.0 172.21.2.3:6789/0 158 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
pass | 1455838 | | 2017-07-28 05:02:22 | 2017-07-28 07:03:59 | 2017-07-28 10:52:03 | 3:48:04 | 2:10:42 | 1:37:22 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} | 3 |
pass | 1455841 | | 2017-07-28 05:02:23 | 2017-07-28 07:04:00 | 2017-07-28 11:40:06 | 4:36:06 | 2:44:15 | 1:51:51 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} | 3 |
fail | 1455844 | | 2017-07-28 05:02:23 | 2017-07-28 07:10:50 | 2017-07-28 13:10:57 | 6:00:07 | 1:53:28 | 4:06:39 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} | 3 | 'default_idle_timeout'
fail | 1455847 | | 2017-07-28 05:02:24 | 2017-07-28 07:36:00 | 2017-07-28 11:12:32 | 3:36:32 | 1:52:37 | 1:43:55 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} | 3 | 'default_idle_timeout'
fail | 1455850 | | 2017-07-28 05:02:25 | 2017-07-28 07:39:37 | 2017-07-28 11:59:42 | 4:20:05 | 2:01:41 | 2:18:24 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} | 3 | 'default_idle_timeout'