User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2017-07-14 05:00:18 | 2017-07-14 05:03:23 | 2017-07-14 18:42:12 | 13:38:49 | smoke | master | vps | 7e287a4 | 9 | 17 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 1399461 | 2017-07-14 05:01:45 | 2017-07-14 05:03:23 | 2017-07-14 06:39:22 | 1:35:59 | 1:33:00 | 0:02:59 | vps | master | ubuntu | 16.04 | smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} | 1 | 'check health' reached maximum tries (6) after waiting for 60 seconds |
fail | 1399464 | 2017-07-14 05:01:45 | 2017-07-14 05:03:20 | 2017-07-14 05:43:19 | 0:39:59 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} | 3 | Could not reconnect to ubuntu@vpm139.front.sepia.ceph.com |
fail | 1399467 | 2017-07-14 05:01:46 | 2017-07-14 05:05:08 | 2017-07-14 14:53:19 | 9:48:11 | 1:31:05 | 8:17:06 | vps | master | centos | 7.3 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 | Command failed (workunit test rados/load-gen-mix.sh) on vpm167 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh' |
pass | 1399470 | 2017-07-14 05:01:47 | 2017-07-14 05:11:15 | 2017-07-14 07:41:17 | 2:30:02 | 2:09:22 | 0:20:40 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} | 3 | |
fail | 1399473 | 2017-07-14 05:01:47 | 2017-07-14 05:11:16 | 2017-07-14 05:59:16 | 0:48:00 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} | 3 | Could not reconnect to ubuntu@vpm129.front.sepia.ceph.com |
pass | 1399476 | 2017-07-14 05:01:48 | 2017-07-14 05:18:14 | 2017-07-14 08:16:13 | 2:57:59 | 2:17:02 | 0:40:57 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} | 3 | |
pass | 1399479 | 2017-07-14 05:01:48 | 2017-07-14 05:25:21 | 2017-07-14 08:33:25 | 3:08:04 | 1:49:39 | 1:18:25 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} | 3 | |
dead | 1399482 | 2017-07-14 05:01:49 | 2017-07-14 05:31:46 | 2017-07-14 08:07:48 | 2:36:02 | 2:18:37 | 0:17:25 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} | 3 | SSH connection to vpm135 was lost: 'sudo rm /etc/logrotate.d/ceph-test.conf' |
pass | 1399485 | 2017-07-14 05:01:50 | 2017-07-14 05:43:24 | 2017-07-14 08:03:24 | 2:20:00 | 1:57:36 | 0:22:24 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
pass | 1399488 | 2017-07-14 05:01:50 | 2017-07-14 05:46:56 | 2017-07-14 09:55:00 | 4:08:04 | 2:22:53 | 1:45:11 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
pass | 1399491 | 2017-07-14 05:01:51 | 2017-07-14 05:48:44 | 2017-07-14 08:36:46 | 2:48:02 | 1:52:29 | 0:55:33 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} | 3 | |
fail | 1399494 | 2017-07-14 05:01:52 | 2017-07-14 05:49:11 | 2017-07-14 08:31:14 | 2:42:03 | 2:10:59 | 0:31:04 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} | 3 | "2017-07-14 08:12:52.278416 mon.a mon.0 172.21.2.23:6789/0 4 : cluster [WRN] overall HEALTH_WARN 1 cache pools are missing hit_sets; 1/3 mons down, quorum b,c" in cluster log |
fail | 1399497 | 2017-07-14 05:01:52 | 2017-07-14 05:53:45 | 2017-07-14 06:15:41 | 0:21:56 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} | 3 | Could not reconnect to ubuntu@vpm023.front.sepia.ceph.com |
fail | 1399500 | 2017-07-14 05:01:53 | 2017-07-14 05:59:36 | 2017-07-14 08:29:38 | 2:30:02 | 2:13:24 | 0:16:38 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} | 3 | "2017-07-14 08:06:43.073941 mon.b mon.0 172.21.2.85:6789/0 1670 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log |
fail | 1399503 | 2017-07-14 05:01:54 | 2017-07-14 05:59:36 | 2017-07-14 09:25:39 | 3:26:03 | 2:38:17 | 0:47:46 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} | 3 | ceph-objectstore-tool: exp list-pgs failure with status 1 |
fail | 1399506 | 2017-07-14 05:01:54 | 2017-07-14 06:08:08 | 2017-07-14 13:44:16 | 7:36:08 | 2:29:38 | 5:06:30 | vps | master | ubuntu | 16.04 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 | Command failed (workunit test rados/load-gen-mix.sh) on vpm173 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh' |
fail | 1399509 | 2017-07-14 05:01:55 | 2017-07-14 06:11:12 | 2017-07-14 09:39:14 | 3:28:02 | 1:56:28 | 1:31:34 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} | 3 | Command failed (workunit test cls/test_cls_sdk.sh) on vpm117 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh' |
fail | 1399512 | 2017-07-14 05:01:56 | 2017-07-14 06:15:57 | 2017-07-14 11:10:04 | 4:54:07 | 2:05:55 | 2:48:12 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} | 3 | "2017-07-14 10:51:34.594243 mon.a mon.0 172.21.2.29:6789/0 168 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log |
fail | 1399515 | 2017-07-14 05:01:56 | 2017-07-14 06:28:11 | 2017-07-14 10:04:15 | 3:36:04 | 1:48:30 | 1:47:34 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} | 3 | "2017-07-14 09:55:33.998108 mon.a mon.0 172.21.2.81:6789/0 225 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)" in cluster log |
pass | 1399518 | 2017-07-14 05:01:57 | 2017-07-14 06:38:49 | 2017-07-14 10:34:53 | 3:56:04 | 1:58:50 | 1:57:14 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} | 3 | |
fail | 1399521 | 2017-07-14 05:01:58 | 2017-07-14 06:38:49 | 2017-07-14 10:32:52 | 3:54:03 | 2:09:16 | 1:44:47 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} | 3 | "2017-07-14 10:13:27.242274 mon.a mon.0 172.21.2.139:6789/0 437 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log |
fail | 1399524 | 2017-07-14 05:01:59 | 2017-07-14 06:39:24 | 2017-07-14 07:43:24 | 1:04:00 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} | 3 | Could not reconnect to ubuntu@vpm029.front.sepia.ceph.com |
dead | 1399527 | 2017-07-14 05:01:59 | 2017-07-14 06:39:47 | 2017-07-14 18:42:12 | 12:02:25 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} | 3 | |
pass | 1399530 | 2017-07-14 05:02:00 | 2017-07-14 06:52:42 | 2017-07-14 10:08:45 | 3:16:03 | 1:54:49 | 1:21:14 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} | 3 | |
pass | 1399533 | 2017-07-14 05:02:00 | 2017-07-14 07:10:35 | 2017-07-14 10:36:36 | 3:26:01 | 2:35:02 | 0:50:59 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} | 3 | |
fail | 1399536 | 2017-07-14 05:02:01 | 2017-07-14 07:12:28 | 2017-07-14 10:26:31 | 3:14:03 | 1:48:02 | 1:26:01 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} | 3 | 'default_idle_timeout' |
fail | 1399539 | 2017-07-14 05:02:02 | 2017-07-14 07:17:45 | 2017-07-14 11:05:48 | 3:48:03 | 1:05:19 | 2:42:44 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} | 3 | 'default_idle_timeout' |
fail | 1399542 | 2017-07-14 05:02:02 | 2017-07-14 07:26:27 | 2017-07-14 11:22:31 | 3:56:04 | 1:49:11 | 2:06:53 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} | 3 | 'default_idle_timeout' |