User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2018-04-19 05:02:01 | 2018-04-19 05:07:29 | 2018-04-19 17:12:07 | 12:04:38 | smoke | master | ovh | b6344f3 | 1 | 26 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
fail | 2414386 | 2018-04-19 05:02:40 | 2018-04-19 05:07:29 | 2018-04-19 05:25:29 | 0:18:00 | 0:13:45 | 0:04:15 | ovh | master | ubuntu | 16.04 | smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} | 1 | Command failed on ovh042 with status 32: 'sudo umount /dev/sdb1'
fail | 2414387 | 2018-04-19 05:02:40 | 2018-04-19 05:07:34 | 2018-04-19 05:49:34 | 0:42:00 | 0:22:15 | 0:19:45 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} | 3 | "2018-04-19 05:38:26.640964 mon.a mon.0 158.69.91.7:6789/0 79 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414388 | 2018-04-19 05:02:41 | 2018-04-19 05:07:51 | 2018-04-19 10:13:58 | 5:06:07 | 0:16:27 | 4:49:40 | ovh | master | centos | 7.4 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 | Command failed on ovh033 with status 5: 'sudo stop ceph-all \|\| sudo service ceph stop \|\| sudo systemctl stop ceph.target'
fail | 2414389 | 2018-04-19 05:02:43 | 2018-04-19 05:07:52 | 2018-04-19 06:11:53 | 1:04:01 | 0:23:42 | 0:40:19 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} | 3 | "2018-04-19 06:00:20.616342 mon.a mon.0 158.69.92.171:6789/0 76 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
dead | 2414390 | 2018-04-19 05:02:46 | 2018-04-19 05:09:30 | 2018-04-19 17:12:07 | 12:02:37 | | | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} | 3 |
fail | 2414391 | 2018-04-19 05:02:49 | 2018-04-19 05:13:43 | 2018-04-19 06:19:44 | 1:06:01 | 0:18:21 | 0:47:40 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} | 3 | "2018-04-19 06:13:07.817376 mon.a mon.0 158.69.92.82:6789/0 106 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414392 | 2018-04-19 05:02:49 | 2018-04-19 05:18:13 | 2018-04-19 05:56:13 | 0:38:00 | 0:17:27 | 0:20:33 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} | 3 | "2018-04-19 05:50:20.995096 mon.a mon.0 158.69.92.1:6789/0 73 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414393 | 2018-04-19 05:02:50 | 2018-04-19 05:19:27 | 2018-04-19 07:19:28 | 2:00:01 | 0:46:47 | 1:13:14 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} | 3 | "2018-04-19 06:43:57.700423 mon.a mon.0 158.69.94.44:6789/0 83 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414394 | 2018-04-19 05:02:51 | 2018-04-19 05:19:27 | 2018-04-19 05:59:27 | 0:40:00 | 0:24:15 | 0:15:45 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} | 3 | "2018-04-19 05:46:23.857447 mon.b mon.0 158.69.91.81:6789/0 72 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414395 | 2018-04-19 05:02:51 | 2018-04-19 05:25:38 | 2018-04-19 05:55:38 | 0:30:00 | 0:17:51 | 0:12:09 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} | 3 | "2018-04-19 05:48:23.965531 mon.a mon.0 158.69.91.98:6789/0 75 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414396 | 2018-04-19 05:02:52 | 2018-04-19 05:29:27 | 2018-04-19 06:11:27 | 0:42:00 | 0:22:26 | 0:19:34 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} | 3 | "2018-04-19 06:00:46.956204 mon.a mon.0 158.69.92.165:6789/0 75 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414397 | 2018-04-19 05:02:53 | 2018-04-19 05:29:27 | 2018-04-19 06:23:28 | 0:54:01 | 0:29:58 | 0:24:03 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} | 3 | "2018-04-19 06:04:54.290027 mon.a mon.0 158.69.92.196:6789/0 75 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414398 | 2018-04-19 05:02:53 | 2018-04-19 05:41:56 | 2018-04-19 06:41:57 | 1:00:01 | 0:37:11 | 0:22:50 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} | 3 | "2018-04-19 06:17:23.225337 mon.a mon.0 158.69.93.110:6789/0 108 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414399 | 2018-04-19 05:02:54 | 2018-04-19 05:41:57 | 2018-04-19 06:43:57 | 1:02:00 | 0:34:28 | 0:27:32 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} | 3 | "2018-04-19 06:19:56.887288 mon.b mon.0 158.69.93.172:6789/0 74 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414400 | 2018-04-19 05:02:55 | 2018-04-19 05:45:29 | 2018-04-19 07:15:30 | 1:30:01 | 0:50:27 | 0:39:34 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} | 3 | "2018-04-19 06:37:34.617025 mon.a mon.0 158.69.94.224:6789/0 110 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
pass | 2414401 | 2018-04-19 05:02:55 | 2018-04-19 05:49:38 | 2018-04-19 11:17:44 | 5:28:06 | 0:37:49 | 4:50:17 | ovh | master | ubuntu | 16.04 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 |
fail | 2414402 | 2018-04-19 05:02:56 | 2018-04-19 05:49:38 | 2018-04-19 06:39:38 | 0:50:00 | 0:19:05 | 0:30:55 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} | 3 | "2018-04-19 06:32:26.740486 mon.a mon.0 158.69.94.136:6789/0 104 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414403 | 2018-04-19 05:02:56 | 2018-04-19 05:49:38 | 2018-04-19 06:59:38 | 1:10:00 | 0:29:32 | 0:40:28 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} | 3 | "2018-04-19 06:42:15.427731 mon.b mon.0 158.69.94.30:6789/0 69 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414404 | 2018-04-19 05:02:57 | 2018-04-19 05:53:50 | 2018-04-19 06:23:50 | 0:30:00 | 0:21:08 | 0:08:52 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} | 3 | "2018-04-19 06:15:34.969677 mon.a mon.0 158.69.92.99:6789/0 74 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414405 | 2018-04-19 05:02:58 | 2018-04-19 05:55:45 | 2018-04-19 06:51:45 | 0:56:00 | 0:32:20 | 0:23:40 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} | 3 | "2018-04-19 06:32:40.701095 mon.a mon.0 158.69.94.134:6789/0 107 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414406 | 2018-04-19 05:02:58 | 2018-04-19 05:55:45 | 2018-04-19 08:25:47 | 2:30:02 | 0:28:20 | 2:01:42 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} | 3 | "2018-04-19 08:08:22.800713 mon.a mon.0 158.69.65.85:6789/0 108 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414407 | 2018-04-19 05:02:59 | 2018-04-19 05:56:22 | 2018-04-19 06:46:22 | 0:50:00 | 0:17:54 | 0:32:06 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} | 3 | Command failed (workunit test rbd/import_export.sh) on ovh060 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/import_export.sh'
fail | 2414408 | 2018-04-19 05:03:00 | 2018-04-19 05:59:35 | 2018-04-19 09:59:41 | 4:00:06 | 0:23:09 | 3:36:57 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} | 3 | "2018-04-19 09:40:42.086770 mon.a mon.0 158.69.68.185:6789/0 105 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414409 | 2018-04-19 05:03:00 | 2018-04-19 06:11:37 | 2018-04-19 07:17:38 | 1:06:01 | 0:26:24 | 0:39:37 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} | 3 | "2018-04-19 07:02:44.018826 mon.b mon.0 158.69.64.125:6789/0 102 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414410 | 2018-04-19 05:03:01 | 2018-04-19 06:11:54 | 2018-04-19 10:09:59 | 3:58:05 | 3:17:53 | 0:40:12 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} | 3 | Command failed (workunit test suites/iozone.sh) on ovh086 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/iozone.sh'
fail | 2414411 | 2018-04-19 05:03:02 | 2018-04-19 06:19:16 | 2018-04-19 07:31:16 | 1:12:00 | 0:58:42 | 0:13:18 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} | 3 | Command failed on ovh016 with status 2: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'
fail | 2414412 | 2018-04-19 05:03:02 | 2018-04-19 06:19:45 | 2018-04-19 07:49:46 | 1:30:01 | 1:00:32 | 0:29:29 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} | 3 | "2018-04-19 07:00:53.746247 mon.b mon.0 158.69.95.37:6789/0 106 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log
fail | 2414413 | 2018-04-19 05:03:03 | 2018-04-19 06:21:27 | 2018-04-19 07:29:28 | 1:08:01 | 0:22:18 | 0:45:43 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} | 3 | "2018-04-19 07:19:43.316337 mon.a mon.0 158.69.64.213:6789/0 70 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log