Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 2420341 2018-04-20 14:34:23 2018-04-20 14:34:39 2018-04-20 14:50:38 0:15:59 0:06:18 0:09:41 mira master rados/rest/mgr-restful.yaml 1
Failure Reason:
"2018-04-20 14:47:25.670105 mon.a mon.0 172.21.9.120:6789/0 82 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log

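Note: the MDS_ALL_DOWN (and, below, MDS_UP_LESS_THAN_MAX) entries are health-check messages that fail a job because the ceph task scans the cluster log for error lines that are not whitelisted. Runs that expect these health states normally silence them with a suite-level override; a minimal sketch, assuming the log-whitelist key used by the ceph task (names and placement are illustrative, not taken from this run's yaml):

    # hypothetical suite override; patterns match the parenthesized health codes
    overrides:
      ceph:
        log-whitelist:
          - \(MDS_ALL_DOWN\)
          - \(MDS_UP_LESS_THAN_MAX\)
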
fail 2420342 2018-04-20 14:34:24 2018-04-20 14:34:40 2018-04-20 14:52:39 0:17:59 0:09:14 0:08:45 mira master rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml} 1
Failure Reason:
"2018-04-20 14:47:29.051241 mon.a mon.0 172.21.7.130:6789/0 67 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log

fail 2420343 2018-04-20 14:34:25 2018-04-20 14:34:41 2018-04-20 14:50:39 0:15:58 0:06:08 0:09:50 mira master rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml} 1
Failure Reason:
"2018-04-20 14:46:36.753656 mon.a mon.0 172.21.6.118:6789/0 82 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log

pass 2420344 2018-04-20 14:34:26 2018-04-20 14:34:41 2018-04-20 15:06:40 0:31:59 0:22:04 0:09:55 mira master rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml} 1
pass 2420345 2018-04-20 14:34:26 2018-04-20 14:34:40 2018-04-20 15:08:39 0:33:59 0:22:27 0:11:32 mira master rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml thrashers/one.yaml workloads/rados_api_tests.yaml} 2
fail 2420346 2018-04-20 14:34:27 2018-04-20 14:34:40 2018-04-20 14:52:39 0:17:59 0:05:51 0:12:08 mira master rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml tasks/module_selftest.yaml} 2
Failure Reason:
Test failure: test_influx (tasks.mgr.test_module_selftest.TestModuleSelftest)

fail 2420347 2018-04-20 14:34:28 2018-04-20 14:34:42 2018-04-20 15:08:41 0:33:59 0:06:56 0:27:03 mira master rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml tasks/prometheus.yaml} 2
Failure Reason:
"2018-04-20 15:06:02.555438 mon.b mon.0 172.21.6.106:6789/0 92 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log

fail 2420348 2018-04-20 14:34:29 2018-04-20 14:34:41 2018-04-20 15:18:40 0:43:59 0:35:24 0:08:35 mira master rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
Failure Reason:
Command failed (workunit test cephtool/test.sh) on mira041 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-kefu-testing-2018-04-20-1141 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 2420349 2018-04-20 14:34:30 2018-04-20 14:34:42 2018-04-20 15:12:41 0:37:59 0:05:43 0:32:16 mira master rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore.yaml tasks/workunits.yaml} 2
Failure Reason:
"2018-04-20 15:10:15.739273 mon.a mon.0 172.21.6.134:6789/0 69 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log

fail 2420350 2018-04-20 14:34:30 2018-04-20 14:34:41 2018-04-20 15:04:40 0:29:59 0:02:52 0:27:07 mira master rhel 7.5 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml supported/rhel_latest.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:
{'mira012.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['Failed to set time zone: Connection timed out'], 'cmd': ['timedatectl', 'set-timezone', 'Etc/UTC'], 'end': '2018-04-20 15:02:45.706271', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'timedatectl set-timezone Etc/UTC', 'removes': None, 'warn': True, '_uses_shell': False, 'stdin': None}}, 'start': '2018-04-20 15:02:20.687753', 'delta': '0:00:25.018518', 'stderr': 'Failed to set time zone: Connection timed out', 'rc': 1, 'msg': 'non-zero return code', 'stdout_lines': []}, 'mira065.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ["Failed to set time zone: Failed to activate service 'org.freedesktop.timedate1': timed out"], 'cmd': ['timedatectl', 'set-timezone', 'Etc/UTC'], 'end': '2018-04-20 15:02:55.320320', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'timedatectl set-timezone Etc/UTC', 'removes': None, 'warn': True, '_uses_shell': False, 'stdin': None}}, 'start': '2018-04-20 15:02:30.307655', 'delta': '0:00:25.012665', 'stderr': "Failed to set time zone: Failed to activate service 'org.freedesktop.timedate1': timed out", 'rc': 1, 'msg': 'non-zero return code', 'stdout_lines': []}}

fail 2420351 2018-04-20 14:34:31 2018-04-20 14:34:41 2018-04-20 14:54:40 0:19:59 0:07:59 0:12:00 mira master rhel 7.5 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore.yaml rados.yaml supported/rhel_latest.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:
{'mira058.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ["Failed to set time zone: Failed to activate service 'org.freedesktop.timedate1': timed out"], 'cmd': ['timedatectl', 'set-timezone', 'Etc/UTC'], 'end': '2018-04-20 14:48:43.663441', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'timedatectl set-timezone Etc/UTC', 'removes': None, 'warn': True, '_uses_shell': False, 'stdin': None}}, 'start': '2018-04-20 14:48:18.643589', 'delta': '0:00:25.019852', 'stderr': "Failed to set time zone: Failed to activate service 'org.freedesktop.timedate1': timed out", 'rc': 1, 'msg': 'non-zero return code', 'stdout_lines': []}}

fail 2420352 2018-04-20 14:34:32 2018-04-20 14:34:41 2018-04-20 15:02:40 0:27:59 0:18:08 0:09:51 mira master rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml tasks/dashboard.yaml} 2
Failure Reason:
"2018-04-20 14:48:38.883858 mon.b mon.0 172.21.4.134:6789/0 67 : cluster [WRN] Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)" in cluster log

fail 2420353 2018-04-20 14:34:33 2018-04-20 14:34:41 2018-04-20 15:18:41 0:44:00 0:35:15 0:08:45 mira master rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml} 1
Failure Reason:
Command failed (workunit test cephtool/test.sh) on mira034 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-kefu-testing-2018-04-20-1141 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 2420354 2018-04-20 14:34:34 2018-04-20 14:34:40 2018-04-20 14:54:39 0:19:59 0:08:47 0:11:12 mira master rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml tasks/failover.yaml} 2
Failure Reason:
"2018-04-20 14:49:31.223919 mon.a mon.0 172.21.7.120:6789/0 90 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log

fail 2420355 2018-04-20 14:34:34 2018-04-20 14:34:41 2018-04-20 15:18:40 0:43:59 0:35:07 0:08:52 mira master rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
Failure Reason:
Command failed (workunit test cephtool/test.sh) on mira027 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-kefu-testing-2018-04-20-1141 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 2420356 2018-04-20 14:34:35 2018-04-20 14:34:42 2018-04-20 15:08:41 0:33:59 0:05:39 0:28:20 mira master rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml tasks/module_selftest.yaml} 2
Failure Reason:
Test failure: test_influx (tasks.mgr.test_module_selftest.TestModuleSelftest)

fail 2420357 2018-04-20 14:34:36 2018-04-20 14:34:42 2018-04-20 15:10:41 0:35:59 0:06:41 0:29:18 mira master rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore.yaml tasks/prometheus.yaml} 2
Failure Reason:
"2018-04-20 15:08:15.061554 mon.a mon.0 172.21.7.120:6789/0 65 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" in cluster log

fail 2420358 2018-04-20 14:34:37 2018-04-20 14:34:41 2018-04-20 15:20:40 0:45:59 0:35:27 0:10:32 mira master rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} 1
Failure Reason:
Command failed (workunit test cephtool/test.sh) on mira053 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-kefu-testing-2018-04-20-1141 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'