Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 3864300 2019-04-19 02:41:56 2019-04-19 02:54:59 2019-04-19 03:14:58 0:19:59 0:06:10 0:13:49 smithi master ubuntu 16.04 rados:mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/orchestrator_cli.yaml} 2
Failure Reason:

"2019-04-19 03:11:08.720499 mon.b (mon.0) 102 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)" in cluster log
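Teuthology fails a job when its cluster-log scan finds an unexpected health warning like OSD_DOWN. If the warning is expected for a given test, suites of this era typically suppressed it via the ceph task's `log-whitelist` option; a hypothetical fragment (the exact suite yaml for this run is not shown here):

```yaml
# sketch: tell the log scanner to ignore expected OSD_DOWN warnings
tasks:
- ceph:
    log-whitelist:
    - \(OSD_DOWN\)
```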

pass 3864301 2019-04-19 02:41:57 2019-04-19 02:55:05 2019-04-19 03:25:05 0:30:00 0:19:47 0:10:13 smithi master rhel 7.5 rados:mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_latest.yaml} tasks/progress.yaml} 2
fail 3864302 2019-04-19 02:41:58 2019-04-19 02:55:41 2019-04-19 03:23:41 0:28:00 0:13:24 0:14:36 smithi master centos 7.5 rados:mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_latest.yaml} tasks/prometheus.yaml} 2
Failure Reason:

Test failure: test_file_sd_command (tasks.mgr.test_prometheus.TestPrometheus)

pass 3864303 2019-04-19 02:41:59 2019-04-19 02:56:10 2019-04-19 03:22:10 0:26:00 0:11:12 0:14:48 smithi master centos 7.5 rados:mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_latest.yaml} tasks/ssh_orchestrator.yaml} 2
fail 3864304 2019-04-19 02:41:59 2019-04-19 02:57:40 2019-04-19 06:35:43 3:38:03 3:14:31 0:23:32 smithi master rhel 7.5 rados:mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_latest.yaml} tasks/workunits.yaml} 2
Failure Reason:

Command failed (workunit test mgr/test_localpool.sh) on smithi058 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7c1ddb447f58a7c6ec8acdcd1c65284d108265de TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mgr/test_localpool.sh'
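Exit status 124 here comes from the `timeout 3h` wrapper in the command, not from the workunit itself: GNU coreutils `timeout` kills a command that exceeds its limit and exits 124, so these jobs hit the 3-hour ceiling rather than failing outright. A minimal demonstration of that convention:

```shell
# timeout(1) from GNU coreutils kills the command at the limit
# and itself exits with status 124
timeout 1 sleep 5
echo "exit status: $?"   # prints: exit status: 124
```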

pass 3864305 2019-04-19 02:42:00 2019-04-19 02:59:27 2019-04-19 03:23:26 0:23:59 0:12:38 0:11:21 smithi master centos 7.5 rados:mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_latest.yaml} tasks/crash.yaml} 2
fail 3864306 2019-04-19 02:42:01 2019-04-19 03:01:39 2019-04-19 03:25:38 0:23:59 0:09:38 0:14:21 smithi master centos 7.5 rados:mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_latest.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Command failed on smithi173 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 0'

pass 3864307 2019-04-19 02:42:02 2019-04-19 03:01:39 2019-04-19 03:29:38 0:27:59 0:16:19 0:11:40 smithi master rhel 7.5 rados:mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_latest.yaml} tasks/failover.yaml} 2
fail 3864308 2019-04-19 02:42:02 2019-04-19 03:03:55 2019-04-19 03:25:54 0:21:59 0:09:15 0:12:44 smithi master ubuntu 16.04 rados:mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/insights.yaml} 2
Failure Reason:

Test failure: test_crash_history (tasks.mgr.test_insights.TestInsights)

fail 3864309 2019-04-19 02:42:03 2019-04-19 03:07:28 2019-04-19 03:41:28 0:34:00 0:09:40 0:24:20 smithi master ubuntu 16.04 rados:mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/module_selftest.yaml} 2
Failure Reason:

Test failure: test_devicehealth (tasks.mgr.test_module_selftest.TestModuleSelftest)

pass 3864310 2019-04-19 02:42:04 2019-04-19 03:07:29 2019-04-19 03:43:28 0:35:59 0:09:46 0:26:13 smithi master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/orchestrator_cli.yaml} 2
fail 3864311 2019-04-19 02:42:05 2019-04-19 03:07:28 2019-04-19 03:45:28 0:38:00 0:09:49 0:28:11 smithi master centos 7.5 rados:mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_latest.yaml} tasks/progress.yaml} 2
Failure Reason:

"2019-04-19 03:42:53.599109 mon.a (mon.0) 105 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)" in cluster log

fail 3864312 2019-04-19 02:42:05 2019-04-19 03:07:29 2019-04-19 03:45:28 0:37:59 0:10:14 0:27:45 smithi master ubuntu 16.04 rados:mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/prometheus.yaml} 2
Failure Reason:

Test failure: test_file_sd_command (tasks.mgr.test_prometheus.TestPrometheus)

pass 3864313 2019-04-19 02:42:06 2019-04-19 03:07:47 2019-04-19 03:37:47 0:30:00 0:07:53 0:22:07 smithi master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/ssh_orchestrator.yaml} 2
fail 3864314 2019-04-19 02:42:07 2019-04-19 03:10:45 2019-04-19 06:30:48 3:20:03 3:09:12 0:10:51 smithi master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/workunits.yaml} 2
Failure Reason:

Command failed (workunit test mgr/test_localpool.sh) on smithi049 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7c1ddb447f58a7c6ec8acdcd1c65284d108265de TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mgr/test_localpool.sh'