Columns: ID | Status | Ceph Branch | Suite Branch | Teuthology Branch | Machine | OS | Nodes | Description | Failure Reason

Ceph Branch: wip-sage2-testing-2019-02-22-0711
Suite Branch: wip-sage2-testing-2019-02-22-0711
Teuthology Branch: master
Machine: smithi
OS: rhel 7.5
Description: rados/rest/{mgr-restful.yaml supported-random-distro$/{rhel_latest.yaml}}

Ceph Branch: wip-sage2-testing-2019-02-22-0711
Suite Branch: wip-sage2-testing-2019-02-22-0711
Teuthology Branch: master
Machine: smithi
OS: centos
Description: rados/singleton-flat/valgrind-leaks.yaml

Ceph Branch: wip-sage2-testing-2019-02-22-0711
Suite Branch: wip-sage2-testing-2019-02-22-0711
Teuthology Branch: master
Machine: smithi
OS: rhel 7.5
Description: rados/singleton-nomsgr/{all/ceph-kvstore-tool.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}}
Failure Reason: "2019-02-22 21:45:12.688487 mon.a (mon.0) 90 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

Ceph Branch: wip-sage2-testing-2019-02-22-0711
Suite Branch: wip-sage2-testing-2019-02-22-0711
Teuthology Branch: master
Machine: smithi
OS: centos 7.5
Description: rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_latest.yaml} tasks/dashboard.yaml}
Failure Reason: Test failure: setUpClass (tasks.mgr.dashboard.test_cephfs.CephfsTest)

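A failure in setUpClass means the test fixture could not be set up at all, not that an individual dashboard assertion failed. These tasks.mgr.* suites can usually be rerun outside teuthology via vstart_runner; a minimal sketch, assuming a compiled ceph tree, run from its build directory:

    # Start a fresh vstart cluster, then drive the same python test
    # module against it with the qa vstart runner.
    ../src/vstart.sh -n -d
    python ../qa/tasks/vstart_runner.py tasks.mgr.dashboard.test_cephfs
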
Ceph Branch: wip-sage2-testing-2019-02-22-0711
Suite Branch: wip-sage2-testing-2019-02-22-0711
Teuthology Branch: master
Machine: smithi
OS: ubuntu 16.04
Description: rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/mon-seesaw.yaml}
Failure Reason: Command failed (workunit test mon/mon-seesaw.sh) on smithi112 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ef5b49ece4f22d5355ba89fcf165071c56ca7c9f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-seesaw.sh'

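Standalone workunits like mon-seesaw.sh can be reproduced without teuthology through the wrapper the qa tree ships for exactly this purpose; the same applies to the mon-osdmap-prune.sh failure further down. A minimal sketch, assuming a compiled ceph tree, run from its build directory:

    # Re-run the same standalone script the workunit task invoked on the
    # smithi node; the wrapper locates it by name under qa/standalone.
    ../qa/run-standalone.sh mon-seesaw.sh
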
Ceph Branch: wip-sage2-testing-2019-02-22-0711
Suite Branch: wip-sage2-testing-2019-02-22-0711
Teuthology Branch: master
Machine: smithi
OS: centos 7.5
Description: rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml msgr/simple.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml}

Ceph Branch: wip-sage2-testing-2019-02-22-0711
Suite Branch: wip-sage2-testing-2019-02-22-0711
Teuthology Branch: master
Machine: smithi
OS: ubuntu 16.04
Description: rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/insights.yaml}
Failure Reason: Test failure: test_health_history (tasks.mgr.test_insights.TestInsights)

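test_health_history checks that the insights module records health-check transitions over time. A manual smoke test against any running cluster, assuming the module's `ceph insights` report command:

    # Enable the module, trip a health warning, and confirm the insights
    # report picks it up in its health history.
    ceph mgr module enable insights
    ceph osd set noout      # raises an OSDMAP_FLAGS health warning
    ceph insights
    ceph osd unset noout
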
Ceph Branch: wip-sage2-testing-2019-02-22-0711
Suite Branch: wip-sage2-testing-2019-02-22-0711
Teuthology Branch: master
Machine: smithi
OS: ubuntu 16.04
Description: rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/module_selftest.yaml}
Failure Reason: Test failure: test_diskprediction_local (tasks.mgr.test_module_selftest.TestModuleSelftest)

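test_diskprediction_local exercises the module through the mgr selftest machinery, so the first thing to rule out is the module failing to load on this distro image (it pulls in third-party python dependencies). A hedged reproduction sketch against a running cluster, assuming the selftest module's `mgr self-test module` command that this task uses:

    # Enable the module under test plus the selftest harness, then ask
    # the harness to exercise diskprediction_local directly.
    ceph mgr module enable diskprediction_local
    ceph mgr module enable selftest
    ceph mgr self-test module diskprediction_local
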
Ceph Branch: wip-sage2-testing-2019-02-22-0711
Suite Branch: wip-sage2-testing-2019-02-22-0711
Teuthology Branch: master
Machine: smithi
OS: centos 7.5
Description: rados/standalone/{supported-random-distro$/{centos_latest.yaml} workloads/mon.yaml}
Failure Reason: Command failed (workunit test mon/mon-osdmap-prune.sh) on smithi131 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ef5b49ece4f22d5355ba89fcf165071c56ca7c9f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-osdmap-prune.sh'

Ceph Branch: wip-sage2-testing-2019-02-22-0711
Suite Branch: wip-sage2-testing-2019-02-22-0711
Teuthology Branch: master
Machine: smithi
OS: centos
Description: rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml}

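The validater/valgrind.yaml fragment in jobs like this one (and in the singleton-flat valgrind-leaks job above) runs the daemons under memcheck and turns leak reports into job failures. The override amounts to wrapping each daemon roughly as follows; the flags are illustrative, not the exact set from the yaml:

    # Approximate manual equivalent of the valgrind validater: run an OSD
    # in the foreground under memcheck, failing on any reported error.
    valgrind --tool=memcheck --leak-check=full --error-exitcode=1 \
        ceph-osd -f -i 0 --cluster ceph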