User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
sage | 2019-02-22 22:01:24 | 2019-02-22 22:02:22 | 2019-02-23 10:20:15 | 12:17:53 | rados | wip-sage2-testing-2019-02-22-0711 | smithi | ef5b49e | 8 | 12 | 4 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 3627200 | | 2019-02-22 22:01:33 | 2019-02-22 22:02:22 | 2019-02-22 23:20:22 | 1:18:00 | 1:10:27 | 0:07:33 | smithi | master | rhel | 7.5 | rados/rest/{mgr-restful.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 |
pass | 3627201 | | 2019-02-22 22:01:33 | 2019-02-22 22:04:19 | 2019-02-22 22:28:19 | 0:24:00 | 0:17:44 | 0:06:16 | smithi | master | rhel | 7.5 | rados/rest/{mgr-restful.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 |
pass | 3627202 | | 2019-02-22 22:01:34 | 2019-02-22 22:05:43 | 2019-02-22 22:29:42 | 0:23:59 | 0:15:05 | 0:08:54 | smithi | master | centos | | rados/singleton-flat/valgrind-leaks.yaml | 1 |
pass | 3627203 | | 2019-02-22 22:01:34 | 2019-02-22 22:08:42 | 2019-02-22 22:32:41 | 0:23:59 | 0:15:45 | 0:08:14 | smithi | master | centos | | rados/singleton-flat/valgrind-leaks.yaml | 1 |
fail | 3627204 | | 2019-02-22 22:01:35 | 2019-02-22 22:10:10 | 2019-02-23 00:00:11 | 1:50:01 | 1:44:35 | 0:05:26 | smithi | master | rhel | 7.5 | rados/singleton-nomsgr/{all/ceph-kvstore-tool.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 |
Failure Reason: "2019-02-22 23:06:32.820929 mon.a (mon.0) 96 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
dead | 3627205 | | 2019-02-22 22:01:35 | 2019-02-22 22:10:10 | 2019-02-22 22:26:10 | 0:16:00 | | | smithi | master | rhel | 7.5 | rados/singleton-nomsgr/{all/ceph-kvstore-tool.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | — |
Failure Reason: reached maximum tries (60) after waiting for 900 seconds
fail | 3627206 | | 2019-02-22 22:01:35 | 2019-02-22 22:10:15 | 2019-02-22 23:14:16 | 1:04:01 | 0:52:34 | 0:11:27 | smithi | master | centos | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_latest.yaml} tasks/dashboard.yaml} | 2 |
Failure Reason: Test failure: setUpClass (tasks.mgr.dashboard.test_cluster_configuration.ClusterConfigurationTest)
fail | 3627207 | | 2019-02-22 22:01:35 | 2019-02-22 22:14:20 | 2019-02-22 23:14:20 | 1:00:00 | 0:48:47 | 0:11:13 | smithi | master | centos | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_latest.yaml} tasks/dashboard.yaml} | 2 |
Failure Reason: Test failure: setUpClass (tasks.mgr.dashboard.test_health.HealthTest)
dead | 3627208 | | 2019-02-22 22:01:36 | 2019-02-22 22:14:20 | 2019-02-23 10:16:51 | 12:02:31 | | | smithi | master | rhel | 7.5 | rados/objectstore/{backends/filestore-idempotent-aio-journal.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 |
dead | 3627209 | | 2019-02-22 22:01:36 | 2019-02-22 22:16:21 | 2019-02-23 10:18:49 | 12:02:28 | | | smithi | master | rhel | 7.5 | rados/objectstore/{backends/filestore-idempotent-aio-journal.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 |
fail | 3627210 | | 2019-02-22 22:01:37 | 2019-02-22 22:16:21 | 2019-02-22 22:42:21 | 0:26:00 | 0:12:41 | 0:13:19 | smithi | master | ubuntu | 16.04 | rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/mon-seesaw.yaml} | 1 |
Failure Reason: Command failed (workunit test mon/mon-seesaw.sh) on smithi170 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ef5b49ece4f22d5355ba89fcf165071c56ca7c9f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-seesaw.sh'
fail | 3627211 | | 2019-02-22 22:01:37 | 2019-02-22 22:16:31 | 2019-02-22 22:40:31 | 0:24:00 | 0:13:01 | 0:10:59 | smithi | master | ubuntu | 16.04 | rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/mon-seesaw.yaml} | 1 |
Failure Reason: Command failed (workunit test mon/mon-seesaw.sh) on smithi204 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ef5b49ece4f22d5355ba89fcf165071c56ca7c9f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-seesaw.sh'
fail | 3627212 | | 2019-02-22 22:01:38 | 2019-02-22 22:17:38 | 2019-02-23 02:37:48 | 4:20:10 | 4:13:06 | 0:07:04 | smithi | master | rhel | 7.5 | rados/objectstore/{backends/filestore-idempotent.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 |
Failure Reason: Command failed on smithi105 with status 1: 'cp -a /home/ubuntu/cephtest/ceph.data/test.client.0 /home/ubuntu/cephtest/archive/idempotent_failure'
dead | 3627213 | | 2019-02-22 22:01:38 | 2019-02-22 22:17:53 | 2019-02-23 10:20:15 | 12:02:22 | | | smithi | master | rhel | 7.5 | rados/objectstore/{backends/filestore-idempotent.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 |
pass | 3627214 | | 2019-02-22 22:01:38 | 2019-02-22 22:17:56 | 2019-02-22 23:03:56 | 0:46:00 | 0:32:07 | 0:13:53 | smithi | master | centos | 7.5 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml msgr/simple.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 |
pass | 3627215 | | 2019-02-22 22:01:39 | 2019-02-22 22:18:05 | 2019-02-22 23:06:05 | 0:48:00 | 0:32:32 | 0:15:28 | smithi | master | centos | 7.5 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml msgr/simple.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 |
fail | 3627216 | | 2019-02-22 22:01:39 | 2019-02-22 22:18:19 | 2019-02-22 22:52:18 | 0:33:59 | 0:23:27 | 0:10:32 | smithi | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/insights.yaml} | 2 |
Failure Reason: Test failure: test_health_history (tasks.mgr.test_insights.TestInsights)
fail | 3627217 | | 2019-02-22 22:01:39 | 2019-02-22 22:18:21 | 2019-02-22 22:48:21 | 0:30:00 | 0:18:49 | 0:11:11 | smithi | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/insights.yaml} | 2 |
Failure Reason: Test failure: test_insights_health (tasks.mgr.test_insights.TestInsights)
fail | 3627218 | | 2019-02-22 22:01:40 | 2019-02-22 22:20:07 | 2019-02-22 22:54:07 | 0:34:00 | 0:24:05 | 0:09:55 | smithi | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/module_selftest.yaml} | 2 |
Failure Reason: Test failure: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest)
fail | 3627219 | | 2019-02-22 22:01:40 | 2019-02-22 22:20:11 | 2019-02-22 22:52:11 | 0:32:00 | 0:21:22 | 0:10:38 | smithi | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/module_selftest.yaml} | 2 |
Failure Reason: Test failure: test_diskprediction_local (tasks.mgr.test_module_selftest.TestModuleSelftest)
fail | 3627220 | | 2019-02-22 22:01:41 | 2019-02-22 22:20:15 | 2019-02-23 00:46:16 | 2:26:01 | 1:21:12 | 1:04:49 | smithi | master | centos | 7.5 | rados/standalone/{supported-random-distro$/{centos_latest.yaml} workloads/mon.yaml} | 1 |
Failure Reason: Command failed (workunit test mon/mon-osdmap-prune.sh) on smithi033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ef5b49ece4f22d5355ba89fcf165071c56ca7c9f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-osdmap-prune.sh'
fail | 3627221 | | 2019-02-22 22:01:41 | 2019-02-22 22:20:28 | 2019-02-23 00:40:30 | 2:20:02 | 2:12:06 | 0:07:56 | smithi | master | centos | 7.5 | rados/standalone/{supported-random-distro$/{centos_latest.yaml} workloads/mon.yaml} | 1 |
Failure Reason: Command failed (workunit test mon/mon-osdmap-prune.sh) on smithi013 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ef5b49ece4f22d5355ba89fcf165071c56ca7c9f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-osdmap-prune.sh'
pass | 3627222 | | 2019-02-22 22:01:42 | 2019-02-22 22:20:48 | 2019-02-22 23:00:47 | 0:39:59 | 0:29:01 | 0:10:58 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2 |
pass | 3627223 | | 2019-02-22 22:01:42 | 2019-02-22 22:22:00 | 2019-02-22 23:04:00 | 0:42:00 | 0:30:02 | 0:11:58 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2 |