User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
sage | 2019-09-30 13:27:04 | 2019-09-30 18:26:11 | 2019-10-01 06:42:31 | 12:16:20 | rados | wip-asok-tell | smithi | 06e81d6 | 3 | 13 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 4348348 | 2019-09-30 13:27:15 | 2019-09-30 18:26:11 | 2019-09-30 18:50:10 | 0:23:59 | 0:16:08 | 0:07:51 | smithi | master | centos | 7.6 | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4348349 | 2019-09-30 13:27:16 | 2019-09-30 18:26:11 | 2019-09-30 19:04:10 | 0:37:59 | 0:28:20 | 0:09:39 | smithi | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/scrub.yaml} | 1 | |
Failure Reason: Command failed (workunit test scrub/osd-scrub-dump.sh) on smithi129 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=06e81d654f2e5ccef9e0eebf07db461ca6074a78 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-dump.sh'
fail | 4348350 | 2019-09-30 13:27:17 | 2019-09-30 18:26:14 | 2019-09-30 19:06:13 | 0:39:59 | 0:32:29 | 0:07:30 | smithi | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_python.yaml} | 2 | |
Failure Reason: Command failed (workunit test rados/test_python.sh) on smithi183 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=06e81d654f2e5ccef9e0eebf07db461ca6074a78 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh'
dead | 4348351 | 2019-09-30 13:27:18 | 2019-09-30 18:26:14 | 2019-10-01 06:28:40 | 12:02:26 | | | smithi | master | rhel | 7.6 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{rhel_7.yaml}} | 1 |
fail | 4348352 | 2019-09-30 13:27:19 | 2019-09-30 18:26:15 | 2019-09-30 18:56:14 | 0:29:59 | 0:20:14 | 0:09:45 | smithi | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
Failure Reason: Command failed on smithi158 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
pass | 4348353 | 2019-09-30 13:27:20 | 2019-09-30 18:26:15 | 2019-09-30 19:14:15 | 0:48:00 | 0:39:42 | 0:08:18 | smithi | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
fail | 4348354 | 2019-09-30 13:27:21 | 2019-09-30 18:28:00 | 2019-09-30 18:55:59 | 0:27:59 | 0:17:05 | 0:10:54 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_write.yaml} | 1 | |
Failure Reason: Command failed on smithi201 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
fail | 4348355 | 2019-09-30 13:27:22 | 2019-09-30 18:28:27 | 2019-09-30 19:04:26 | 0:35:59 | 0:24:36 | 0:11:23 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason: Test failure: test_perf_counters_mds_get (tasks.mgr.dashboard.test_perf_counters.PerfCountersControllerTest)
pass | 4348356 | 2019-09-30 13:27:23 | 2019-09-30 18:31:54 | 2019-09-30 19:35:54 | 1:04:00 | 0:28:24 | 0:35:36 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |
fail | 4348357 | 2019-09-30 13:27:24 | 2019-09-30 18:31:54 | 2019-09-30 18:59:53 | 0:27:59 | 0:12:15 | 0:15:44 | smithi | master | ubuntu | 18.04 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashosds-health.yaml} | 4 | |
Failure Reason: Command failed on smithi112 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph -- tell 'mon.*' injectargs --mon_health_to_clog=false"
fail | 4348358 | 2019-09-30 13:27:25 | 2019-09-30 18:32:18 | 2019-09-30 19:16:18 | 0:44:00 | 0:36:52 | 0:07:08 | smithi | master | rhel | 7.6 | rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/erasure-code.yaml} | 1 | |
Failure Reason: Command failed (workunit test erasure-code/test-erasure-eio.sh) on smithi061 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=06e81d654f2e5ccef9e0eebf07db461ca6074a78 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-eio.sh'
fail | 4348359 | 2019-09-30 13:27:26 | 2019-09-30 18:34:57 | 2019-09-30 18:56:56 | 0:21:59 | 0:11:33 | 0:10:26 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_7.yaml} tasks/module_selftest.yaml} | 2 | |
Failure Reason: Test failure: test_devicehealth (tasks.mgr.test_module_selftest.TestModuleSelftest)
fail | 4348360 | 2019-09-30 13:27:27 | 2019-09-30 18:34:57 | 2019-09-30 19:28:57 | 0:54:00 | 0:35:30 | 0:18:30 | smithi | master | centos | 7.6 | rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason: Command failed on smithi068 with status 2: u'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg 2.e mark_unfound_lost delete'
fail | 4348361 | 2019-09-30 13:27:28 | 2019-09-30 18:37:59 | 2019-09-30 19:35:59 | 0:58:00 | 0:46:40 | 0:11:20 | smithi | master | rhel | 7.6 | rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4348362 | 2019-09-30 13:27:29 | 2019-09-30 18:38:14 | 2019-09-30 19:36:14 | 0:58:00 | 0:47:25 | 0:10:35 | smithi | master | rhel | 7.6 | rados/singleton/{all/lost-unfound.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
dead | 4348363 | 2019-09-30 13:27:30 | 2019-09-30 18:40:04 | 2019-10-01 06:42:31 | 12:02:27 | | | smithi | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashosds-health.yaml} | 1 |
fail | 4348364 | 2019-09-30 13:27:31 | 2019-09-30 18:42:17 | 2019-09-30 20:34:17 | 1:52:00 | 1:45:21 | 0:06:39 | smithi | master | rhel | 7.6 | rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/osd.yaml} | 1 | |
Failure Reason: Command failed (workunit test osd/osd-bluefs-volume-ops.sh) on smithi080 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=06e81d654f2e5ccef9e0eebf07db461ca6074a78 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-bluefs-volume-ops.sh'
fail | 4348365 | 2019-09-30 13:27:32 | 2019-09-30 18:44:00 | 2019-09-30 19:21:59 | 0:37:59 | 0:31:48 | 0:06:11 | smithi | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
Failure Reason: Command failed on smithi120 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'