User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
kchai | 2019-10-20 16:51:32 | 2019-10-20 16:52:02 | 2019-10-20 21:08:06 | 4:16:04 | rados | wip-kefu-testing-2019-10-18-1835 | mira | c5a53ba | 4 | 14 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 4427852 | 2019-10-20 16:51:43 | 2019-10-20 16:52:02 | 2019-10-20 17:44:02 | 0:52:00 | 0:22:52 | 0:29:08 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 | |
Failure Reason: Command failed on mira075 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph_test_cls_rbd --gtest_filter=-TestClsRbd.get_features:TestClsRbd.parents'"
fail | 4427853 | 2019-10-20 16:51:44 | 2019-10-20 16:52:02 | 2019-10-20 17:12:01 | 0:19:59 | 0:08:41 | 0:11:18 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/module_selftest.yaml} | 2 | |
Failure Reason: Test failure: test_devicehealth (tasks.mgr.test_module_selftest.TestModuleSelftest)
fail | 4427854 | 2019-10-20 16:51:45 | 2019-10-20 16:52:02 | 2019-10-20 21:08:06 | 4:16:04 | 3:53:40 | 0:22:24 | mira | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2 | |
Failure Reason: Command failed (workunit test cls/test_cls_rbd.sh) on mira088 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=975611fe528bc968b8e87b6393464258bd26f31b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'
fail | 4427855 | 2019-10-20 16:51:46 | 2019-10-20 16:52:02 | 2019-10-20 18:06:02 | 1:14:00 | 1:02:15 | 0:11:45 | mira | master | centos | 7.6 | rados/singleton/{all/test-crash.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-avl.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason: Command failed on mira035 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'
pass | 4427856 | 2019-10-20 16:51:47 | 2019-10-20 16:52:02 | 2019-10-20 17:42:02 | 0:50:00 | 0:35:24 | 0:14:36 | mira | master | centos | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |
pass | 4427857 | 2019-10-20 16:51:48 | 2019-10-20 16:52:02 | 2019-10-20 17:40:02 | 0:48:00 | 0:34:08 | 0:13:52 | mira | master | centos | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |
pass | 4427858 | 2019-10-20 16:51:49 | 2019-10-20 16:52:02 | 2019-10-20 17:32:02 | 0:40:00 | 0:29:08 | 0:10:52 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 | |
fail | 4427859 | 2019-10-20 16:51:49 | 2019-10-20 16:52:06 | 2019-10-20 19:52:08 | 3:00:02 | 2:40:43 | 0:19:19 | mira | master | rhel | 7.7 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} thrashosds-health.yaml} | 4 | |
Failure Reason: Command failed on mira063 with status 22: "sudo ceph --cluster ceph --mon-client-directed-command-retry 5 tell 'mon.*' injectargs -- --no-mon-health-to-clog"
fail | 4427860 | 2019-10-20 16:51:50 | 2019-10-20 16:53:33 | 2019-10-20 17:33:33 | 0:40:00 | 0:20:38 | 0:19:22 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 | |
Failure Reason: Command failed on mira057 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph_test_cls_rbd --gtest_filter=-TestClsRbd.get_features:TestClsRbd.parents'"
pass | 4427861 | 2019-10-20 16:51:51 | 2019-10-20 16:55:40 | 2019-10-20 17:33:39 | 0:37:59 | 0:25:33 | 0:12:26 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_omap_write.yaml} | 1 | |
fail | 4427862 | 2019-10-20 16:51:52 | 2019-10-20 17:12:04 | 2019-10-20 19:14:05 | 2:02:01 | 1:47:44 | 0:14:17 | mira | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/osd.yaml} | 1 | |
Failure Reason: Command failed (workunit test osd/osd-bluefs-volume-ops.sh) on mira118 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=975611fe528bc968b8e87b6393464258bd26f31b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-bluefs-volume-ops.sh'
fail | 4427863 | 2019-10-20 16:51:53 | 2019-10-20 17:32:21 | 2019-10-20 17:56:21 | 0:24:00 | 0:15:27 | 0:08:33 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
Failure Reason: Command failed on mira046 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
fail | 4427864 | 2019-10-20 16:51:54 | 2019-10-20 17:33:51 | 2019-10-20 18:59:52 | 1:26:01 | 1:12:50 | 0:13:11 | mira | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-avl.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2 | |
Failure Reason: Command failed on mira107 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'
fail | 4427865 | 2019-10-20 16:51:54 | 2019-10-20 17:33:51 | 2019-10-20 18:39:51 | 1:06:00 | 0:56:02 | 0:09:58 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-avl.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/module_selftest.yaml} | 2 | |
Failure Reason: Command failed on mira032 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'
fail | 4427866 | 2019-10-20 16:51:55 | 2019-10-20 17:40:19 | 2019-10-20 18:30:19 | 0:50:00 | 0:22:31 | 0:27:29 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 | |
Failure Reason: Command failed on mira083 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph_test_cls_rbd --gtest_filter=-TestClsRbd.get_features:TestClsRbd.parents'"
fail | 4427867 | 2019-10-20 16:51:56 | 2019-10-20 17:42:22 | 2019-10-20 18:12:22 | 0:30:00 | 0:17:58 | 0:12:02 | mira | master | centos | 7.6 | rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason: Command crashed: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph_test_admin_socket_output --all'"
fail | 4427868 | 2019-10-20 16:51:57 | 2019-10-20 17:44:21 | 2019-10-20 18:34:21 | 0:50:00 | 0:39:38 | 0:10:22 | mira | master | rhel | 7.7 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} thrashosds-health.yaml} | 4 | |
Failure Reason: Command failed on mira027 with status 22: "sudo ceph --cluster ceph --mon-client-directed-command-retry 5 tell 'mon.*' injectargs -- --no-mon-health-to-clog"
fail | 4427869 | 2019-10-20 16:51:58 | 2019-10-20 17:56:37 | 2019-10-20 18:38:37 | 0:42:00 | 0:19:53 | 0:22:07 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} tasks/orchestrator_cli.yaml} | 2 | |
Failure Reason: "2019-10-20T18:36:25.596907+0000 mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client mira057:x (4675), after 300.351 seconds" in cluster log