User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
kchai | 2019-07-23 06:42:58 | 2019-07-23 06:45:07 | 2019-07-23 18:56:00 | 12:10:53 | rados | wip-ceph-mutex-kefu | smithi | e6f1be0 | 7 | 13 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 4142085 | 2019-07-23 06:43:10 | 2019-07-23 06:44:10 | 2019-07-23 07:16:09 | 0:31:59 | 0:14:01 | 0:17:58 | smithi | master | rhel | 7.6 | rados/multimon/{clusters/9.yaml msgr-failures/few.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/mon_clock_no_skews.yaml} | 3 | |
pass | 4142086 | 2019-07-23 06:43:11 | 2019-07-23 06:44:10 | 2019-07-23 07:26:09 | 0:41:59 | 0:27:13 | 0:14:46 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4142087 | 2019-07-23 06:43:12 | 2019-07-23 06:44:14 | 2019-07-23 13:28:20 | 6:44:06 | 6:26:12 | 0:17:54 | smithi | master | centos | 7.6 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
dead | 4142088 | 2019-07-23 06:43:13 | 2019-07-23 06:45:07 | 2019-07-23 18:47:30 | 12:02:23 | | | smithi | master | rhel | 7.6 | rados/rest/{mgr-restful.yaml supported-random-distro$/{rhel_7.yaml}} | 1 |
fail | 4142089 | 2019-07-23 06:43:14 | 2019-07-23 06:45:54 | 2019-07-23 10:05:56 | 3:20:02 | 3:08:48 | 0:11:14 | smithi | master | ubuntu | 18.04 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi066 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6f1be0ff1246b698f9ed4f5b0ba229d89af89b8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
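Several cephtool workunits in this run end with "status 124". That is the exit code GNU coreutils `timeout` returns when the command it wraps exceeds its limit; since the teuthology invocation wraps test.sh in `timeout 3h`, status 124 means the workunit overran its 3-hour budget rather than failing an assertion. A minimal demonstration of that convention:

```shell
# GNU 'timeout' kills the wrapped command when the time limit expires
# and itself exits with status 124, which distinguishes a timeout from
# an ordinary test failure.
timeout 1 sleep 5
echo "exit status: $?"   # prints "exit status: 124"
```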
fail | 4142090 | 2019-07-23 06:43:15 | 2019-07-23 06:47:47 | 2019-07-23 09:13:48 | 2:26:01 | 2:15:37 | 0:10:24 | smithi | master | rhel | 7.6 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} thrashosds-health.yaml} | 4 | |
Failure Reason:
timed out waiting for admin_socket to appear after osd.3 restart
pass | 4142091 | 2019-07-23 06:43:15 | 2019-07-23 06:47:56 | 2019-07-23 07:51:56 | 1:04:00 | 0:54:49 | 0:09:11 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
fail | 4142092 | 2019-07-23 06:43:16 | 2019-07-23 06:48:00 | 2019-07-23 07:15:59 | 0:27:59 | 0:15:49 | 0:12:10 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason:
Test failure: test_full_health (tasks.mgr.dashboard.test_health.HealthTest)
fail | 4142093 | 2019-07-23 06:43:17 | 2019-07-23 06:49:54 | 2019-07-23 07:09:53 | 0:19:59 | 0:14:24 | 0:05:35 | smithi | master | rhel | 7.6 | rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
Fuse mount failed to populate /sys/ after 31 seconds
fail | 4142094 | 2019-07-23 06:43:18 | 2019-07-23 06:49:55 | 2019-07-23 10:13:57 | 3:24:02 | 3:15:27 | 0:08:35 | smithi | master | rhel | 7.6 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi162 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6f1be0ff1246b698f9ed4f5b0ba229d89af89b8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 4142095 | 2019-07-23 06:43:19 | 2019-07-23 06:49:55 | 2019-07-23 07:23:54 | 0:33:59 | 0:20:54 | 0:13:05 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/module_selftest.yaml} | 2 | |
Failure Reason:
"2019-07-23T07:11:46.743321+0000 mds.c (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi121:x (4629), after 302.873 seconds" in cluster log
pass | 4142096 | 2019-07-23 06:43:20 | 2019-07-23 06:51:38 | 2019-07-23 07:27:37 | 0:35:59 | 0:28:29 | 0:07:30 | smithi | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} | 2 | |
fail | 4142097 | 2019-07-23 06:43:21 | 2019-07-23 06:51:46 | 2019-07-23 07:11:46 | 0:20:00 | 0:11:43 | 0:08:17 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/progress.yaml} | 2 | |
Failure Reason:
Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress)
dead | 4142098 | 2019-07-23 06:43:22 | 2019-07-23 06:53:32 | 2019-07-23 18:56:00 | 12:02:28 | | | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 |
fail | 4142099 | 2019-07-23 06:43:23 | 2019-07-23 06:53:34 | 2019-07-23 07:39:33 | 0:45:59 | 0:36:18 | 0:09:41 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 |
Failure Reason:
"2019-07-23T07:24:29.886979+0000 mon.a (mon.0) 2025 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 4142100 | 2019-07-23 06:43:24 | 2019-07-23 06:53:39 | 2019-07-23 10:17:41 | 3:24:02 | 3:11:10 | 0:12:52 | smithi | master | centos | 7.6 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi064 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6f1be0ff1246b698f9ed4f5b0ba229d89af89b8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 4142101 | 2019-07-23 06:43:24 | 2019-07-23 06:53:56 | 2019-07-23 10:13:59 | 3:20:03 | 3:08:48 | 0:11:15 | smithi | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/mon.yaml} | 1 | |
Failure Reason:
Command failed (workunit test mon/mon-bind.sh) on smithi114 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6f1be0ff1246b698f9ed4f5b0ba229d89af89b8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-bind.sh'
fail | 4142102 | 2019-07-23 06:43:25 | 2019-07-23 06:57:13 | 2019-07-23 10:23:16 | 3:26:03 | 3:16:11 | 0:09:52 | smithi | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_python.yaml} | 2 | |
Failure Reason:
Command failed (workunit test rados/test_python.sh) on smithi059 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6f1be0ff1246b698f9ed4f5b0ba229d89af89b8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh'
pass | 4142103 | 2019-07-23 06:43:26 | 2019-07-23 06:57:38 | 2019-07-23 07:15:37 | 0:17:59 | 0:08:33 | 0:09:26 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/sample_radosbench.yaml} | 1 | |
fail | 4142104 | 2019-07-23 06:43:27 | 2019-07-23 07:00:00 | 2019-07-23 10:22:03 | 3:22:03 | 3:15:05 | 0:06:58 | smithi | master | rhel | 7.6 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi164 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6f1be0ff1246b698f9ed4f5b0ba229d89af89b8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 4142105 | 2019-07-23 06:43:28 | 2019-07-23 07:01:38 | 2019-07-23 07:17:37 | 0:15:59 | 0:10:30 | 0:05:29 | smithi | master | centos | 7.6 | rados/objectstore/{backends/objectcacher-stress.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test osdc/stress_objectcacher.sh) on smithi166 with status 134: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6f1be0ff1246b698f9ed4f5b0ba229d89af89b8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/osdc/stress_objectcacher.sh'
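Unlike the status-124 timeouts, job 4142105 exited with status 134. Shells report a process killed by a signal as 128 plus the signal number, and 134 = 128 + 6 (SIGABRT), the signature of `abort()` or a failed `assert()` in the workunit rather than a timeout. A quick demonstration of the convention:

```shell
# A process terminated by SIGABRT (signal 6) is reported by the shell
# as exit status 128 + 6 = 134, so status 134 indicates an abort/assert,
# not a 'timeout 3h' expiry (which would be 124).
sh -c 'kill -s ABRT $$'
echo "exit status: $?"   # prints "exit status: 134"
```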
pass | 4142106 | 2019-07-23 06:43:29 | 2019-07-23 07:08:09 | 2019-07-23 07:38:08 | 0:29:59 | 0:22:33 | 0:07:26 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 |