User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
sage | 2018-12-06 14:52:10 | 2018-12-06 16:51:55 | 2018-12-07 05:06:19 | 12:14:24 | rados | wip-sage-testing-2018-12-05-1258 | smithi | e232ed1 | 8 | 26 | 3 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 3311873 | 2018-12-06 14:52:16 | 2018-12-06 16:51:55 | 2018-12-06 17:33:54 | 0:41:59 | 0:30:06 | 0:11:53 | smithi | wip-addrvec | centos | 7.5 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
pass | 3311874 | 2018-12-06 14:52:17 | 2018-12-06 16:51:55 | 2018-12-06 17:21:54 | 0:29:59 | 0:18:46 | 0:11:13 | smithi | wip-addrvec | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 |
fail | 3311875 | 2018-12-06 14:52:18 | 2018-12-06 16:51:55 | 2018-12-06 17:13:54 | 0:21:59 | 0:11:02 | 0:10:57 | smithi | wip-addrvec | ubuntu | 18.04 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e232ed1e9fda5674e2bd2091b3053384471252ab TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 3311876 | 2018-12-06 14:52:19 | 2018-12-06 16:52:07 | 2018-12-06 17:34:06 | 0:41:59 | 0:16:29 | 0:25:30 | smithi | wip-addrvec | rhel | 7.5 | rados/upgrade/luminous-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} supported-random-distro$/{rhel_latest.yaml} thrashosds-health.yaml} | 3 |
Failure Reason:
Command failed on smithi192 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph -- tell 'mon.*' injectargs --mon_health_to_clog=false"
pass | 3311877 | 2018-12-06 14:52:20 | 2018-12-06 16:52:20 | 2018-12-06 17:16:19 | 0:23:59 | 0:11:33 | 0:12:26 | smithi | wip-addrvec | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 |
dead | 3311878 | 2018-12-06 14:52:21 | 2018-12-06 16:53:50 | 2018-12-07 04:56:16 | 12:02:26 | | | smithi | wip-addrvec | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/sync.yaml workloads/rados_api_tests.yaml} | 2 |
fail | 3311879 | 2018-12-06 14:52:21 | 2018-12-06 16:53:55 | 2018-12-06 17:35:54 | 0:41:59 | 0:08:49 | 0:33:10 | smithi | wip-addrvec | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml msgr/simple.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 3 |
Failure Reason:
Command crashed: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph_test_cls_rbd --gtest_filter=-TestClsRbd.get_features:TestClsRbd.parents'"
fail | 3311880 | 2018-12-06 14:52:22 | 2018-12-06 16:53:58 | 2018-12-06 17:49:58 | 0:56:00 | 0:42:56 | 0:13:04 | smithi | wip-addrvec | centos | 7.5 | rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
pass | 3311881 | 2018-12-06 14:52:23 | 2018-12-06 16:54:06 | 2018-12-06 17:18:05 | 0:23:59 | 0:13:16 | 0:10:43 | smithi | wip-addrvec | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 |
fail | 3311882 | 2018-12-06 14:52:24 | 2018-12-06 16:54:11 | 2018-12-06 17:36:11 | 0:42:00 | 0:30:42 | 0:11:18 | smithi | wip-addrvec | rhel | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_latest.yaml} tasks/dashboard.yaml} | 2 |
Failure Reason:
failed to become clean before timeout expired
fail | 3311883 | 2018-12-06 14:52:25 | 2018-12-06 16:54:19 | 2018-12-06 17:14:18 | 0:19:59 | 0:06:03 | 0:13:56 | smithi | wip-addrvec | centos | 7.5 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml msgr/async.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 3 |
Failure Reason:
Command failed on smithi201 with status 1: '\n sudo yum -y install rbd-fuse\n '
fail | 3311884 | 2018-12-06 14:52:26 | 2018-12-06 16:55:44 | 2018-12-06 17:53:44 | 0:58:00 | 0:42:59 | 0:15:01 | smithi | wip-addrvec | centos | 7.5 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml supported-random-distro$/{centos_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 |
Failure Reason:
"2018-12-06 17:46:47.103274 osd.5 (osd.5) 1 : cluster [WRN] Monitor daemon marked osd.5 down, but it is still running" in cluster log
fail | 3311885 | 2018-12-06 14:52:26 | 2018-12-06 16:55:47 | 2018-12-06 17:49:47 | 0:54:00 | 0:42:53 | 0:11:07 | smithi | wip-addrvec | centos | 7.5 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 3311886 | 2018-12-06 14:52:27 | 2018-12-06 16:55:54 | 2018-12-06 17:17:53 | 0:21:59 | 0:10:12 | 0:11:47 | smithi | wip-addrvec | ubuntu | 16.04 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi061 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e232ed1e9fda5674e2bd2091b3053384471252ab TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 3311887 | 2018-12-06 14:52:28 | 2018-12-06 16:55:57 | 2018-12-06 17:35:56 | 0:39:59 | 0:09:18 | 0:30:41 | smithi | wip-addrvec | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/osd-delay.yaml msgr/simple.yaml rados.yaml rocksdb.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 3 |
Failure Reason:
Command failed on smithi076 with status 1: 'CEPH_CLIENT_ID=2 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op cache_try_flush 50 --op cache_flush 50 --op cache_evict 50 --op delete 50 --pool base'
pass | 3311888 | 2018-12-06 14:52:29 | 2018-12-06 16:56:06 | 2018-12-06 17:50:06 | 0:54:00 | 0:37:12 | 0:16:48 | smithi | wip-addrvec | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 |
fail | 3311889 | 2018-12-06 14:52:30 | 2018-12-06 16:56:07 | 2018-12-06 17:52:07 | 0:56:00 | 0:40:04 | 0:15:56 | smithi | wip-addrvec | ubuntu | 16.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/rados_striper.yaml} | 2 |
Failure Reason:
"2018-12-06 17:17:35.931988 mon.a (mon.0) 149 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive, 2 pgs peering (PG_AVAILABILITY)" in cluster log
dead | 3311890 | 2018-12-06 14:52:31 | 2018-12-06 16:56:07 | 2018-12-07 04:58:29 | 12:02:22 | | | smithi | wip-addrvec | ubuntu | 16.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/one.yaml workloads/snaps-few-objects.yaml} | 2 |
fail | 3311891 | 2018-12-06 14:52:31 | 2018-12-06 16:56:09 | 2018-12-06 17:14:08 | 0:17:59 | 0:11:07 | 0:06:52 | smithi | wip-addrvec | rhel | 7.5 | rados/singleton/{all/mon-seesaw.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 |
Failure Reason:
too many values to unpack
fail | 3311892 | 2018-12-06 14:52:32 | 2018-12-06 16:58:18 | 2018-12-06 18:12:19 | 1:14:01 | 0:57:35 | 0:16:26 | smithi | wip-addrvec | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/hammer.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml msgr/async.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 3 |
Failure Reason:
Command failed on smithi125 with status 1: 'sudo ceph --cluster ceph osd crush tunables hammer'
pass | 3311893 | 2018-12-06 14:52:33 | 2018-12-06 16:58:18 | 2018-12-06 18:12:19 | 1:14:01 | 0:28:41 | 0:45:20 | smithi | wip-addrvec | centos | 7.5 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 |
fail | 3311894 | 2018-12-06 14:52:34 | 2018-12-06 17:00:09 | 2018-12-06 17:22:08 | 0:21:59 | 0:14:01 | 0:07:58 | smithi | wip-addrvec | centos | 7.5 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi003 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e232ed1e9fda5674e2bd2091b3053384471252ab TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 3311895 | 2018-12-06 14:52:35 | 2018-12-06 17:00:09 | 2018-12-06 17:16:08 | 0:15:59 | 0:05:24 | 0:10:35 | smithi | wip-addrvec | ubuntu | 16.04 | rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/mon.yaml} | 1 |
Failure Reason:
Command failed (workunit test mon/misc.sh) on smithi099 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e232ed1e9fda5674e2bd2091b3053384471252ab TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/misc.sh'
pass | 3311896 | 2018-12-06 14:52:36 | 2018-12-06 17:00:09 | 2018-12-06 17:24:08 | 0:23:59 | 0:14:04 | 0:09:55 | smithi | wip-addrvec | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 |
fail | 3311897 | 2018-12-06 14:52:36 | 2018-12-06 17:01:53 | 2018-12-06 18:11:54 | 1:10:01 | 0:45:41 | 0:24:20 | smithi | wip-addrvec | centos | 7.5 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml msgr/random.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 3 |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 3311898 | 2018-12-06 14:52:37 | 2018-12-06 17:01:55 | 2018-12-06 17:39:55 | 0:38:00 | 0:30:50 | 0:07:10 | smithi | wip-addrvec | rhel | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore.yaml supported-random-distro$/{rhel_latest.yaml} tasks/prometheus.yaml} | 2 |
Failure Reason:
failed to become clean before timeout expired
fail | 3311899 | 2018-12-06 14:52:38 | 2018-12-06 17:02:06 | 2018-12-06 17:58:05 | 0:55:59 | 0:42:41 | 0:13:18 | smithi | wip-addrvec | centos | 7.5 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 3311900 | 2018-12-06 14:52:39 | 2018-12-06 17:02:07 | 2018-12-06 17:32:07 | 0:30:00 | 0:14:13 | 0:15:47 | smithi | wip-addrvec | centos | 7.5 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml msgr/simple.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 3 |
Failure Reason:
Command failed on smithi160 with status 1: 'CEPH_CLIENT_ID=2 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 3311901 | 2018-12-06 14:52:40 | 2018-12-06 17:02:26 | 2018-12-06 18:44:28 | 1:42:02 | 1:32:14 | 0:09:48 | smithi | wip-addrvec | ubuntu | 16.04 | rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/osd.yaml} | 1 |
Failure Reason:
Command failed (workunit test osd/osd-fast-mark-down.sh) on smithi149 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e232ed1e9fda5674e2bd2091b3053384471252ab TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-fast-mark-down.sh'
dead | 3311902 | 2018-12-06 14:52:40 | 2018-12-06 17:03:54 | 2018-12-07 05:06:19 | 12:02:25 | | | smithi | wip-addrvec | rhel | 7.5 | rados/singleton-nomsgr/{all/msgr.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 |
fail | 3311903 | 2018-12-06 14:52:41 | 2018-12-06 17:03:54 | 2018-12-06 17:55:54 | 0:52:00 | 0:44:17 | 0:07:43 | smithi | wip-addrvec | rhel | 7.5 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/force-sync-many.yaml workloads/rados_api_tests.yaml} | 2 |
Failure Reason:
"2018-12-06 17:49:59.506200 mon.a (mon.0) 365 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 3311904 | 2018-12-06 14:52:42 | 2018-12-06 17:04:06 | 2018-12-06 18:42:07 | 1:38:01 | 0:58:13 | 0:39:48 | smithi | wip-addrvec | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml msgr/async.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 3 |
Failure Reason:
Command failed on smithi059 with status 1: 'sudo ceph --cluster ceph osd crush tunables hammer'
fail | 3311905 | 2018-12-06 14:52:43 | 2018-12-06 17:04:07 | 2018-12-06 17:26:06 | 0:21:59 | 0:16:19 | 0:05:40 | smithi | wip-addrvec | rhel | 7.5 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi193 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e232ed1e9fda5674e2bd2091b3053384471252ab TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 3311906 | 2018-12-06 14:52:44 | 2018-12-06 17:04:13 | 2018-12-06 17:34:13 | 0:30:00 | 0:19:32 | 0:10:28 | smithi | wip-addrvec | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rgw_snaps.yaml} | 2 |
Failure Reason:
"2018-12-06 17:20:27.706794 mon.b (mon.0) 92 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive, 2 pgs peering (PG_AVAILABILITY)" in cluster log
pass | 3311907 | 2018-12-06 14:52:44 | 2018-12-06 17:04:22 | 2018-12-06 17:56:22 | 0:52:00 | 0:41:25 | 0:10:35 | smithi | wip-addrvec | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/scrub.yaml} | 1 |
fail | 3311908 | 2018-12-06 14:52:45 | 2018-12-06 17:05:28 | 2018-12-06 17:55:28 | 0:50:00 | 0:20:46 | 0:29:14 | smithi | wip-addrvec | rhel | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_latest.yaml} tasks/dashboard.yaml} | 2 |
Failure Reason:
Test failure: test_invalid_user_id (tasks.mgr.dashboard.test_rgw.RgwApiCredentialsTest)
fail | 3311909 | 2018-12-06 14:52:46 | 2018-12-06 17:05:29 | 2018-12-06 17:33:28 | 0:27:59 | 0:12:54 | 0:15:05 | smithi | wip-addrvec | centos | 7.5 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml msgr/random.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 3 |
Failure Reason:
Command failed on smithi130 with status 1: 'CEPH_CLIENT_ID=2 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op cache_try_flush 50 --op cache_flush 50 --op cache_evict 50 --op delete 50 --pool base' |