Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
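In the rows below, Runtime is the wall-clock span from Started to Updated, and where both sub-times are recorded it equals Duration (time actually running) plus In Waiting (time queued). For example, job 3312554: 0:16:32 + 0:13:27 = 0:29:59.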
fail 3312554 2018-12-06 22:22:32 2018-12-06 22:22:57 2018-12-06 22:52:56 0:29:59 0:16:32 0:13:27 smithi wip-addrvec rhel 7.5 rados/upgrade/luminous-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} supported-random-distro$/{rhel_latest.yaml} thrashosds-health.yaml} 3
Failure Reason:

Command failed on smithi118 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph -- tell 'mon.*' injectargs --mon_health_to_clog=false"
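Exit status 124 is produced by the timeout(1) wrapper visible in the command itself ("timeout 120 ceph ... tell 'mon.*' injectargs ..."): the ceph tell call did not complete within its 120-second budget. A minimal shell illustration of where that status comes from:

    timeout 2 sleep 5    # sleep is killed when the 2-second limit expires
    echo $?              # prints 124, timeout(1)'s "time limit exceeded" status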

fail 3312555 2018-12-06 22:22:33 2018-12-06 22:24:37 2018-12-06 23:02:36 0:37:59 0:08:48 0:29:11 smithi wip-addrvec ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml msgr/simple.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} 3
Failure Reason:

Command crashed: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph_test_cls_rbd --gtest_filter=-TestClsRbd.get_features:TestClsRbd.parents'"
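The --gtest_filter argument uses googletest's filter syntax: the leading '-' turns the colon-separated list into an exclusion, so the run executes every cls_rbd test except TestClsRbd.get_features and TestClsRbd.parents. "Command crashed" (as opposed to "Command failed") typically means the process died without reporting an exit status, e.g. it was killed by a signal; the same crash recurs with a jewel client in job 3312565 below. The filter syntax in general (illustrative invocation):

    # patterns before '-' are included, patterns after it are excluded
    ceph_test_cls_rbd --gtest_filter='TestClsRbd.*-TestClsRbd.get_features'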

fail 3312556 2018-12-06 22:22:33 2018-12-06 22:24:37 2018-12-06 22:52:36 0:27:59 0:21:27 0:06:32 smithi wip-addrvec rhel 7.5 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_latest.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_invalid_user_id (tasks.mgr.dashboard.test_rgw.RgwApiCredentialsTest)

fail 3312557 2018-12-06 22:22:34 2018-12-06 22:24:37 2018-12-06 23:00:36 0:35:59 0:05:58 0:30:01 smithi wip-addrvec centos 7.5 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml msgr/async.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 3
Failure Reason:

Command failed on smithi202 with status 1: '\n sudo yum -y install rbd-fuse\n '

pass 3312558 2018-12-06 22:22:35 2018-12-06 22:24:37 2018-12-06 23:00:37 0:36:00 0:20:32 0:15:28 smithi wip-addrvec centos 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml supported-random-distro$/{centos_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
pass 3312559 2018-12-06 22:22:36 2018-12-06 22:24:50 2018-12-06 23:04:50 0:40:00 0:27:11 0:12:49 smithi wip-addrvec centos 7.5 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
fail 3312560 2018-12-06 22:22:36 2018-12-06 22:24:54 2018-12-06 22:48:53 0:23:59 0:09:28 0:14:31 smithi wip-addrvec ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/osd-delay.yaml msgr/simple.yaml rados.yaml rocksdb.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 3
Failure Reason:

Command failed on smithi038 with status 1: 'CEPH_CLIENT_ID=2 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op cache_try_flush 50 --op cache_flush 50 --op cache_evict 50 --op delete 50 --pool base'
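ceph_test_rados generates a randomized object workload, and each "--op <name> <weight>" pair is a relative weight: with read at 100 and the other ops at 50, reads are chosen twice as often as any single other op. The cache_try_flush/cache_flush/cache_evict ops only apply to a cache tier, which is why this cache-snaps variant targets --pool base (the base pool under the cache tier); jobs 3312567 and 3312572 below fail in the same harness. A reduced sketch of the same invocation style, with illustrative weights:

    # two reads for every write, 100 ops total, against pool "base"
    ceph_test_rados --max-ops 100 --objects 50 --op read 100 --op write 50 --pool base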

pass 3312561 2018-12-06 22:22:37 2018-12-06 22:24:55 2018-12-06 22:44:55 0:20:00 0:07:00 0:13:00 smithi wip-addrvec ubuntu 16.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/rados_striper.yaml} 2
fail 3312562 2018-12-06 22:22:38 2018-12-06 22:24:56 2018-12-06 22:42:55 0:17:59 0:10:35 0:07:24 smithi wip-addrvec rhel 7.5 rados/singleton/{all/mon-seesaw.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

too many values to unpack
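This is a bare Python ValueError escaping from the task code (presumably the mon-seesaw task, given this job's description) rather than a failing ceph command: it is raised when an iterable is unpacked into fewer names than it yields, and the traceback in the job's teuthology.log pinpoints the line. The error class in one line:

    python -c 'a, b = (1, 2, 3)'    # ValueError: too many values to unpack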

fail 3312563 2018-12-06 22:22:39 2018-12-06 22:26:36 2018-12-06 23:36:36 1:10:00 0:57:32 0:12:28 smithi wip-addrvec ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/hammer.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml msgr/async.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} 3
Failure Reason:

Command failed on smithi188 with status 1: 'sudo ceph --cluster ceph osd crush tunables hammer'
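"ceph osd crush tunables hammer" switches the cluster's CRUSH tunables profile to the hammer-release defaults so hammer-era clients can compute placements; the identical command also fails in job 3312570 below, which suggests a regression in this path on the wip-addrvec branch rather than a flaky host. The active profile can be inspected with:

    ceph osd crush show-tunables    # dumps the currently applied CRUSH tunables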

pass 3312564 2018-12-06 22:22:40 2018-12-06 22:26:38 2018-12-06 23:00:37 0:33:59 0:26:43 0:07:16 smithi wip-addrvec ubuntu 16.04 rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/mon.yaml} 1
fail 3312565 2018-12-06 22:22:40 2018-12-06 22:26:50 2018-12-06 22:58:50 0:32:00 0:12:34 0:19:26 smithi wip-addrvec centos 7.5 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml msgr/random.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} 3
Failure Reason:

Command crashed: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph_test_cls_rbd --gtest_filter=-TestClsRbd.get_features:TestClsRbd.parents'"

fail 3312566 2018-12-06 22:22:41 2018-12-06 22:28:39 2018-12-06 23:32:39 1:04:00 0:52:09 0:11:51 smithi wip-addrvec centos 7.5 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} 2
Failure Reason:

failed to recover before timeout expired
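"failed to recover before timeout expired" comes from the thrashing harness's recovery watchdog: after OSD thrashing stops, the run waits a bounded time for every PG to return to active+clean and aborts the job when they do not. The condition it is effectively polling for can be checked by hand:

    ceph pg stat    # recovery is complete when all PGs report active+clean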

fail 3312567 2018-12-06 22:22:42 2018-12-06 22:28:39 2018-12-06 23:08:39 0:40:00 0:12:33 0:27:27 smithi wip-addrvec centos 7.5 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml msgr/simple.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 3
Failure Reason:

Command failed on smithi163 with status 1: 'CEPH_CLIENT_ID=2 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'

fail 3312568 2018-12-06 22:22:43 2018-12-06 22:28:39 2018-12-07 00:12:40 1:44:01 1:33:27 0:10:34 smithi wip-addrvec ubuntu 16.04 rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/osd.yaml} 1
Failure Reason:

Command failed (workunit test osd/osd-fast-mark-down.sh) on smithi145 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0d5b0ae903646b928a82a6816096be1e569f9c50 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-fast-mark-down.sh'
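The long command is the standard workunit wrapper, and every piece of it is visible above: the qa tree is cloned at CEPH_REF (commit 0d5b0ae9...), the CEPH_* variables point the script at the test cluster, and the script runs under a 3h timeout; status 1 is the script's own exit code, not a harness timeout. Stripped of the plumbing, it amounts to roughly:

    # simplified shape of the workunit invocation above
    cd "$TESTDIR/mnt.0/client.0/tmp" && timeout 3h "$CEPH_ROOT/qa/standalone/osd/osd-fast-mark-down.sh"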

dead 3312569 2018-12-06 22:22:43 2018-12-06 22:28:39 2018-12-07 10:31:04 12:02:25 smithi wip-addrvec rhel 7.5 rados/monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/force-sync-many.yaml workloads/rados_api_tests.yaml} 2
fail 3312570 2018-12-06 22:22:44 2018-12-06 22:28:51 2018-12-06 23:42:52 1:14:01 0:58:24 0:15:37 smithi wip-addrvec ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml msgr/async.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} 3
Failure Reason:

Command failed on smithi121 with status 1: 'sudo ceph --cluster ceph osd crush tunables hammer'

fail 3312571 2018-12-06 22:22:45 2018-12-06 22:28:56 2018-12-06 23:06:56 0:38:00 0:30:58 0:07:02 smithi wip-addrvec rhel 7.5 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_latest.yaml} tasks/dashboard.yaml} 2
Failure Reason:

"2018-12-06 22:45:55.276025 mon.a (mon.0) 112 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log

fail 3312572 2018-12-06 22:22:46 2018-12-06 22:30:37 2018-12-06 22:58:36 0:27:59 0:12:39 0:15:20 smithi wip-addrvec centos 7.5 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml msgr/random.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 3
Failure Reason:

Command failed on smithi144 with status 1: 'CEPH_CLIENT_ID=2 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op cache_try_flush 50 --op cache_flush 50 --op cache_evict 50 --op delete 50 --pool base'