User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
sage | 2019-01-13 15:02:25 | 2019-01-13 15:04:44 | 2019-01-14 03:07:11 | 12:02:27 | rados | wip-sage4-testing-2019-01-12-0651 | smithi | 983f268 | 6 | 6 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 3458343 | 2019-01-13 15:02:30 | 2019-01-13 15:04:44 | 2019-01-14 03:07:11 | 12:02:27 | | | smithi | master | rhel | 7.5 | rados/singleton-nomsgr/{all/msgr.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 |
fail | 3458344 | 2019-01-13 15:02:31 | 2019-01-13 15:05:14 | 2019-01-13 16:25:15 | 1:20:01 | 1:07:50 | 0:12:11 | smithi | master | ubuntu | 16.04 | rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/osd.yaml} | 1 |
Failure Reason:
Command failed (workunit test osd/osd-rep-recov-eio.sh) on smithi104 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=983f2685ad3afaea8d10031bd48e25bd6cb89340 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-rep-recov-eio.sh'
fail | 3458345 | 2019-01-13 15:02:32 | 2019-01-13 15:05:17 | 2019-01-13 15:49:16 | 0:43:59 | 0:35:36 | 0:08:23 | smithi | master | rhel | 7.5 | rados/standalone/{supported-random-distro$/{rhel_latest.yaml} workloads/scrub.yaml} | 1 |
Failure Reason:
Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi159 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=983f2685ad3afaea8d10031bd48e25bd6cb89340 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'
pass | 3458346 | 2019-01-13 15:02:32 | 2019-01-13 15:05:22 | 2019-01-13 15:59:22 | 0:54:00 | 0:42:43 | 0:11:17 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 |
pass | 3458347 | 2019-01-13 15:02:33 | 2019-01-13 15:07:15 | 2019-01-13 18:25:18 | 3:18:03 | 3:03:57 | 0:14:06 | smithi | master | centos | 7.5 | rados/upgrade/luminous-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} supported-random-distro$/{centos_latest.yaml} thrashosds-health.yaml} | 3 |
fail | 3458348 | 2019-01-13 15:02:34 | 2019-01-13 15:07:15 | 2019-01-13 15:37:14 | 0:29:59 | 0:22:16 | 0:07:43 | smithi | master | rhel | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_latest.yaml} tasks/dashboard.yaml} | 2 |
Failure Reason:
Test failure: test_add_osd_flag (tasks.mgr.dashboard.test_osd.OsdFlagsTest)
pass | 3458349 | 2019-01-13 15:02:35 | 2019-01-13 15:08:05 | 2019-01-13 15:40:05 | 0:32:00 | 0:21:09 | 0:10:51 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 |
fail | 3458350 | 2019-01-13 15:02:36 | 2019-01-13 15:09:31 | 2019-01-13 15:35:30 | 0:25:59 | 0:16:38 | 0:09:21 | smithi | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/erasure-code.yaml} | 1 |
Failure Reason:
Command failed (workunit test erasure-code/test-erasure-eio.sh) on smithi062 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=983f2685ad3afaea8d10031bd48e25bd6cb89340 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-eio.sh'
pass | 3458351 | 2019-01-13 15:02:37 | 2019-01-13 15:13:02 | 2019-01-13 15:37:01 | 0:23:59 | 0:10:38 | 0:13:21 | smithi | master | centos | 7.5 | rados/multimon/{clusters/9.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/mon_clock_no_skews.yaml} | 3 |
fail | 3458352 | 2019-01-13 15:02:38 | 2019-01-13 15:13:02 | 2019-01-13 15:31:01 | 0:17:59 | 0:07:34 | 0:10:25 | smithi | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/module_selftest.yaml} | 2 |
Failure Reason:
Test failure: test_devicehealth (tasks.mgr.test_module_selftest.TestModuleSelftest)
pass | 3458353 | 2019-01-13 15:02:38 | 2019-01-13 15:15:53 | 2019-01-13 15:47:53 | 0:32:00 | 0:17:15 | 0:14:45 | smithi | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 |
pass | 3458354 | 2019-01-13 15:02:39 | 2019-01-13 15:21:17 | 2019-01-13 16:11:16 | 0:49:59 | 0:34:28 | 0:15:31 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 |
fail | 3458355 | 2019-01-13 15:02:40 | 2019-01-13 15:27:24 | 2019-01-13 15:49:23 | 0:21:59 | 0:12:37 | 0:09:22 | smithi | master | centos | 7.5 | rados/standalone/{supported-random-distro$/{centos_latest.yaml} workloads/mon.yaml} | 1 |
Failure Reason:
Command failed (workunit test mon/mon-handle-forward.sh) on smithi139 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=983f2685ad3afaea8d10031bd48e25bd6cb89340 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-handle-forward.sh'