User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|
ideepika | 2021-06-14 09:59:26 | 2021-06-14 10:18:42 | 2021-06-14 22:30:29 | 12:11:47 | rados | wip-yuri7-testing-2021-06-08-0747-octopus | smithi | 8d06216 | 18 | 3 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6171396 | 2021-06-14 10:00:36 | 2021-06-14 10:18:21 | 2021-06-14 10:46:20 | 0:27:59 | 0:16:54 | 0:11:05 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2} | 2 |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6171397 | 2021-06-14 10:00:37 | 2021-06-14 10:18:23 | 2021-06-14 10:20:22 | 0:01:59 | 0 | | smithi | master | ubuntu | 20.04 | rados/cephadm/thrash/0-distro/ubuntu_20.04_kubic_stable | — |
Failure Reason: list index out of range
fail | 6171398 | 2021-06-14 10:00:38 | 2021-06-14 10:18:21 | 2021-06-14 10:53:57 | 0:35:36 | 0:24:05 | 0:11:31 | smithi | master | ubuntu | 20.04 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-stupid supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
fail | 6171399 | 2021-06-14 10:00:39 | 2021-06-14 10:18:21 | 2021-06-14 10:54:53 | 0:36:32 | 0:23:57 | 0:12:35 | smithi | master | ubuntu | 20.04 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/filestore-xfs supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
fail | 6171400 | 2021-06-14 10:00:40 | 2021-06-14 10:18:43 | 2021-06-14 10:20:42 | 0:01:59 | 0 | | smithi | master | ubuntu | 20.04 | rados/cephadm/thrash/0-distro/ubuntu_20.04_kubic_testing | — |
Failure Reason: list index out of range
fail | 6171401 | 2021-06-14 10:00:41 | 2021-06-14 10:18:41 | 2021-06-14 10:34:03 | 0:15:22 | 0:05:33 | 0:09:49 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable task/test_cephadm_repos} | 1 |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi094 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=84f670fe9103bd26db78242e88b91adcb76fa5c6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
fail | 6171402 | 2021-06-14 10:00:42 | 2021-06-14 10:18:42 | 2021-06-14 10:45:34 | 0:26:52 | 0:17:01 | 0:09:51 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2} | 2 |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6171403 | 2021-06-14 10:00:43 | 2021-06-14 10:18:42 | 2021-06-14 10:36:57 | 0:18:15 | 0:04:11 | 0:14:04 | smithi | master | ubuntu | 20.04 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack start} 1-install/mimic 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} | 4 |
Failure Reason: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F20.04%2Fx86_64&ref=mimic
fail | 6171404 | 2021-06-14 10:00:44 | 2021-06-14 10:19:14 | 2021-06-14 10:21:13 | 0:01:59 | 0 | | smithi | master | centos | 8.2 | rados/cephadm/thrash/0-distro/centos_8.2_kubic_stable | — |
Failure Reason: list index out of range
fail | 6171405 | 2021-06-14 10:00:45 | 2021-06-14 10:19:12 | 2021-06-14 11:12:31 | 0:53:19 | 0:44:34 | 0:08:45 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/mon-config-keys msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: Command crashed: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=84f670fe9103bd26db78242e88b91adcb76fa5c6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_config_key.py'
fail | 6171406 | 2021-06-14 10:00:46 | 2021-06-14 10:19:22 | 2021-06-14 10:47:27 | 0:28:05 | 0:16:50 | 0:11:15 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2} | 2 |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 6171407 | 2021-06-14 10:00:47 | 2021-06-14 10:20:23 | 2021-06-14 22:30:12 | 12:09:49 | | | smithi | master | ubuntu | 20.04 | rados/perf/{ceph objectstore/bluestore-bitmap openstack settings/optimized ubuntu_latest workloads/cosbench_64K_read_write} | 1 |
Failure Reason: hit max job timeout
fail | 6171408 | 2021-06-14 10:00:48 | 2021-06-14 10:20:23 | 2021-06-14 10:56:37 | 0:36:14 | 0:24:10 | 0:12:04 | smithi | master | ubuntu | 20.04 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
fail | 6171409 | 2021-06-14 10:00:49 | 2021-06-14 10:20:35 | 2021-06-14 10:22:34 | 0:01:59 | 0 | | smithi | master | ubuntu | 20.04 | rados/cephadm/thrash/0-distro/ubuntu_20.04_kubic_stable | — |
Failure Reason: list index out of range
dead | 6171410 | 2021-06-14 10:00:50 | 2021-06-14 10:20:33 | 2021-06-14 22:30:29 | 12:09:56 | | | smithi | master | ubuntu | 20.04 | rados/perf/{ceph objectstore/bluestore-comp openstack settings/optimized ubuntu_latest workloads/cosbench_64K_write} | 1 |
Failure Reason: hit max job timeout
fail | 6171411 | 2021-06-14 10:00:51 | 2021-06-14 10:20:44 | 2021-06-14 10:35:39 | 0:14:55 | 0:05:30 | 0:09:25 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_testing task/test_cephadm_repos} | 1 |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi052 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=84f670fe9103bd26db78242e88b91adcb76fa5c6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
fail | 6171412 | 2021-06-14 10:00:52 | 2021-06-14 10:20:46 | 2021-06-14 10:22:45 | 0:01:59 | 0 | | smithi | master | ubuntu | 20.04 | rados/cephadm/thrash/0-distro/ubuntu_20.04_kubic_testing | — |
Failure Reason: list index out of range
fail | 6171413 | 2021-06-14 10:00:53 | 2021-06-14 10:20:44 | 2021-06-14 10:49:06 | 0:28:22 | 0:16:42 | 0:11:40 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2} | 2 |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 6171414 | 2021-06-14 10:00:54 | 2021-06-14 10:22:14 | 2021-06-14 10:28:02 | 0:05:48 | | | smithi | master | ubuntu | 20.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} | 4 |
Failure Reason: Error reimaging machines: Failed to power on smithi095
fail | 6171415 | 2021-06-14 10:00:55 | 2021-06-14 10:23:35 | 2021-06-14 11:00:00 | 0:36:25 | 0:24:14 | 0:12:11 | smithi | master | ubuntu | 20.04 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
fail | 6171416 | 2021-06-14 10:00:56 | 2021-06-14 10:24:07 | 2021-06-14 10:26:06 | 0:01:59 | 0 | | smithi | master | centos | 8.2 | rados/cephadm/thrash/0-distro/centos_8.2_kubic_stable | — |
Failure Reason: list index out of range