User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
ideepika | 2021-06-11 06:09:22 | 2021-06-11 06:11:29 | 2021-06-11 18:26:31 | 12:15:02 | rados | wip-yuri7-testing-2021-06-08-0747-octopus | smithi | 8d06216 | 6 | 18 | 6 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 6166192 | | 2021-06-11 06:10:31 | 2021-06-11 06:11:29 | 2021-06-11 18:19:46 | 12:08:17 | | | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2} | 2 |
Failure Reason: hit max job timeout
pass | 6166193 | | 2021-06-11 06:10:32 | 2021-06-11 06:11:29 | 2021-06-11 06:39:59 | 0:28:30 | 0:18:55 | 0:09:35 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable task/test_orch_cli} | 1 |
fail | 6166194 | | 2021-06-11 06:10:33 | 2021-06-11 06:12:39 | 2021-06-11 06:28:02 | 0:15:23 | 0:03:50 | 0:11:33 | smithi | master | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable distro/ubuntu_20.04 fixed-2 mode/root msgr/async-v1only start tasks/rados_python} | 2 |
Failure Reason: Command failed on smithi083 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo dnf -y module disable container-tools'"
fail | 6166195 | | 2021-06-11 06:10:34 | 2021-06-11 06:12:41 | 2021-06-11 06:14:41 | 0:02:00 | 0 | | smithi | master | ubuntu | 20.04 | rados/cephadm/thrash/0-distro/ubuntu_20.04_kubic_stable | — |
Failure Reason: list index out of range
fail | 6166196 | | 2021-06-11 06:10:35 | 2021-06-11 06:12:39 | 2021-06-11 06:49:41 | 0:37:02 | 0:25:17 | 0:11:45 | smithi | master | ubuntu | 20.04 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-stupid supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
pass | 6166197 | | 2021-06-11 06:10:36 | 2021-06-11 06:14:20 | 2021-06-11 06:31:41 | 0:17:21 | 0:08:01 | 0:09:20 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_testing task/test_adoption} | 1 |
fail | 6166198 | | 2021-06-11 06:10:36 | 2021-06-11 06:14:20 | 2021-06-11 06:50:21 | 0:36:01 | 0:24:51 | 0:11:10 | smithi | master | ubuntu | 20.04 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/filestore-xfs supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
fail | 6166199 | | 2021-06-11 06:10:37 | 2021-06-11 06:14:42 | 2021-06-11 06:16:41 | 0:01:59 | 0 | | smithi | master | ubuntu | 20.04 | rados/cephadm/thrash/0-distro/ubuntu_20.04_kubic_testing | — |
Failure Reason: list index out of range
fail | 6166200 | | 2021-06-11 06:10:38 | 2021-06-11 06:14:40 | 2021-06-11 06:30:11 | 0:15:31 | 0:05:54 | 0:09:37 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable task/test_cephadm_repos} | 1 |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi068 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3a2d16401d96c169743beb2f35cb7e5b7dbd2a9a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
pass | 6166201 | | 2021-06-11 06:10:39 | 2021-06-11 06:14:41 | 2021-06-11 06:48:34 | 0:33:53 | 0:23:11 | 0:10:42 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/off msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} | 2 |
dead | 6166202 | | 2021-06-11 06:10:40 | 2021-06-11 06:15:11 | 2021-06-11 18:24:27 | 12:09:16 | | | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2} | 2 |
Failure Reason: hit max job timeout
pass | 6166203 | | 2021-06-11 06:10:41 | 2021-06-11 06:15:21 | 2021-06-11 06:42:22 | 0:27:01 | 0:18:00 | 0:09:01 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_testing task/test_orch_cli} | 1 |
fail | 6166204 | | 2021-06-11 06:10:42 | 2021-06-11 06:15:21 | 2021-06-11 06:32:52 | 0:17:31 | 0:04:08 | 0:13:23 | smithi | master | ubuntu | 20.04 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack start} 1-install/mimic 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} | 4 |
Failure Reason: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F20.04%2Fx86_64&ref=mimic
fail | 6166205 | | 2021-06-11 06:10:43 | 2021-06-11 06:15:24 | 2021-06-11 06:17:23 | 0:01:59 | 0 | | smithi | master | centos | 8.2 | rados/cephadm/dashboard/0-distro/centos_8.2_kubic_stable | — |
Failure Reason: list index out of range
fail | 6166206 | | 2021-06-11 06:10:44 | 2021-06-11 06:15:22 | 2021-06-11 09:35:16 | 3:19:54 | 3:09:44 | 0:10:10 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/mon-config-keys msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: Command crashed: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3a2d16401d96c169743beb2f35cb7e5b7dbd2a9a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_config_key.py'
fail | 6166207 | | 2021-06-11 06:10:45 | 2021-06-11 06:15:44 | 2021-06-11 06:17:43 | 0:01:59 | 0 | | smithi | master | centos | 8.2 | rados/cephadm/thrash/0-distro/centos_8.2_kubic_stable | — |
Failure Reason: list index out of range
dead | 6166208 | | 2021-06-11 06:10:46 | 2021-06-11 06:15:42 | 2021-06-11 18:25:05 | 12:09:23 | | | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2} | 2 |
Failure Reason: hit max job timeout
pass | 6166209 | | 2021-06-11 06:10:47 | 2021-06-11 06:15:42 | 2021-06-11 06:39:08 | 0:23:26 | 0:14:05 | 0:09:21 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable task/test_cephadm} | 1 |
dead | 6166210 | | 2021-06-11 06:10:48 | 2021-06-11 06:15:52 | 2021-06-11 18:25:13 | 12:09:21 | | | smithi | master | ubuntu | 20.04 | rados/perf/{ceph objectstore/bluestore-bitmap openstack settings/optimized ubuntu_latest workloads/cosbench_64K_read_write} | 1 |
Failure Reason: hit max job timeout
fail | 6166211 | | 2021-06-11 06:10:49 | 2021-06-11 06:15:53 | 2021-06-11 06:52:01 | 0:36:08 | 0:24:23 | 0:11:45 | smithi | master | ubuntu | 20.04 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
dead | 6166212 | | 2021-06-11 06:10:50 | 2021-06-11 06:16:23 | 2021-06-11 18:26:17 | 12:09:54 | | | smithi | master | ubuntu | 20.04 | rados/perf/{ceph objectstore/bluestore-comp openstack settings/optimized ubuntu_latest workloads/cosbench_64K_write} | 1 |
Failure Reason: hit max job timeout
fail | 6166213 | | 2021-06-11 06:10:51 | 2021-06-11 06:16:25 | 2021-06-11 06:18:24 | 0:01:59 | 0 | | smithi | master | ubuntu | 20.04 | rados/cephadm/thrash/0-distro/ubuntu_20.04_kubic_stable | — |
Failure Reason: list index out of range
fail | 6166214 | | 2021-06-11 06:10:52 | 2021-06-11 06:16:23 | 2021-06-11 06:31:09 | 0:14:46 | 0:05:35 | 0:09:11 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_testing task/test_cephadm_repos} | 1 |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi058 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3a2d16401d96c169743beb2f35cb7e5b7dbd2a9a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
fail | 6166215 | | 2021-06-11 06:10:52 | 2021-06-11 06:16:26 | 2021-06-11 06:18:25 | 0:01:59 | 0 | | smithi | master | ubuntu | 20.04 | rados/cephadm/thrash/0-distro/ubuntu_20.04_kubic_testing | — |
Failure Reason: list index out of range
pass | 6166216 | | 2021-06-11 06:10:53 | 2021-06-11 06:16:23 | 2021-06-11 06:34:31 | 0:18:08 | 0:08:20 | 0:09:48 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable task/test_adoption} | 1 |
dead | 6166217 | | 2021-06-11 06:10:54 | 2021-06-11 06:17:04 | 2021-06-11 18:26:31 | 12:09:27 | | | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2} | 2 |
Failure Reason: hit max job timeout
fail | 6166218 | | 2021-06-11 06:10:55 | 2021-06-11 06:17:04 | 2021-06-11 06:37:30 | 0:20:26 | 0:04:16 | 0:16:10 | smithi | master | ubuntu | 20.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} | 4 |
Failure Reason: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F20.04%2Fx86_64&ref=nautilus
fail | 6166219 | | 2021-06-11 06:10:56 | 2021-06-11 06:19:27 | 2021-06-11 06:21:26 | 0:01:59 | 0 | | smithi | master | centos | 8.2 | rados/cephadm/dashboard/0-distro/centos_8.2_kubic_stable | — |
Failure Reason: list index out of range
fail | 6166220 | | 2021-06-11 06:10:57 | 2021-06-11 06:19:25 | 2021-06-11 06:55:37 | 0:36:12 | 0:24:40 | 0:11:32 | smithi | master | ubuntu | 20.04 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
fail | 6166221 | | 2021-06-11 06:10:58 | 2021-06-11 06:19:27 | 2021-06-11 06:21:26 | 0:01:59 | 0 | | smithi | master | centos | 8.2 | rados/cephadm/thrash/0-distro/centos_8.2_kubic_stable | — |
Failure Reason: list index out of range