User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
ideepika | 2021-06-15 06:48:17 | 2021-06-15 06:51:04 | 2021-06-15 19:08:50 | 12:17:46 | rados | wip-yuri7-testing-2021-06-08-0747-octopus | smithi | 8d06216 | 4 | 10 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6173647 | 2021-06-15 06:49:27 | 2021-06-15 06:51:04 | 2021-06-15 07:22:11 | 0:31:07 | 0:16:28 | 0:14:39 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds |
fail | 6173648 | 2021-06-15 06:49:28 | 2021-06-15 06:55:04 | 2021-06-15 07:25:03 | 0:29:59 | 0:19:15 | 0:10:44 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable task/test_orch_cli} | 1 | |
Failure Reason: Test failure: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS) |
pass | 6173649 | 2021-06-15 06:49:29 | 2021-06-15 06:55:45 | 2021-06-15 07:13:21 | 0:17:36 | 0:08:27 | 0:09:09 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_testing task/test_adoption} | 1 | |
fail | 6173650 | 2021-06-15 06:49:30 | 2021-06-15 06:55:45 | 2021-06-15 07:10:43 | 0:14:58 | 0:05:38 | 0:09:20 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable task/test_cephadm_repos} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi005 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c0a29b879312e8e71dc757e2134195f987ce8f64 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh' |
fail | 6173651 | 2021-06-15 06:49:31 | 2021-06-15 06:55:45 | 2021-06-15 07:23:41 | 0:27:56 | 0:17:11 | 0:10:45 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds |
pass | 6173652 | 2021-06-15 06:49:32 | 2021-06-15 06:56:36 | 2021-06-15 07:24:35 | 0:27:59 | 0:17:56 | 0:10:03 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_testing task/test_orch_cli} | 1 | |
fail | 6173653 | 2021-06-15 06:49:33 | 2021-06-15 06:57:06 | 2021-06-15 07:15:54 | 0:18:48 | 0:04:08 | 0:14:40 | smithi | master | ubuntu | 20.04 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack start} 1-install/mimic 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} | 4 | |
Failure Reason: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F20.04%2Fx86_64&ref=mimic |
fail | 6173654 | 2021-06-15 06:49:35 | 2021-06-15 06:58:16 | 2021-06-15 07:25:55 | 0:27:39 | 0:16:52 | 0:10:47 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds |
pass | 6173655 | 2021-06-15 06:49:36 | 2021-06-15 06:58:37 | 2021-06-15 07:22:11 | 0:23:34 | 0:14:11 | 0:09:23 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable task/test_cephadm} | 1 | |
dead | 6173656 | 2021-06-15 06:49:37 | 2021-06-15 06:58:37 | 2021-06-15 19:08:18 | 12:09:41 | | | smithi | master | ubuntu | 20.04 | rados/perf/{ceph objectstore/bluestore-bitmap openstack settings/optimized ubuntu_latest workloads/cosbench_64K_read_write} | 1 |
Failure Reason: hit max job timeout |
dead | 6173657 | 2021-06-15 06:49:38 | 2021-06-15 07:00:07 | 2021-06-15 19:08:50 | 12:08:43 | | | smithi | master | ubuntu | 20.04 | rados/perf/{ceph objectstore/bluestore-comp openstack settings/optimized ubuntu_latest workloads/cosbench_64K_write} | 1 |
Failure Reason: hit max job timeout |
fail | 6173658 | 2021-06-15 06:49:39 | 2021-06-15 07:00:08 | 2021-06-15 07:15:23 | 0:15:15 | 0:05:32 | 0:09:43 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_testing task/test_cephadm_repos} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi109 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c0a29b879312e8e71dc757e2134195f987ce8f64 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh' |
pass | 6173659 | 2021-06-15 06:49:40 | 2021-06-15 07:00:08 | 2021-06-15 07:17:56 | 0:17:48 | 0:08:09 | 0:09:39 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable task/test_adoption} | 1 | |
fail | 6173660 | 2021-06-15 06:49:41 | 2021-06-15 07:00:18 | 2021-06-15 07:28:16 | 0:27:58 | 0:17:36 | 0:10:22 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds |
fail | 6173661 | 2021-06-15 06:49:43 | 2021-06-15 07:00:48 | 2021-06-15 07:19:01 | 0:18:13 | 0:04:08 | 0:14:05 | smithi | master | ubuntu | 20.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} | 4 | |
Failure Reason: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F20.04%2Fx86_64&ref=nautilus |
fail | 6173662 | 2021-06-15 06:49:44 | 2021-06-15 07:00:59 | 2021-06-15 07:36:37 | 0:35:38 | 0:24:45 | 0:10:53 | smithi | master | ubuntu | 20.04 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 | |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest) |