User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2021-05-03 16:25:32 | 2021-05-03 17:02:29 | 2021-05-03 18:02:57 | 1:00:28 | rados | wip-yuri-testing-2021-04-29-1033-octopus | smithi | 0e9f14e | 3 | 20 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6092796 | 2021-05-03 16:26:48 | 2021-05-03 17:02:29 | 2021-05-03 17:38:11 | 0:35:42 | 0:24:59 | 0:10:43 | smithi | master | centos | 8.1 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/filestore-xfs supported-random-distro$/{centos_latest} tasks/dashboard} | 2 |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
fail | 6092797 | 2021-05-03 16:26:49 | 2021-05-03 17:03:19 | 2021-05-03 17:30:31 | 0:27:12 | 0:17:21 | 0:09:51 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2} | 2 |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6092798 | 2021-05-03 16:26:50 | 2021-05-03 17:03:19 | 2021-05-03 17:43:31 | 0:40:12 | 0:32:54 | 0:07:18 | smithi | master | rhel | 8.2 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/off msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} | 2 |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi042 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e9f14e1cb74cfb996312a3e4394b9b121669cc3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 6092799 | 2021-05-03 16:26:51 | 2021-05-03 17:04:30 | 2021-05-03 17:32:27 | 0:27:57 | 0:17:00 | 0:10:57 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2} | 2 |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6092800 | 2021-05-03 16:26:52 | 2021-05-03 17:04:40 | 2021-05-03 17:43:52 | 0:39:12 | 0:32:17 | 0:06:55 | smithi | master | rhel | 8.2 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-lz4 supported-random-distro$/{rhel_latest} tasks/dashboard} | 2 |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
fail | 6092801 | 2021-05-03 16:26:53 | 2021-05-03 17:05:00 | 2021-05-03 17:19:45 | 0:14:45 | 0:06:31 | 0:08:14 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} | 1 |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e9f14e1cb74cfb996312a3e4394b9b121669cc3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 6092802 | 2021-05-03 16:26:54 | 2021-05-03 17:05:01 | 2021-05-03 17:27:36 | 0:22:35 | 0:11:54 | 0:10:41 | smithi | master | centos | 8.1 | rados/cephadm/smoke-roleless/{distro/centos_latest start} | 2 |
fail | 6092803 | 2021-05-03 16:26:55 | 2021-05-03 17:06:11 | 2021-05-03 17:29:33 | 0:23:22 | 0:07:56 | 0:15:26 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async-v1only start tasks/rados_python} | 2 |
Failure Reason: Command failed on smithi072 with status 5: 'sudo systemctl stop ceph-ce028aea-ac34-11eb-8224-001a4aab830c@mon.a'
fail | 6092804 | 2021-05-03 16:26:56 | 2021-05-03 17:07:34 | 2021-05-03 17:46:30 | 0:38:56 | 0:31:40 | 0:07:16 | smithi | master | rhel | 8.2 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-snappy supported-random-distro$/{rhel_latest} tasks/dashboard} | 2 |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
fail | 6092805 | 2021-05-03 16:26:56 | 2021-05-03 17:07:34 | 2021-05-03 17:22:50 | 0:15:16 | 0:06:03 | 0:09:13 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} | 1 |
Failure Reason: Command failed on smithi064 with status 5: 'sudo systemctl stop ceph-0e1de68e-ac34-11eb-8224-001a4aab830c@mon.a'
fail | 6092806 | 2021-05-03 16:26:57 | 2021-05-03 17:07:34 | 2021-05-03 17:22:34 | 0:15:00 | 0:06:30 | 0:08:30 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} | 1 |
Failure Reason: Command failed (workunit test cephadm/test_adoption.sh) on smithi003 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e9f14e1cb74cfb996312a3e4394b9b121669cc3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'
fail | 6092807 | 2021-05-03 16:26:58 | 2021-05-03 17:07:34 | 2021-05-03 17:34:41 | 0:27:07 | 0:16:31 | 0:10:36 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2} | 2 |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6092808 | 2021-05-03 16:26:59 | 2021-05-03 17:07:35 | 2021-05-03 17:42:34 | 0:34:59 | 0:25:08 | 0:09:51 | smithi | master | centos | 8.1 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zstd supported-random-distro$/{centos_latest} tasks/dashboard} | 2 |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
fail | 6092809 | 2021-05-03 16:27:00 | 2021-05-03 17:07:35 | 2021-05-03 17:35:47 | 0:28:12 | 0:16:18 | 0:11:54 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2} | 2 |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6092810 | 2021-05-03 16:27:01 | 2021-05-03 17:08:15 | 2021-05-03 17:47:18 | 0:39:03 | 0:32:54 | 0:06:09 | smithi | master | rhel | 8.2 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{rhel_latest} tasks/dashboard} | 2 |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
pass | 6092811 | 2021-05-03 16:27:02 | 2021-05-03 17:08:56 | 2021-05-03 18:02:57 | 0:54:01 | 0:43:39 | 0:10:22 | smithi | master | centos | 8.1 | rados/standalone/{supported-random-distro$/{centos_latest} workloads/mon} | 1 |
fail | 6092812 | 2021-05-03 16:27:03 | 2021-05-03 17:09:36 | 2021-05-03 17:25:05 | 0:15:29 | 0:06:30 | 0:08:59 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} | 1 |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi077 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e9f14e1cb74cfb996312a3e4394b9b121669cc3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
fail | 6092813 | 2021-05-03 16:27:04 | 2021-05-03 17:09:36 | 2021-05-03 17:27:10 | 0:17:34 | 0:07:33 | 0:10:01 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async start tasks/rados_api_tests} | 2 |
Failure Reason: Command failed on smithi148 with status 5: 'sudo systemctl stop ceph-ad1a1dc0-ac34-11eb-8224-001a4aab830c@mon.a'
pass | 6092814 | 2021-05-03 16:27:05 | 2021-05-03 17:09:37 | 2021-05-03 17:31:41 | 0:22:04 | 0:11:59 | 0:10:05 | smithi | master | centos | 8.1 | rados/cephadm/smoke-roleless/{distro/centos_latest start} | 2 |
fail | 6092815 | 2021-05-03 16:27:06 | 2021-05-03 17:10:07 | 2021-05-03 17:50:02 | 0:39:55 | 0:31:36 | 0:08:19 | smithi | master | rhel | 8.2 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-low-osd-mem-target supported-random-distro$/{rhel_latest} tasks/dashboard} | 2 |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
fail | 6092816 | 2021-05-03 16:27:07 | 2021-05-03 17:11:07 | 2021-05-03 17:26:30 | 0:15:23 | 0:05:55 | 0:09:28 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} | 1 |
Failure Reason: Command failed on smithi062 with status 5: 'sudo systemctl stop ceph-8d34a372-ac34-11eb-8224-001a4aab830c@mon.a'
fail | 6092817 | 2021-05-03 16:27:08 | 2021-05-03 17:11:08 | 2021-05-03 17:50:53 | 0:39:45 | 0:31:53 | 0:07:52 | smithi | master | rhel | 8.2 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-stupid supported-random-distro$/{rhel_latest} tasks/dashboard} | 2 |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
fail | 6092818 | 2021-05-03 16:27:09 | 2021-05-03 17:11:58 | 2021-05-03 17:27:56 | 0:15:58 | 0:06:28 | 0:09:30 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} | 1 |
Failure Reason: Command failed (workunit test cephadm/test_adoption.sh) on smithi199 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e9f14e1cb74cfb996312a3e4394b9b121669cc3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'