Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 5601998 2020-11-08 10:02:04 2020-11-08 10:02:36 2020-11-08 10:34:36 0:32:00 0:24:51 0:07:09 smithi master centos 8.1 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi118 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7e9cbdbfadc6457d1ec35b466316793c91786920 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 5601999 2020-11-08 10:02:05 2020-11-08 10:02:46 2020-11-08 10:38:46 0:36:00 0:29:13 0:06:47 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/rados_api_tests} 2
fail 5602000 2020-11-08 10:02:06 2020-11-08 10:03:08 2020-11-08 10:23:08 0:20:00 0:13:23 0:06:37 smithi master centos 8.1 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/rados_python} 2
Failure Reason:

Command failed (workunit test rados/test_python.sh) on smithi091 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7e9cbdbfadc6457d1ec35b466316793c91786920 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh'

fail 5602001 2020-11-08 10:02:07 2020-11-08 10:04:28 2020-11-08 10:30:27 0:25:59 0:18:43 0:07:16 smithi master rhel 8.0 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_8.0} fixed-2 mon_election/connectivity} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5602002 2020-11-08 10:02:08 2020-11-08 10:04:28 2020-11-08 10:34:28 0:30:00 0:23:35 0:06:25 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

"2020-11-08T10:26:10.546956+0000 osd.1 (osd.1) 209 : cluster [ERR] scrub 59.1 59:8544ebef:test-rados-api-smithi012-41058-74::foo:24 : size 0 != clone_size 10" in cluster log

fail 5602003 2020-11-08 10:02:08 2020-11-08 10:04:29 2020-11-08 10:36:28 0:31:59 0:24:32 0:07:27 smithi master centos 8.1 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi162 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7e9cbdbfadc6457d1ec35b466316793c91786920 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'