Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 5682573 2020-12-05 08:42:21 2020-12-05 08:44:14 2020-12-05 12:20:18 3:36:04 3:27:40 0:08:24 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados tasks/rados_cls_all validater/valgrind} 2
fail 5682574 2020-12-05 08:42:22 2020-12-05 08:44:14 2020-12-05 09:14:13 0:29:59 0:18:25 0:11:34 smithi master rhel 8.0 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{rhel_8.0} fixed-2 mon_election/connectivity} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds
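
This message comes from teuthology's bounded-polling helper (safe_while in teuthology/contextutil.py): the upgrade task polled for convergence once a second and gave up after 180 attempts. A simplified Python stand-in for that helper, not the actual implementation:

    import time

    class MaxWhileTries(Exception):
        """Raised when a polled condition never comes true."""

    def wait_until(check, sleep=1, tries=180):
        # Poll 'check' up to 'tries' times, sleeping between attempts;
        # 180 tries at 1s intervals matches the failure message above.
        for _ in range(tries):
            if check():
                return
            time.sleep(sleep)
        raise MaxWhileTries(
            f"reached maximum tries ({tries}) "
            f"after waiting for {tries * sleep} seconds")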

fail 5682575 2020-12-05 08:42:22 2020-12-05 08:45:37 2020-12-05 09:05:37 0:20:00 0:10:52 0:09:08 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/classic task/test_orch_cli} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
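
This is a single Python test case from the NFS suite under qa/tasks/cephfs in ceph.git. One way to rerun just that case outside teuthology is the developer-mode harness vstart_runner.py; a sketch under the assumption of a built tree with a vstart cluster running, executed from the build directory:

    import subprocess

    # Illustrative only: vstart_runner.py runs tasks.cephfs.* suites
    # against a local vstart cluster instead of a teuthology node.
    subprocess.run(
        ["python3", "../qa/tasks/vstart_runner.py",
         "tasks.cephfs.test_nfs.TestNFS.test_cluster_info"],
        check=True)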

pass 5682576 2020-12-05 08:42:23 2020-12-05 08:45:43 2020-12-05 09:29:43 0:44:00 0:34:40 0:09:20 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/mon_recovery validater/valgrind} 2
fail 5682577 2020-12-05 08:42:24 2020-12-05 08:45:46 2020-12-05 09:01:45 0:15:59 0:07:21 0:08:38 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/classic task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi083 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0c181d070c46e8266b4307971bc43b2cf1aa276d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'
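
The long command line above is assembled by the workunit task (qa/tasks/workunit.py in ceph.git): make a scratch directory, export the test environment, and run the named script from a clone of the ceph ref under a timeout. An illustrative reconstruction of that assembly, read straight off the failure reason (names simplified; not the real code):

    def workunit_cmd(testdir, ceph_ref, script, timeout="3h", client="0"):
        # Scratch dir and per-client clone, as seen in the command above.
        tmp = f"{testdir}/mnt.{client}/client.{client}/tmp"
        clone = f"{testdir}/clone.client.{client}"
        env = (f"CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF={ceph_ref} "
               f'TESTDIR="{testdir}" CEPH_ARGS="--cluster ceph" '
               f'CEPH_ID="{client}" PATH=$PATH:/usr/sbin '
               f"CEPH_BASE={clone} CEPH_ROOT={clone}")
        return (f"mkdir -p -- {tmp} && cd -- {tmp} && {env} "
                f"adjust-ulimits ceph-coverage {testdir}/archive/coverage "
                f"timeout {timeout} {clone}/qa/workunits/{script}")

    print(workunit_cmd("/home/ubuntu/cephtest",
                       "0c181d070c46e8266b4307971bc43b2cf1aa276d",
                       "cephadm/test_adoption.sh"))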

fail 5682578 2020-12-05 08:42:25 2020-12-05 08:47:41 2020-12-05 09:09:40 0:21:59 0:14:54 0:07:05 smithi master rhel 8.3 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_latest} fixed-2 mon_election/connectivity} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

dead 5682579 2020-12-05 08:42:25 2020-12-05 08:47:41 2020-12-05 20:52:12 12:04:31 11:52:05 0:12:26 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados tasks/rados_cls_all validater/valgrind}
Failure Reason:

psutil.NoSuchProcess process no longer exists (pid=22319)
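
psutil.NoSuchProcess indicates the process teuthology was watching disappeared out from under it, which is why the run is marked dead rather than failed. The exception and its wording are easy to reproduce synthetically:

    import subprocess
    import psutil

    # Spawn a short-lived child, let it exit, then query the dead pid;
    # psutil raises NoSuchProcess with the wording seen above.
    child = subprocess.Popen(["sleep", "0"])
    child.wait()
    try:
        psutil.Process(child.pid)
    except psutil.NoSuchProcess as err:
        print(err)  # e.g. "process no longer exists (pid=12345)"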

fail 5682580 2020-12-05 08:42:26 2020-12-05 08:47:44 2020-12-05 09:11:44 0:24:00 0:14:21 0:09:39 smithi master centos 8.2 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi131 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0c181d070c46e8266b4307971bc43b2cf1aa276d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 5682581 2020-12-05 08:42:27 2020-12-05 08:47:46 2020-12-05 09:17:46 0:30:00 0:18:46 0:11:14 smithi master rhel 8.0 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_8.0} fixed-2 mon_election/classic} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

dead 5682582 2020-12-05 08:42:27 2020-12-05 08:47:56 2020-12-05 20:50:25 12:02:29 11:53:02 0:09:27 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_api_tests validater/valgrind}
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi195 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0c181d070c46e8266b4307971bc43b2cf1aa276d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
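
Status 124 here is GNU timeout's exit code for a command it killed: rados/test.sh is wrapped in 'timeout 6h' (visible in the command line above) and ran out of budget, consistent with the job being marked dead. The exit-code convention is easy to check:

    import subprocess

    # GNU coreutils 'timeout' exits with 124 when it kills the command.
    rc = subprocess.run(["timeout", "1", "sleep", "5"]).returncode
    print(rc)  # 124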

fail 5682583 2020-12-05 08:42:28 2020-12-05 08:50:12 2020-12-05 09:06:11 0:15:59 0:07:15 0:08:44 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi190 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0c181d070c46e8266b4307971bc43b2cf1aa276d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

pass 5682584 2020-12-05 08:42:29 2020-12-05 08:50:12 2020-12-05 09:44:13 0:54:01 0:28:17 0:25:44 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/mon_recovery validater/valgrind} 2
pass 5682585 2020-12-05 08:42:30 2020-12-05 08:50:13 2020-12-05 09:26:12 0:35:59 0:28:53 0:07:06 smithi master centos 8.2 rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
fail 5682586 2020-12-05 08:42:30 2020-12-05 08:50:12 2020-12-05 09:26:12 0:36:00 0:14:30 0:21:30 smithi master centos 8.2 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi186 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0c181d070c46e8266b4307971bc43b2cf1aa276d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

dead 5682587 2020-12-05 08:42:31 2020-12-05 08:50:12 2020-12-05 20:52:42 12:02:30 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/rados_cls_all validater/valgrind} 2