Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 5626376 2020-11-15 16:17:10 2020-11-15 16:19:16 2020-11-15 16:29:15 0:09:59 0:01:59 0:08:00 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{ubuntu_18.04_podman} fixed-2 mon_election/classic} 2
Failure Reason:

Command failed on smithi012 with status 5: 'sudo systemctl stop ceph-65f00fa4-275f-11eb-a2b0-001a4aab830c@mon.a'

pass 5626377 2020-11-15 16:17:10 2020-11-15 16:19:16 2020-11-15 17:21:16 1:02:00 0:54:05 0:07:55 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/valgrind} 2
pass 5626378 2020-11-15 16:17:11 2020-11-15 16:19:16 2020-11-15 16:51:16 0:32:00 0:25:27 0:06:33 smithi master centos 8.1 rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/mon-delay msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/one workloads/rados_api_tests} 2
fail 5626379 2020-11-15 16:17:12 2020-11-15 16:19:59 2020-11-15 16:51:59 0:32:00 0:23:59 0:08:01 smithi master centos 8.1 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi146 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 5626380 2020-11-15 16:17:13 2020-11-15 16:21:10 2020-11-15 20:01:14 3:40:04 3:21:21 0:18:43 smithi master rhel 8.1 rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/sync workloads/rados_mon_workunits} 2
Failure Reason:

Command failed (workunit test mon/caps.sh) on smithi203 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/caps.sh'

fail 5626381 2020-11-15 16:17:13 2020-11-15 16:21:11 2020-11-15 16:55:10 0:33:59 0:28:33 0:05:26 smithi master rhel 8.1 rados/standalone/{mon_election/connectivity supported-random-distro$/{rhel_8} workloads/misc} 1
Failure Reason:

Command failed (workunit test misc/ver-health.sh) on smithi170 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/misc/ver-health.sh'

fail 5626382 2020-11-15 16:17:14 2020-11-15 16:21:11 2020-11-15 16:41:10 0:19:59 0:07:02 0:12:57 smithi master rhel 8.1 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{rhel_latest} fixed-2 mon_election/classic} 2
Failure Reason:

Command failed on smithi201 with status 5: 'sudo systemctl stop ceph-2019ff1a-2761-11eb-a2b0-001a4aab830c@mon.a'

fail 5626383 2020-11-15 16:17:15 2020-11-15 16:21:11 2020-11-15 16:47:10 0:25:59 0:06:42 0:19:17 smithi master rhel 8.1 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_latest} fixed-2 mon_election/connectivity} 2
Failure Reason:

Command failed on smithi174 with status 5: 'sudo systemctl stop ceph-f2d4429e-2761-11eb-a2b0-001a4aab830c@mon.a'

fail 5626384 2020-11-15 16:17:15 2020-11-15 16:22:51 2020-11-15 17:04:51 0:42:00 0:31:12 0:10:48 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi156 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 5626385 2020-11-15 16:17:16 2020-11-15 16:22:51 2020-11-15 16:40:50 0:17:59 0:12:00 0:05:59 smithi master ubuntu 18.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} 2
fail 5626386 2020-11-15 16:17:17 2020-11-15 16:23:45 2020-11-15 17:03:45 0:40:00 0:23:31 0:16:29 smithi master centos 8.1 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi187 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 5626387 2020-11-15 16:17:18 2020-11-15 16:24:33 2020-11-15 16:36:33 0:12:00 0:01:57 0:10:03 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{ubuntu_18.04_podman} fixed-2 mon_election/classic} 2
Failure Reason:

Command failed on smithi043 with status 5: 'sudo systemctl stop ceph-6b0510c4-2760-11eb-a2b0-001a4aab830c@mon.a'

fail 5626388 2020-11-15 16:17:18 2020-11-15 16:24:34 2020-11-15 19:46:37 3:22:03 3:13:42 0:08:21 smithi master centos 8.1 rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_mon_workunits} 2
Failure Reason:

Command failed (workunit test mon/caps.sh) on smithi118 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/caps.sh'

fail 5626389 2020-11-15 16:17:19 2020-11-15 16:24:33 2020-11-15 16:38:33 0:14:00 0:07:00 0:07:00 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi059 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 5626390 2020-11-15 16:17:20 2020-11-15 16:24:33 2020-11-15 16:58:33 0:34:00 0:22:58 0:11:02 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/rados_api_tests} 2