Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 5625032 2020-11-15 03:36:27 2020-11-15 03:41:43 2020-11-15 04:21:43 0:40:00 0:02:46 0:37:14 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{ubuntu_18.04_podman} fixed-2 mon_election/classic} 2
Failure Reason:

Command failed on smithi203 with status 5: 'sudo systemctl stop ceph-fc3b3238-26f9-11eb-a2b0-001a4aab830c@mon.a'

pass 5625033 2020-11-15 03:36:28 2020-11-15 03:43:22 2020-11-15 03:59:22 0:16:00 0:06:51 0:09:09 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/classic task/test_adoption} 1
pass 5625034 2020-11-15 03:36:29 2020-11-15 03:43:23 2020-11-15 04:11:22 0:27:59 0:20:03 0:07:56 smithi master centos 8.1 rados/cephadm/with-work/{distro/centos_latest fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
dead 5625035 2020-11-15 03:36:30 2020-11-15 03:43:23 2020-11-15 15:45:52 12:02:29 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/valgrind} 2
fail 5625036 2020-11-15 03:36:30 2020-11-15 03:43:23 2020-11-15 07:31:27 3:48:04 3:10:36 0:37:28 smithi master centos 8.1 rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/mon-delay msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/one workloads/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi190 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 5625037 2020-11-15 03:36:31 2020-11-15 03:45:13 2020-11-15 04:05:13 0:20:00 0:12:19 0:07:41 smithi master centos 8.1 rados/cephadm/workunits/{distro/centos_latest mon_election/connectivity task/test_cephadm} 1
pass 5625038 2020-11-15 03:36:32 2020-11-15 03:45:20 2020-11-15 04:15:20 0:30:00 0:19:21 0:10:39 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{ubuntu_18.04_podman} fixed-2 mon_election/connectivity} 2
pass 5625039 2020-11-15 03:36:33 2020-11-15 03:45:41 2020-11-15 04:17:41 0:32:00 0:24:04 0:07:56 smithi master rhel 8.1 rados/cephadm/with-work/{distro/rhel_latest fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
fail 5625040 2020-11-15 03:36:33 2020-11-15 03:46:51 2020-11-15 05:22:53 1:36:02 0:25:07 1:10:55 smithi master centos 8.1 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi073 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 5625041 2020-11-15 03:36:34 2020-11-15 03:46:51 2020-11-15 07:38:55 3:52:04 3:19:01 0:33:03 smithi master rhel 8.1 rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/sync workloads/rados_mon_workunits} 2
Failure Reason:

Command failed (workunit test mon/caps.sh) on smithi016 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/caps.sh'

pass 5625042 2020-11-15 03:36:35 2020-11-15 03:47:22 2020-11-15 04:11:21 0:23:59 0:15:53 0:08:06 smithi master centos 8.0 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{centos_8.0} fixed-2 mon_election/classic} 2
pass 5625043 2020-11-15 03:36:36 2020-11-15 03:47:38 2020-11-15 04:41:38 0:54:00 0:22:24 0:31:36 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
pass 5625044 2020-11-15 03:36:36 2020-11-15 03:48:30 2020-11-15 04:28:30 0:40:00 0:29:53 0:10:07 smithi master rhel 8.1 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/snaps-few-objects} 2
pass 5625045 2020-11-15 03:36:37 2020-11-15 03:48:31 2020-11-15 04:38:32 0:50:01 0:27:07 0:22:54 smithi master centos 8.1 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 5625046 2020-11-15 03:36:38 2020-11-15 03:48:53 2020-11-15 04:34:53 0:46:00 0:37:04 0:08:56 smithi master centos 8.1 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{centos_latest} fixed-2 mon_election/connectivity} 2
pass 5625047 2020-11-15 03:36:39 2020-11-15 03:50:11 2020-11-15 04:06:10 0:15:59 0:06:42 0:09:17 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_adoption} 1
pass 5625048 2020-11-15 03:36:39 2020-11-15 03:52:15 2020-11-15 04:36:15 0:44:00 0:21:24 0:22:36 smithi master centos 8.0 rados/cephadm/with-work/{distro/centos_8.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} 2
fail 5625049 2020-11-15 03:36:40 2020-11-15 03:52:15 2020-11-15 04:30:14 0:37:59 0:30:57 0:07:02 smithi master rhel 8.1 rados/standalone/{mon_election/connectivity supported-random-distro$/{rhel_8} workloads/misc} 1
Failure Reason:

Command failed (workunit test misc/ver-health.sh) on smithi105 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/misc/ver-health.sh'

pass 5625050 2020-11-15 03:36:41 2020-11-15 03:52:47 2020-11-15 04:24:47 0:32:00 0:24:09 0:07:51 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/cache-pool-snaps-readproxy} 2
pass 5625051 2020-11-15 03:36:42 2020-11-15 03:52:53 2020-11-15 08:00:58 4:08:05 3:19:51 0:48:14 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/valgrind} 2
pass 5625052 2020-11-15 03:36:42 2020-11-15 03:54:40 2020-11-15 06:28:42 2:34:02 1:42:34 0:51:28 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/radosbench} 2
fail 5625053 2020-11-15 03:36:43 2020-11-15 03:55:35 2020-11-15 04:19:34 0:23:59 0:06:56 0:17:03 smithi master rhel 8.1 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{rhel_latest} fixed-2 mon_election/classic} 2
Failure Reason:

Command failed on smithi007 with status 5: 'sudo systemctl stop ceph-94a9f852-26f9-11eb-a2b0-001a4aab830c@mon.a'

pass 5625054 2020-11-15 03:36:44 2020-11-15 03:59:59 2020-11-15 04:41:59 0:42:00 0:24:55 0:17:05 smithi master rhel 8.0 rados/cephadm/with-work/{distro/rhel_8.0 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} 2
fail 5625055 2020-11-15 03:36:45 2020-11-15 04:04:38 2020-11-15 04:44:38 0:40:00 0:18:55 0:21:05 smithi master rhel 8.1 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_latest} fixed-2 mon_election/connectivity} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5625056 2020-11-15 03:36:46 2020-11-15 04:05:02 2020-11-15 04:37:01 0:31:59 0:21:35 0:10:24 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
fail 5625057 2020-11-15 03:36:46 2020-11-15 04:05:14 2020-11-15 04:55:14 0:50:00 0:30:15 0:19:45 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi131 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 5625058 2020-11-15 03:36:47 2020-11-15 04:05:28 2020-11-15 04:39:28 0:34:00 0:12:02 0:21:58 smithi master ubuntu 18.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} 2
Failure Reason:

Command failed (workunit test rados/test_python.sh) on smithi117 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh'

fail 5625059 2020-11-15 03:36:48 2020-11-15 04:06:47 2020-11-15 04:40:47 0:34:00 0:24:46 0:09:14 smithi master centos 8.1 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi087 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 5625060 2020-11-15 03:36:49 2020-11-15 04:07:35 2020-11-15 04:23:35 0:16:00 0:02:49 0:13:11 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{ubuntu_18.04_podman} fixed-2 mon_election/classic} 2
Failure Reason:

Command failed on smithi196 with status 5: 'sudo systemctl stop ceph-3731e2e2-26fa-11eb-a2b0-001a4aab830c@mon.a'

fail 5625061 2020-11-15 03:36:49 2020-11-15 04:08:54 2020-11-15 07:38:58 3:30:04 3:15:52 0:14:12 smithi master centos 8.1 rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_mon_workunits} 2
Failure Reason:

Command failed (workunit test mon/caps.sh) on smithi104 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/caps.sh'

pass 5625062 2020-11-15 03:36:50 2020-11-15 04:09:19 2020-11-15 04:49:19 0:40:00 0:20:46 0:19:14 smithi master centos 8.0 rados/cephadm/with-work/{distro/centos_8.0 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} 2
fail 5625063 2020-11-15 03:36:51 2020-11-15 04:09:30 2020-11-15 04:27:30 0:18:00 0:07:53 0:10:07 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi103 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c010b3a71afd356aa552f43d5231dfdfbab53b68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 5625064 2020-11-15 03:36:52 2020-11-15 04:09:42 2020-11-15 04:51:42 0:42:00 0:17:32 0:24:28 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{ubuntu_18.04} fixed-2 mon_election/connectivity} 2
fail 5625065 2020-11-15 03:36:53 2020-11-15 04:12:00 2020-11-15 05:12:00 1:00:00 0:26:13 0:33:47 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

"2020-11-15T05:00:31.568135+0000 osd.1 (osd.1) 243 : cluster [ERR] scrub 59.11 59:8b3248e7:test-rados-api-smithi087-39127-74::foo:24 : size 0 != clone_size 10" in cluster log

pass 5625066 2020-11-15 03:36:53 2020-11-15 04:12:00 2020-11-15 05:10:00 0:58:00 0:24:43 0:33:17 smithi master rhel 8.0 rados/cephadm/with-work/{distro/rhel_8.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2