Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 5709662 2020-12-15 06:47:23 2020-12-19 01:41:54 2020-12-19 02:21:53 0:39:59 0:33:31 0:06:28 smithi master ubuntu 18.04 rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} 2
fail 5709663 2020-12-15 06:47:24 2020-12-19 01:41:56 2020-12-19 02:07:58 0:26:02 0:16:56 0:09:06 smithi master centos 8.2 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{centos_latest} fixed-2 mon_election/classic} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

dead 5709664 2020-12-15 06:47:25 2020-12-19 01:41:56 2020-12-19 13:44:22 12:02:26 smithi master centos 8.2 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/rados_api_tests} 2
fail 5709665 2020-12-15 06:47:25 2020-12-19 01:41:56 2020-12-19 01:55:56 0:14:00 0:08:04 0:05:56 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi186 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c8682306c75836c231f2bd9f364a5f1c5a0c2247 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 5709666 2020-12-15 06:47:26 2020-12-19 01:42:13 2020-12-19 02:10:12 0:27:59 0:16:55 0:11:04 smithi master centos 8.2 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{centos_latest} fixed-2 mon_election/connectivity} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5709667 2020-12-15 06:47:27 2020-12-19 01:43:19 2020-12-19 02:07:18 0:23:59 0:17:10 0:06:49 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{ubuntu_18.04} fixed-2 mon_election/classic} 2
fail 5709668 2020-12-15 06:47:28 2020-12-19 01:43:58 2020-12-19 02:01:58 0:18:00 0:11:35 0:06:25 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

fail 5709669 2020-12-15 06:47:28 2020-12-19 01:44:18 2020-12-19 02:10:18 0:26:00 0:16:53 0:09:07 smithi master centos 8.2 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{centos_latest} fixed-2 mon_election/connectivity} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5709670 2020-12-15 06:47:29 2020-12-19 01:44:44 2020-12-19 01:58:43 0:13:59 0:08:00 0:05:59 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi168 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c8682306c75836c231f2bd9f364a5f1c5a0c2247 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 5709671 2020-12-15 06:47:30 2020-12-19 01:44:44 2020-12-19 02:16:44 0:32:00 0:24:31 0:07:29 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

"2020-12-19T02:05:11.371986+0000 osd.6 (osd.6) 245 : cluster [ERR] 53.11 shard 6 soid 53:8a4b46d9:test-rados-api-smithi166-25682-74::foo:23 : candidate size 10 info size 0 mismatch" in cluster log

fail 5709672 2020-12-15 06:47:31 2020-12-19 01:45:18 2020-12-19 02:09:18 0:24:00 0:16:27 0:07:33 smithi master rhel 8.3 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_latest} fixed-2 mon_election/classic} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

dead 5709673 2020-12-15 06:47:32 2020-12-19 01:46:45 2020-12-19 13:49:19 12:02:34 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
pass 5709674 2020-12-15 06:47:32 2020-12-19 01:46:45 2020-12-19 02:04:45 0:18:00 0:09:53 0:08:07 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} 1
pass 5709675 2020-12-15 06:47:33 2020-12-19 01:47:43 2020-12-19 02:11:43 0:24:00 0:16:56 0:07:04 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/redirect_set_object} 2
pass 5709676 2020-12-15 06:47:34 2020-12-19 01:48:24 2020-12-19 02:12:23 0:23:59 0:17:03 0:06:56 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{ubuntu_18.04} fixed-2 mon_election/connectivity} 2
fail 5709677 2020-12-15 06:47:35 2020-12-19 01:48:24 2020-12-19 02:06:23 0:17:59 0:11:40 0:06:19 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/classic task/test_orch_cli} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

fail 5709678 2020-12-15 06:47:35 2020-12-19 01:48:24 2020-12-19 02:24:24 0:36:00 0:25:25 0:10:35 smithi master centos 8.2 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

"2020-12-19T02:14:50.612179+0000 osd.7 (osd.7) 294 : cluster [ERR] 57.e shard 4 soid 57:7690f5b0:test-rados-api-smithi150-43979-74::foo:23 : candidate size 10 info size 0 mismatch" in cluster log

pass 5709679 2020-12-15 06:47:36 2020-12-19 01:48:24 2020-12-19 02:08:24 0:20:00 0:13:01 0:06:59 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/set-chunks-read} 2
fail 5709680 2020-12-15 06:47:37 2020-12-19 01:48:24 2020-12-19 02:02:24 0:14:00 0:08:03 0:05:57 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/classic task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi036 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c8682306c75836c231f2bd9f364a5f1c5a0c2247 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

pass 5709681 2020-12-15 06:47:38 2020-12-19 01:48:26 2020-12-19 02:12:26 0:24:00 0:17:11 0:06:49 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{ubuntu_18.04} fixed-2 mon_election/classic} 2
pass 5709682 2020-12-15 06:47:39 2020-12-19 01:48:27 2020-12-19 02:20:27 0:32:00 0:25:50 0:06:10 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps} 2
pass 5709683 2020-12-15 06:47:39 2020-12-19 01:49:19 2020-12-19 02:25:19 0:36:00 0:26:44 0:09:16 smithi master centos 8.2 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/rados_api_tests} 2
fail 5709684 2020-12-15 06:47:40 2020-12-19 01:49:53 2020-12-19 02:09:53 0:20:00 0:10:31 0:09:29 smithi master centos 8.2 rados/cephadm/workunits/{distro/centos_latest mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi184 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c8682306c75836c231f2bd9f364a5f1c5a0c2247 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 5709685 2020-12-15 06:47:41 2020-12-19 01:49:53 2020-12-19 02:15:53 0:26:00 0:19:28 0:06:32 smithi master rhel 8.0 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_8.0} fixed-2 mon_election/connectivity} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5709686 2020-12-15 06:47:41 2020-12-19 01:49:58 2020-12-19 04:38:00 2:48:02 2:38:10 0:09:52 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_api_tests validater/valgrind} 2