Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6299926 2021-07-29 03:29:27 2021-07-29 03:30:18 2021-07-29 03:49:26 0:19:08 0:07:59 0:11:09 smithi master centos 8.3 rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.3_kubic_stable} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi197 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid 9e49a82c-f01f-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.197 --single-host-defaults --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
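Several cephadm jobs in this run fail at this same step. For reference, the bootstrap command from the failure above breaks down as follows. This is an annotated transcript of the logged command, not something to run outside the test node; the image tag, fsid, and monitor IP are specific to this job, and the flag descriptions are general cephadm behavior rather than anything stated in the log:

```shell
# --image:                CI build under test (pushed to quay.ceph.io/ceph-ci)
# -v bootstrap:           verbose bootstrap of a new cluster on this host
# --fsid:                 cluster fsid assigned for this job
# --config / --output-*:  seed config in; generated conf, admin keyring,
#                         and public SSH key written out
# --mon-ip:               monitor address (the smithi node itself)
# --single-host-defaults: relax replication defaults for a one-host cluster
# --skip-admin-label:     do not apply the _admin label to the bootstrap host
sudo /home/ubuntu/cephtest/cephadm \
  --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f \
  -v bootstrap \
  --fsid 9e49a82c-f01f-11eb-8c24-001a4aab830c \
  --config /home/ubuntu/cephtest/seed.ceph.conf \
  --output-config /etc/ceph/ceph.conf \
  --output-keyring /etc/ceph/ceph.client.admin.keyring \
  --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub \
  --mon-ip 172.21.15.197 \
  --single-host-defaults \
  --skip-admin-label
```

The trailing `&& sudo chmod +r /etc/ceph/ceph.client.admin.keyring` in the log only runs if bootstrap succeeds, so status 1 here points at the bootstrap step itself.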

pass 6299927 2021-07-29 03:29:28 2021-07-29 03:30:19 2021-07-29 03:52:54 0:22:35 0:10:46 0:11:49 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} tasks/scrub_test} 2
pass 6299928 2021-07-29 03:29:29 2021-07-29 03:30:19 2021-07-29 04:31:20 1:01:01 0:49:42 0:11:19 smithi master ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mon} 1
dead 6299929 2021-07-29 03:29:30 2021-07-29 03:30:19 2021-07-29 05:02:03 1:31:44 smithi master centos 8.3 rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/classic} 2
pass 6299930 2021-07-29 03:29:30 2021-07-29 03:30:19 2021-07-29 03:47:07 0:16:48 0:08:08 0:08:40 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
fail 6299931 2021-07-29 03:29:31 2021-07-29 03:30:19 2021-07-29 03:49:12 0:18:53 0:11:53 0:07:00 smithi master rhel 8.4 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/filestore-xfs supported-random-distro$/{rhel_8} tasks/prometheus} 2
Failure Reason:

Command failed on smithi122 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6299932 2021-07-29 03:29:32 2021-07-29 03:30:20 2021-07-29 03:51:31 0:21:11 0:12:49 0:08:22 smithi master ubuntu 20.04 rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} 1
fail 6299933 2021-07-29 03:29:33 2021-07-29 03:30:20 2021-07-29 03:47:09 0:16:49 0:08:29 0:08:20 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_adoption} 1
Failure Reason:

Command failed on smithi138 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299934 2021-07-29 03:29:34 2021-07-29 03:30:21 2021-07-29 03:46:44 0:16:23 0:07:45 0:08:38 smithi master centos 8.stream rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/few msgr/async no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream} tasks/mon_clock_with_skews} 2
Failure Reason:

Command failed on smithi060 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299935 2021-07-29 03:29:35 2021-07-29 03:30:21 2021-07-29 03:51:27 0:21:06 0:07:31 0:13:35 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/pool-snaps-few-objects} 2
Failure Reason:

Command failed on smithi067 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6299936 2021-07-29 03:29:36 2021-07-29 03:30:22 2021-07-29 04:05:16 0:34:54 0:21:17 0:13:37 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/rgw-ingress 3-final} 2
pass 6299937 2021-07-29 03:29:37 2021-07-29 03:30:22 2021-07-29 04:14:24 0:44:02 0:31:58 0:12:04 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} 1
fail 6299938 2021-07-29 03:29:38 2021-07-29 03:30:22 2021-07-29 03:52:24 0:22:02 0:08:21 0:13:41 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/rgw 3-final} 2
Failure Reason:

Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid 079c2430-f020-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.6 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6299939 2021-07-29 03:29:39 2021-07-29 03:30:23 2021-07-29 03:49:33 0:19:10 0:07:29 0:11:41 smithi master centos 8.3 rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi192 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6299940 2021-07-29 03:29:39 2021-07-29 03:30:23 2021-07-29 03:48:44 0:18:21 0:08:08 0:10:13 smithi master ubuntu 20.04 rados/singleton/{all/mon-config mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} 1
fail 6299941 2021-07-29 03:29:40 2021-07-29 03:30:24 2021-07-29 03:47:04 0:16:40 0:07:45 0:08:55 smithi master centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 2
Failure Reason:

Command failed on smithi027 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299942 2021-07-29 03:29:41 2021-07-29 03:30:24 2021-07-29 03:51:19 0:20:55 0:09:07 0:11:48 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed on smithi087 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299943 2021-07-29 03:29:42 2021-07-29 03:30:24 2021-07-29 03:53:45 0:23:21 0:07:31 0:15:50 smithi master centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Command failed on smithi195 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6299944 2021-07-29 03:29:43 2021-07-29 03:30:24 2021-07-29 03:51:19 0:20:55 0:09:28 0:11:27 smithi master ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/rados_5925} 2
dead 6299945 2021-07-29 03:29:44 2021-07-29 03:30:25 2021-07-29 05:01:43 1:31:18 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/rados_api_tests} 2
fail 6299946 2021-07-29 03:29:45 2021-07-29 03:30:25 2021-07-29 03:52:10 0:21:45 0:12:33 0:09:12 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/basic 3-final} 2
Failure Reason:

Command failed on smithi053 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid f782e4b2-f01f-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.53 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6299947 2021-07-29 03:29:46 2021-07-29 03:30:25 2021-07-29 03:52:41 0:22:16 0:08:27 0:13:49 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi134 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299948 2021-07-29 03:29:47 2021-07-29 03:30:25 2021-07-29 04:06:35 0:36:10 0:25:16 0:10:54 smithi master ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi162 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=fd0ca960701c0d11585f091a8fdc8387f0cf831f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
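Unlike the package-install failures, this job got far enough to run a workunit. The environment the harness sets up for the script can be read out of the logged command; the annotations below are an interpretation of that transcript (the wrapper semantics are general teuthology behavior, not stated in this log):

```shell
# CEPH_REF:        git sha of the build under test; the workunit is run
#                  from a clone checked out at this ref
# TESTDIR:         per-job scratch directory on the node
# CEPH_ID:         client identity running the workunit (client.0)
# adjust-ulimits:  raises resource limits before running the test
# ceph-coverage:   wraps the test to collect coverage into the archive dir
# timeout 3h:      hard cap on the workunit's runtime
CEPH_CLI_TEST_DUP_COMMAND=1 \
CEPH_REF=fd0ca960701c0d11585f091a8fdc8387f0cf831f \
CEPH_ID="0" \
  adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage \
  timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh
```

Status 22 is the script's own exit code, i.e. a check inside cephtool/test.sh failed rather than the harness timing out.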

fail 6299949 2021-07-29 03:29:47 2021-07-29 03:30:26 2021-07-29 03:48:09 0:17:43 0:07:14 0:10:29 smithi master centos 8.stream rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command failed on smithi115 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299950 2021-07-29 03:29:48 2021-07-29 03:30:27 2021-07-29 03:52:07 0:21:40 0:07:22 0:14:18 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

Command failed on smithi136 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299951 2021-07-29 03:29:49 2021-07-29 03:30:27 2021-07-29 03:52:10 0:21:43 0:07:06 0:14:37 smithi master centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{default} supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-overwrites} 2
Failure Reason:

Command failed on smithi180 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299952 2021-07-29 03:29:50 2021-07-29 03:30:28 2021-07-29 03:49:26 0:18:58 0:07:05 0:11:53 smithi master centos 8.stream rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command failed on smithi124 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299953 2021-07-29 03:29:51 2021-07-29 03:30:28 2021-07-29 03:51:00 0:20:32 0:08:06 0:12:26 smithi master centos 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason:

Command failed on smithi165 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299954 2021-07-29 03:29:52 2021-07-29 03:30:28 2021-07-29 03:48:35 0:18:07 0:07:16 0:10:51 smithi master centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

Command failed on smithi184 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299955 2021-07-29 03:29:53 2021-07-29 03:30:29 2021-07-29 03:52:01 0:21:32 0:12:16 0:09:16 smithi master rhel 8.3 rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi098 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid e64eaa82-f01f-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.98 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6299956 2021-07-29 03:29:54 2021-07-29 03:30:30 2021-07-29 03:58:46 0:28:16 0:17:09 0:11:07 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} tasks/libcephsqlite} 2
pass 6299957 2021-07-29 03:29:55 2021-07-29 03:30:30 2021-07-29 04:22:42 0:52:12 0:39:10 0:13:02 smithi master ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
pass 6299958 2021-07-29 03:29:56 2021-07-29 03:30:30 2021-07-29 04:09:19 0:38:49 0:29:42 0:09:07 smithi master ubuntu 20.04 rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{ubuntu_latest}} 1
fail 6299959 2021-07-29 03:29:57 2021-07-29 03:30:30 2021-07-29 04:49:06 1:18:36 1:03:48 0:14:48 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/radosbench-high-concurrency} 2
Failure Reason:

reached maximum tries (500) after waiting for 3000 seconds
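The two numbers in this message are consistent with a fixed polling interval; a quick check (an inference from the message alone, not from the teuthology source):

```python
# Numbers taken from the failure message above.
max_tries = 500
total_wait_seconds = 3000

# Implied fixed sleep between attempts.
interval = total_wait_seconds / max_tries
print(interval)  # 6.0 seconds per try
```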

pass 6299960 2021-07-29 03:29:57 2021-07-29 03:30:31 2021-07-29 04:01:01 0:30:30 0:17:37 0:12:53 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/client-keyring 3-final} 2
pass 6299961 2021-07-29 03:29:58 2021-07-29 03:30:32 2021-07-29 03:49:53 0:19:21 0:07:02 0:12:19 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 6299962 2021-07-29 03:29:59 2021-07-29 03:30:32 2021-07-29 04:21:29 0:50:57 0:41:17 0:09:40 smithi master ubuntu 20.04 rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} 1
pass 6299963 2021-07-29 03:30:00 2021-07-29 03:30:32 2021-07-29 03:51:33 0:21:01 0:12:46 0:08:15 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
pass 6299964 2021-07-29 03:30:01 2021-07-29 03:30:32 2021-07-29 03:54:25 0:23:53 0:14:39 0:09:14 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} 1
fail 6299965 2021-07-29 03:30:02 2021-07-29 03:30:33 2021-07-29 03:54:35 0:24:02 0:09:26 0:14:36 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi046 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid 5b86bc90-f020-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.46 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6299966 2021-07-29 03:30:03 2021-07-29 03:30:33 2021-07-29 04:14:08 0:43:35 0:30:11 0:13:24 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/connectivity} 2
fail 6299967 2021-07-29 03:30:04 2021-07-29 03:30:33 2021-07-29 03:50:12 0:19:39 0:11:45 0:07:54 smithi master rhel 8.4 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-bitmap supported-random-distro$/{rhel_8} tasks/workunits} 2
Failure Reason:

Command failed on smithi121 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6299968 2021-07-29 03:30:05 2021-07-29 03:30:33 2021-07-29 03:46:59 0:16:26 0:08:11 0:08:15 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_cephadm_repos} 1
fail 6299969 2021-07-29 03:30:06 2021-07-29 03:30:34 2021-07-29 03:50:33 0:19:59 0:12:00 0:07:59 smithi master rhel 8.4 rados/multimon/{clusters/9 mon_election/classic msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 3
Failure Reason:

Command failed on smithi107 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299970 2021-07-29 03:30:06 2021-07-29 03:30:35 2021-07-29 03:52:32 0:21:57 0:07:24 0:14:33 smithi master centos 8.3 rados/standalone/{supported-random-distro$/{centos_8} workloads/osd-backfill} 1
Failure Reason:

Command failed on smithi123 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299971 2021-07-29 03:30:07 2021-07-29 03:30:35 2021-07-29 03:48:16 0:17:41 0:07:23 0:10:18 smithi master centos 8.3 rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} 1
Failure Reason:

Command failed on smithi101 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299972 2021-07-29 03:30:08 2021-07-29 03:30:35 2021-07-29 03:51:53 0:21:18 0:11:57 0:09:21 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/radosbench} 2
Failure Reason:

Command failed on smithi176 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299973 2021-07-29 03:30:09 2021-07-29 03:30:37 2021-07-29 03:52:20 0:21:43 0:08:11 0:13:32 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/iscsi 3-final} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid 071d1e9c-f020-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.61 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6299974 2021-07-29 03:30:10 2021-07-29 03:30:37 2021-07-29 03:50:19 0:19:42 0:12:13 0:07:29 smithi master rhel 8.4 rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi022 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6299975 2021-07-29 03:30:11 2021-07-29 03:30:37 2021-07-29 03:48:12 0:17:35 0:07:50 0:09:45 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
fail 6299976 2021-07-29 03:30:12 2021-07-29 03:30:37 2021-07-29 03:54:36 0:23:59 0:08:24 0:15:35 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi099 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299977 2021-07-29 03:30:13 2021-07-29 03:30:37 2021-07-29 03:52:25 0:21:48 0:07:08 0:14:40 smithi master centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
Failure Reason:

Command failed on smithi132 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299978 2021-07-29 03:30:14 2021-07-29 03:30:39 2021-07-29 03:51:13 0:20:34 0:12:13 0:08:21 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/mirror 3-final} 2
Failure Reason:

Command failed on smithi163 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid ccb62c4e-f01f-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.163 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

dead 6299979 2021-07-29 03:30:14 2021-07-29 03:30:39 2021-07-29 05:00:36 1:29:57 smithi master ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 6299980 2021-07-29 03:30:15 2021-07-29 03:30:39 2021-07-29 03:52:29 0:21:50 0:07:32 0:14:18 smithi master centos 8.3 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/sync workloads/rados_api_tests} 2
Failure Reason:

Command failed on smithi007 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

dead 6299981 2021-07-29 03:30:16 2021-07-29 03:30:41 2021-07-29 05:00:39 1:29:58 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/redirect} 2
fail 6299982 2021-07-29 03:30:17 2021-07-29 03:30:41 2021-07-29 03:50:56 0:20:15 0:11:45 0:08:30 smithi master rhel 8.4 rados/singleton/{all/peer mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi186 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6299983 2021-07-29 03:30:18 2021-07-29 03:30:41 2021-07-29 04:07:48 0:37:07 0:25:57 0:11:10 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} tasks/rados_api_tests} 2
pass 6299984 2021-07-29 03:30:20 2021-07-29 03:30:41 2021-07-29 04:13:23 0:42:42 0:27:40 0:15:02 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6299985 2021-07-29 03:30:21 2021-07-29 03:34:32 2021-07-29 03:53:18 0:18:46 0:11:57 0:06:49 smithi master rhel 8.4 rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi190 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6299986 2021-07-29 03:30:22 2021-07-29 03:34:32 2021-07-29 04:02:41 0:28:09 0:17:39 0:10:30 smithi master ubuntu 20.04 rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
fail 6299987 2021-07-29 03:30:23 2021-07-29 03:35:13 2021-07-29 03:53:26 0:18:13 0:07:16 0:10:57 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_api_tests validater/lockdep} 2
Failure Reason:

Command failed on smithi084 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6299988 2021-07-29 03:30:30 2021-07-29 03:36:33 2021-07-29 03:55:23 0:18:50 0:11:07 0:07:43 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_radosbench} 1
fail 6299989 2021-07-29 03:30:31 2021-07-29 03:36:34 2021-07-29 03:58:44 0:22:10 0:06:56 0:15:14 smithi master centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
Failure Reason:

Command failed on smithi200 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299990 2021-07-29 03:30:33 2021-07-29 03:41:15 2021-07-29 04:05:07 0:23:52 0:12:15 0:11:37 smithi master rhel 8.4 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

Command failed on smithi060 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6299991 2021-07-29 03:30:34 2021-07-29 03:46:47 2021-07-29 04:16:03 0:29:16 0:18:45 0:10:31 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs-ingress-rgw 3-final} 2
fail 6299992 2021-07-29 03:30:35 2021-07-29 03:47:07 2021-07-29 04:05:36 0:18:29 0:12:15 0:06:14 smithi master rhel 8.4 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi052 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299993 2021-07-29 03:30:36 2021-07-29 03:47:08 2021-07-29 04:06:02 0:18:54 0:11:57 0:06:57 smithi master rhel 8.4 rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 2
Failure Reason:

Command failed on smithi135 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299994 2021-07-29 03:30:38 2021-07-29 03:47:08 2021-07-29 04:07:01 0:19:53 0:12:03 0:07:50 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/redirect_promote_tests} 2
Failure Reason:

Command failed on smithi101 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299995 2021-07-29 03:30:38 2021-07-29 03:48:19 2021-07-29 04:05:16 0:16:57 0:08:04 0:08:53 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Command failed on smithi115 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299996 2021-07-29 03:30:39 2021-07-29 03:48:19 2021-07-29 04:02:54 0:14:35 0:06:51 0:07:44 smithi master centos 8.stream rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command failed on smithi138 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6299997 2021-07-29 03:30:40 2021-07-29 03:48:19 2021-07-29 04:23:42 0:35:23 0:25:14 0:10:09 smithi master ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_python} 2
fail 6299998 2021-07-29 03:30:41 2021-07-29 03:48:40 2021-07-29 04:08:05 0:19:25 0:08:40 0:10:45 smithi master centos 8.2 rados/dashboard/{centos_8.2_kubic_stable debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed on smithi122 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6299999 2021-07-29 03:30:42 2021-07-29 03:49:20 2021-07-29 04:06:11 0:16:51 0:08:00 0:08:51 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

Command failed on smithi124 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid 10908fac-f022-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.124 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6300000 2021-07-29 03:30:43 2021-07-29 03:49:30 2021-07-29 04:06:58 0:17:28 0:07:12 0:10:16 smithi master centos 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-lz4 supported-random-distro$/{centos_8} tasks/crash} 2
Failure Reason:

Command failed on smithi161 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300001 2021-07-29 03:30:44 2021-07-29 03:49:41 2021-07-29 04:07:16 0:17:35 0:07:19 0:10:16 smithi master centos 8.3 rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 2
Failure Reason:

Command failed on smithi169 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300002 2021-07-29 03:30:45 2021-07-29 03:50:21 2021-07-29 04:07:15 0:16:54 0:07:31 0:09:23 smithi master centos 8.3 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
Failure Reason:

Command failed on smithi062 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300003 2021-07-29 03:30:46 2021-07-29 03:50:21 2021-07-29 04:09:36 0:19:15 0:08:25 0:10:50 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi116 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300004 2021-07-29 03:30:47 2021-07-29 03:50:41 2021-07-29 04:05:37 0:14:56 0:06:54 0:08:02 smithi master centos 8.stream rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command failed on smithi107 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300005 2021-07-29 03:30:48 2021-07-29 03:50:42 2021-07-29 04:10:27 0:19:45 0:07:25 0:12:20 smithi master ubuntu 20.04 rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} 3
fail 6300006 2021-07-29 03:30:48 2021-07-29 03:51:02 2021-07-29 04:10:17 0:19:15 0:11:46 0:07:29 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/redirect_set_object} 2
Failure Reason:

Command failed on smithi087 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300007 2021-07-29 03:30:49 2021-07-29 03:51:22 2021-07-29 04:10:54 0:19:32 0:12:00 0:07:32 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Command failed on smithi083 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid 845b8982-f022-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.83 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

dead 6300008 2021-07-29 03:30:50 2021-07-29 03:51:22 2021-07-29 05:00:34 1:09:12 smithi master ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} 1
fail 6300009 2021-07-29 03:30:51 2021-07-29 03:51:23 2021-07-29 04:07:01 0:15:38 0:05:40 0:09:58 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} tasks/rados_cls_all} 2
Failure Reason:

Command failed on smithi188 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'

fail 6300010 2021-07-29 03:30:52 2021-07-29 03:51:34 2021-07-29 04:08:26 0:16:52 0:07:58 0:08:54 smithi master centos 8.3 rados/cephadm/smoke/{distro/centos_8.3_kubic_stable fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi067 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid 55f148b6-f022-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.67 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6300011 2021-07-29 03:30:53 2021-07-29 03:51:34 2021-07-29 04:08:30 0:16:56 0:07:24 0:09:32 smithi master centos 8.3 rados/singleton/{all/pg-removal-interruption mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi186 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300012 2021-07-29 03:30:54 2021-07-29 03:51:34 2021-07-29 04:13:07 0:21:33 0:12:29 0:09:04 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} 1
fail 6300013 2021-07-29 03:30:55 2021-07-29 03:51:55 2021-07-29 04:13:44 0:21:49 0:07:06 0:14:43 smithi master centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Command failed on smithi090 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300014 2021-07-29 03:30:55 2021-07-29 03:52:15 2021-07-29 04:15:33 0:23:18 0:14:01 0:09:17 smithi master ubuntu 20.04 rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} 1
fail 6300015 2021-07-29 03:30:56 2021-07-29 03:52:15 2021-07-29 04:06:39 0:14:24 0:07:08 0:07:16 smithi master centos 8.stream rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command failed on smithi040 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300016 2021-07-29 03:30:57 2021-07-29 03:52:15 2021-07-29 04:27:28 0:35:13 0:25:40 0:09:33 smithi master ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} 2
fail 6300017 2021-07-29 03:30:58 2021-07-29 03:52:16 2021-07-29 04:11:25 0:19:09 0:08:56 0:10:13 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi073 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid bf98f174-f022-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.73 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6300018 2021-07-29 03:30:59 2021-07-29 03:52:26 2021-07-29 04:11:22 0:18:56 0:12:05 0:06:51 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/set-chunks-read} 2
Failure Reason:

Command failed on smithi151 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300019 2021-07-29 03:31:00 2021-07-29 03:52:26 2021-07-29 04:09:18 0:16:52 0:07:03 0:09:49 smithi master centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream} thrashers/force-sync-many workloads/rados_mon_osdmap_prune} 2
Failure Reason:

Command failed on smithi176 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300020 2021-07-29 03:31:01 2021-07-29 03:52:26 2021-07-29 04:31:53 0:39:27 0:28:46 0:10:41 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/classic} 2
fail 6300021 2021-07-29 03:31:02 2021-07-29 03:52:37 2021-07-29 04:09:10 0:16:33 0:08:28 0:08:05 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_adoption} 1
Failure Reason:

Command failed on smithi007 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300022 2021-07-29 03:31:03 2021-07-29 03:52:37 2021-07-29 04:11:22 0:18:45 0:11:47 0:06:58 smithi master rhel 8.4 rados/singleton/{all/radostool mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi134 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300023 2021-07-29 03:31:04 2021-07-29 03:52:47 2021-07-29 04:10:34 0:17:47 0:07:19 0:10:28 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Command failed on smithi019 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

dead 6300024 2021-07-29 03:31:05 2021-07-29 03:52:57 2021-07-29 05:02:13 1:09:16 smithi master ubuntu 20.04 rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{ubuntu_latest}} 1
fail 6300025 2021-07-29 03:31:06 2021-07-29 03:52:58 2021-07-29 04:12:11 0:19:13 0:06:58 0:12:15 smithi master centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason:

Command failed on smithi187 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

dead 6300026 2021-07-29 03:31:07 2021-07-29 03:53:28 2021-07-29 05:01:11 1:07:43 smithi master ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 6300027 2021-07-29 03:31:08 2021-07-29 03:53:48 2021-07-29 04:21:03 0:27:15 0:17:00 0:10:15 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs 3-final} 2
fail 6300028 2021-07-29 03:31:08 2021-07-29 03:53:48 2021-07-29 04:13:21 0:19:33 0:11:50 0:07:43 smithi master rhel 8.4 rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi074 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300029 2021-07-29 03:31:09 2021-07-29 03:54:29 2021-07-29 04:11:39 0:17:10 0:06:55 0:10:15 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/small-objects-balanced} 2
Failure Reason:

Command failed on smithi071 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300030 2021-07-29 03:31:10 2021-07-29 03:54:39 2021-07-29 04:13:21 0:18:42 0:08:05 0:10:37 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/nfs2 3-final} 2
Failure Reason:

Command failed on smithi046 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid c87defd8-f022-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.46 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6300031 2021-07-29 03:31:11 2021-07-29 03:54:39 2021-07-29 04:12:23 0:17:44 0:06:56 0:10:48 smithi master centos 8.stream rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream}} 2
Failure Reason:

Command failed on smithi155 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300032 2021-07-29 03:31:12 2021-07-29 03:55:30 2021-07-29 04:15:19 0:19:49 0:08:20 0:11:29 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed on smithi045 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300033 2021-07-29 03:31:13 2021-07-29 03:58:50 2021-07-29 04:23:11 0:24:21 0:12:42 0:11:39 smithi master ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} tasks/failover} 2
fail 6300034 2021-07-29 03:31:14 2021-07-29 03:58:51 2021-07-29 04:17:42 0:18:51 0:08:07 0:10:44 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi139 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300035 2021-07-29 03:31:15 2021-07-29 03:58:52 2021-07-29 04:32:22 0:33:30 0:21:13 0:12:17 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 2
fail 6300036 2021-07-29 03:31:16 2021-07-29 04:01:02 2021-07-29 04:19:35 0:18:33 0:11:32 0:07:01 smithi master rhel 8.4 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi143 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300037 2021-07-29 03:31:16 2021-07-29 04:02:43 2021-07-29 04:18:06 0:15:23 0:05:24 0:09:59 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/rados_python} 2
Failure Reason:

Command failed on smithi086 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'

pass 6300038 2021-07-29 03:31:17 2021-07-29 04:03:03 2021-07-29 04:25:55 0:22:52 0:13:16 0:09:36 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} 1
pass 6300039 2021-07-29 03:31:19 2021-07-29 04:05:14 2021-07-29 04:32:54 0:27:40 0:17:32 0:10:08 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/small-objects-localized} 2
pass 6300040 2021-07-29 03:31:20 2021-07-29 04:05:24 2021-07-29 04:46:31 0:41:07 0:30:22 0:10:45 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6300041 2021-07-29 03:31:20 2021-07-29 04:05:25 2021-07-29 04:22:39 0:17:14 0:06:55 0:10:19 smithi master centos 8.stream rados/multimon/{clusters/3 mon_election/classic msgr-failures/many msgr/async no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8.stream} tasks/mon_clock_with_skews} 2
Failure Reason:

Command failed on smithi107 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300042 2021-07-29 03:31:21 2021-07-29 04:05:45 2021-07-29 04:20:50 0:15:05 0:07:05 0:08:00 smithi master centos 8.3 rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi135 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300043 2021-07-29 03:31:22 2021-07-29 04:06:05 2021-07-29 04:25:10 0:19:05 0:11:58 0:07:07 smithi master rhel 8.3 rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi124 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid 810f1134-f024-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.124 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6300044 2021-07-29 03:31:23 2021-07-29 04:06:15 2021-07-29 04:30:58 0:24:43 0:17:12 0:07:31 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 6300045 2021-07-29 03:31:24 2021-07-29 04:06:16 2021-07-29 04:25:42 0:19:26 0:08:31 0:10:55 smithi master centos 8.3 rados/cephadm/with-work/{0-distro/centos_8.3_kubic_stable fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi162 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300046 2021-07-29 03:31:25 2021-07-29 04:06:46 2021-07-29 04:21:51 0:15:05 0:07:12 0:07:53 smithi master centos 8.3 rados/standalone/{supported-random-distro$/{centos_8} workloads/scrub} 1
Failure Reason:

Command failed on smithi188 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300047 2021-07-29 03:31:26 2021-07-29 04:07:06 2021-07-29 04:30:34 0:23:28 0:11:17 0:12:11 smithi master ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 6300048 2021-07-29 03:31:27 2021-07-29 04:07:06 2021-07-29 04:26:03 0:18:57 0:12:38 0:06:19 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Command failed on smithi022 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid b7c1468e-f024-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.22 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6300049 2021-07-29 03:31:28 2021-07-29 04:07:17 2021-07-29 04:38:37 0:31:20 0:20:33 0:10:47 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/small-objects} 2
dead 6300050 2021-07-29 03:31:29 2021-07-29 04:07:17 2021-07-29 05:01:30 0:54:13 smithi master ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-radosbench} 2
dead 6300051 2021-07-29 03:31:30 2021-07-29 04:07:57 2021-07-29 05:01:15 0:53:18 smithi master ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} 1
pass 6300052 2021-07-29 03:31:30 2021-07-29 04:07:57 2021-07-29 04:28:51 0:20:54 0:12:52 0:08:02 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
fail 6300053 2021-07-29 03:31:31 2021-07-29 04:08:08 2021-07-29 04:25:26 0:17:18 0:07:20 0:09:58 smithi master centos 8.3 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/many workloads/rados_mon_workunits} 2
Failure Reason:

Command failed on smithi067 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

dead 6300054 2021-07-29 03:31:32 2021-07-29 04:08:28 2021-07-29 04:24:08 0:15:40 0:03:49 0:11:51 smithi master ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/few rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
Failure Reason:

Failure object (Ansible apt task on smithi122.front.sepia.ceph.com, rc 100):

Command: '/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" --force-yes install \'qemu-system-x86\' \'git-core\''

stderr:
W: --force-yes is deprecated, use one of the options starting with --allow instead.
E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 7402 (apt-get)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?

The task was installing the teuthology dependency package set (mpich, qemu-system-x86, lttng-tools, libtool-bin, docker.io, python3-nose, python3-virtualenv, python3-configobj, python3-gevent, python3-numpy, python3-matplotlib, python3-setuptools, libfcgi0ldbl, python-dev, libev-dev, perl, libwww-perl, lsb-release, build-essential, sysstat, gdb, libedit2, cryptsetup-bin, xfsprogs, gdisk, parted, libuuid1, libatomic-ops-dev, git-core, attr, dbench, bonnie++, valgrind, ant, libtool, automake, gettext, uuid-dev, libacl1-dev, bc, xfsdump, xfslibs-dev, libattr1-dev, quota, libcap2-bin, libncurses5-dev, lvm2, vim, pdsh, blktrace, genisoimage, libjson-xs-perl, xml-twig-tools, default-jdk, junit4, tgt, open-iscsi, cifs-utils, ipcalc, nfs-common, nfs-kernel-server, software-properties-common) via the apt module with force=True.

While logging this failure, teuthology's failure_log.py callback plugin itself crashed: yaml.safe_dump() on the failure object raised yaml.representer.RepresenterError: ('cannot represent an object', 'cache_update_time').

pass 6300055 2021-07-29 03:31:33 2021-07-29 04:08:38 2021-07-29 04:38:32 0:29:54 0:18:42 0:11:12 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/rgw 3-final} 2
fail 6300056 2021-07-29 03:31:34 2021-07-29 04:09:19 2021-07-29 04:26:03 0:16:44 0:07:29 0:09:15 smithi master centos 8.3 rados/objectstore/{backends/objectstore-filestore-memstore supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi032 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300057 2021-07-29 03:31:35 2021-07-29 04:09:19 2021-07-29 04:25:39 0:16:20 0:07:06 0:09:14 smithi master centos 8.stream rados/singleton-nomsgr/{all/pool-access mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command failed on smithi007 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

dead 6300058 2021-07-29 03:31:36 2021-07-29 04:09:19 2021-07-29 05:00:36 0:51:17 smithi master centos 8.3 rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/connectivity} 2
fail 6300059 2021-07-29 03:31:37 2021-07-29 04:09:39 2021-07-29 04:27:13 0:17:34 0:07:11 0:10:23 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/mon_recovery validater/lockdep} 2
Failure Reason:

Command failed on smithi104 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300060 2021-07-29 03:31:38 2021-07-29 04:10:10 2021-07-29 04:29:36 0:19:26 0:11:58 0:07:28 smithi master rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
Failure Reason:

Command failed on smithi165 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300061 2021-07-29 03:31:39 2021-07-29 04:10:30 2021-07-29 04:28:01 0:17:31 0:07:21 0:10:10 smithi master centos 8.3 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

Command failed on smithi019 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300062 2021-07-29 03:31:40 2021-07-29 04:10:40 2021-07-29 04:27:31 0:16:51 0:07:35 0:09:16 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_cephadm_repos} 1
pass 6300063 2021-07-29 03:31:41 2021-07-29 04:10:40 2021-07-29 04:29:10 0:18:30 0:07:27 0:11:03 smithi master ubuntu 20.04 rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} 2
fail 6300064 2021-07-29 03:31:42 2021-07-29 04:11:01 2021-07-29 04:28:42 0:17:41 0:07:11 0:10:30 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/snaps-few-objects-balanced} 2
Failure Reason:

Command failed on smithi154 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300065 2021-07-29 03:31:43 2021-07-29 04:11:31 2021-07-29 04:30:37 0:19:06 0:10:04 0:09:02 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_read} 1
fail 6300066 2021-07-29 03:31:43 2021-07-29 04:11:31 2021-07-29 04:30:08 0:18:37 0:08:01 0:10:36 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/basic 3-final} 2
Failure Reason:

Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid 1fa037a6-f025-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.6 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6300067 2021-07-29 03:31:44 2021-07-29 04:11:32 2021-07-29 04:26:43 0:15:11 0:05:33 0:09:38 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} 2
Failure Reason:

Command failed on smithi071 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'

fail 6300068 2021-07-29 03:31:45 2021-07-29 04:11:42 2021-07-29 04:33:06 0:21:24 0:09:03 0:12:21 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi084 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid 8b5b49ea-f025-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.84 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6300069 2021-07-29 03:31:46 2021-07-29 04:12:13 2021-07-29 04:49:56 0:37:43 0:28:41 0:09:02 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 6300070 2021-07-29 03:31:47 2021-07-29 04:12:34 2021-07-29 04:32:32 0:19:58 0:08:11 0:11:47 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi140 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300071 2021-07-29 03:31:48 2021-07-29 04:13:14 2021-07-29 04:36:52 0:23:38 0:12:42 0:10:56 smithi master ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} tasks/insights} 2
pass 6300072 2021-07-29 03:31:49 2021-07-29 04:13:24 2021-07-29 04:40:25 0:27:01 0:17:25 0:09:36 smithi master ubuntu 20.04 rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
pass 6300073 2021-07-29 03:31:50 2021-07-29 04:13:24 2021-07-29 04:30:35 0:17:11 0:09:01 0:08:10 smithi master ubuntu 20.04 rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
fail 6300074 2021-07-29 03:31:51 2021-07-29 04:13:25 2021-07-29 04:31:24 0:17:59 0:07:14 0:10:45 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/snaps-few-objects-localized} 2
Failure Reason:

Command failed on smithi180 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300075 2021-07-29 03:31:52 2021-07-29 04:13:55 2021-07-29 04:33:09 0:19:14 0:12:38 0:06:36 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

Command failed on smithi012 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid b79afa5a-f025-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.12 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6300076 2021-07-29 03:31:53 2021-07-29 04:13:55 2021-07-29 04:33:29 0:19:34 0:11:54 0:07:40 smithi master rhel 8.4 rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 2
Failure Reason:

Command failed on smithi097 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300077 2021-07-29 03:31:54 2021-07-29 04:14:15 2021-07-29 04:30:54 0:16:39 0:07:23 0:09:16 smithi master centos 8.3 rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi036 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300078 2021-07-29 03:31:54 2021-07-29 04:14:26 2021-07-29 04:31:31 0:17:05 0:08:21 0:08:44 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_orch_cli} 1
Failure Reason:

Command failed on smithi045 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300079 2021-07-29 03:31:55 2021-07-29 04:15:26 2021-07-29 04:32:38 0:17:12 0:06:47 0:10:25 smithi master centos 8.stream rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command failed on smithi136 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300080 2021-07-29 03:31:56 2021-07-29 04:15:36 2021-07-29 04:43:06 0:27:30 0:16:38 0:10:52 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/iscsi 3-final} 2
fail 6300081 2021-07-29 03:31:57 2021-07-29 04:16:07 2021-07-29 04:37:03 0:20:56 0:11:47 0:09:09 smithi master rhel 8.4 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Command failed on smithi139 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300082 2021-07-29 03:31:58 2021-07-29 04:18:07 2021-07-29 04:37:57 0:19:50 0:06:46 0:13:04 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 2
Failure Reason:

Command failed on smithi143 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300083 2021-07-29 03:31:59 2021-07-29 04:20:58 2021-07-29 04:42:17 0:21:19 0:15:29 0:05:50 smithi master rhel 8.3 rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi120 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300084 2021-07-29 03:32:00 2021-07-29 04:21:09 2021-07-29 04:38:46 0:17:37 0:07:18 0:10:19 smithi master centos 8.3 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-balanced} 2
Failure Reason:

Command failed on smithi188 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300085 2021-07-29 03:32:01 2021-07-29 04:22:01 2021-07-29 04:46:02 0:24:01 0:13:05 0:10:56 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} 1
dead 6300086 2021-07-29 03:32:02 2021-07-29 04:22:41 2021-07-29 04:38:35 0:15:54 0:04:03 0:11:51 smithi master ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/snaps-few-objects} 2
Failure Reason:

Failure object was:
{'smithi037.front.sepia.ceph.com': {'msg': 'Failed to update apt cache: ', 'invocation': {'module_args': {'dpkg_options': 'force-confdef,force-confold', 'autoremove': False, 'force': False, 'force_apt_get': False, 'policy_rc_d': 'None', 'package': 'None', 'autoclean': False, 'install_recommends': 'None', 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'upgrade': 'None', 'update_cache': True, 'default_release': 'None', 'only_upgrade': False, 'deb': 'None', 'cache_valid_time': 0}}, '_ansible_no_log': False, 'attempts': 24, 'changed': False}}

Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
    log.error(yaml.safe_dump(failure))
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_73aa7e3b960c7ffac669297b6aa86606265edd1b/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump
    return dump_all([data], stream, Dumper=SafeDumper, **kwds)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_73aa7e3b960c7ffac669297b6aa86606265edd1b/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all
    dumper.represent(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_73aa7e3b960c7ffac669297b6aa86606265edd1b/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent
    node = self.represent_data(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_73aa7e3b960c7ffac669297b6aa86606265edd1b/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_73aa7e3b960c7ffac669297b6aa86606265edd1b/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_73aa7e3b960c7ffac669297b6aa86606265edd1b/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_73aa7e3b960c7ffac669297b6aa86606265edd1b/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_73aa7e3b960c7ffac669297b6aa86606265edd1b/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_73aa7e3b960c7ffac669297b6aa86606265edd1b/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 117, in represent_mapping
    node_key = self.represent_data(item_key)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_73aa7e3b960c7ffac669297b6aa86606265edd1b/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[None](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_73aa7e3b960c7ffac669297b6aa86606265edd1b/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined
    raise RepresenterError("cannot represent an object", data)
yaml.representer.RepresenterError: ('cannot represent an object', '_ansible_no_log')
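Note on this traceback: the real failure (the apt cache update timing out) is masked because teuthology's failure_log callback passes the raw Ansible failure dict to yaml.safe_dump(), and Ansible wraps strings in str subclasses (likely AnsibleUnsafeText), which PyYAML's SafeDumper refuses to represent. A minimal sketch of that failure mode, assuming PyYAML is installed; UnsafeText here is a hypothetical stand-in for Ansible's wrapper type:

```python
# Reproduce the RepresenterError above: yaml.safe_dump() dispatches
# representers by exact type, so a str *subclass* falls through to
# represent_undefined and raises instead of being dumped as a string.
import yaml


class UnsafeText(str):
    """Hypothetical stand-in for Ansible's AnsibleUnsafeText (a str subclass)."""


failure = {UnsafeText("_ansible_no_log"): False}

try:
    yaml.safe_dump(failure)
    outcome = "dumped"
except yaml.representer.RepresenterError as exc:
    # SafeDumper has no representer registered for str subclasses.
    outcome = "RepresenterError: %s" % exc.args[0]

print(outcome)
```

Converting the values with plain str() (or using yaml.dump with a dumper that handles str subclasses) before logging would let the underlying apt failure reach the log.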

fail 6300087 2021-07-29 03:32:03 2021-07-29 04:22:51 2021-07-29 04:43:06 0:20:15 0:08:15 0:12:00 smithi master centos 8.2 rados/dashboard/{centos_8.2_kubic_stable debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/dashboard} 2
Failure Reason:

Command failed on smithi178 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300088 2021-07-29 03:32:04 2021-07-29 04:23:12 2021-07-29 04:41:28 0:18:16 0:12:13 0:06:03 smithi master rhel 8.4 rados/objectstore/{backends/alloc-hint supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi052 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300089 2021-07-29 03:32:05 2021-07-29 04:23:12 2021-07-29 04:38:45 0:15:33 0:07:09 0:08:24 smithi master centos 8.3 rados/rest/{mgr-restful supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi184 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300090 2021-07-29 03:32:06 2021-07-29 04:23:52 2021-07-29 04:38:35 0:14:43 0:07:01 0:07:42 smithi master centos 8.stream rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command failed on smithi041 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300091 2021-07-29 03:32:07 2021-07-29 04:23:53 2021-07-29 04:45:23 0:21:30 0:11:44 0:09:46 smithi master ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/c2c} 1
dead 6300092 2021-07-29 03:32:08 2021-07-29 04:24:13 2021-07-29 05:02:09 0:37:56 smithi master rhel 8.3 rados/upgrade/parallel/{0-distro$/{rhel_8.3_kubic_stable} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
fail 6300093 2021-07-29 03:32:08 2021-07-29 04:25:14 2021-07-29 04:42:13 0:16:59 0:07:09 0:09:50 smithi master centos 8.3 rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
Failure Reason:

Command failed on smithi186 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300094 2021-07-29 03:32:09 2021-07-29 04:25:14 2021-07-29 04:44:36 0:19:22 0:08:10 0:11:12 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi094 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300095 2021-07-29 03:32:10 2021-07-29 04:25:34 2021-07-29 04:42:49 0:17:15 0:07:15 0:10:00 smithi master centos 8.3 rados/singleton/{all/thrash-backfill-full mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 2
Failure Reason:

Command failed on smithi162 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300096 2021-07-29 03:32:11 2021-07-29 04:25:44 2021-07-29 04:43:24 0:17:40 0:08:03 0:09:37 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/rados_striper} 2
fail 6300097 2021-07-29 03:32:12 2021-07-29 04:26:05 2021-07-29 04:42:37 0:16:32 0:08:06 0:08:26 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/mirror 3-final} 2
Failure Reason:

Command failed on smithi022 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid 1ca19142-f027-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.22 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6300098 2021-07-29 03:32:13 2021-07-29 04:26:05 2021-07-29 04:43:41 0:17:36 0:07:12 0:10:24 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed on smithi071 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300099 2021-07-29 03:32:14 2021-07-29 04:26:45 2021-07-29 04:46:04 0:19:19 0:07:25 0:11:54 smithi master centos 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason:

Command failed on smithi104 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

dead 6300100 2021-07-29 03:32:15 2021-07-29 04:27:16 2021-07-29 05:01:06 0:33:50 smithi master ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
dead 6300101 2021-07-29 03:32:16 2021-07-29 04:27:36 2021-07-29 05:01:25 0:33:49 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6300102 2021-07-29 03:32:17 2021-07-29 04:28:06 2021-07-29 04:45:52 0:17:46 0:06:51 0:10:55 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/write_fadvise_dontneed} 2
Failure Reason:

Command failed on smithi132 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300103 2021-07-29 03:32:18 2021-07-29 04:28:47 2021-07-29 04:47:49 0:19:02 0:11:42 0:07:20 smithi master rhel 8.4 rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi098 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300104 2021-07-29 03:32:19 2021-07-29 04:28:47 2021-07-29 04:48:56 0:20:09 0:08:15 0:11:54 smithi master centos 8.3 rados/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.3_kubic_stable} 2-node-mgr orchestrator_cli} 2
Failure Reason:

Command failed on smithi083 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300105 2021-07-29 03:32:20 2021-07-29 04:29:17 2021-07-29 04:46:18 0:17:01 0:07:14 0:09:47 smithi master centos 8.3 rados/singleton/{all/thrash-eio mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 2
Failure Reason:

Command failed on smithi174 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300106 2021-07-29 03:32:21 2021-07-29 04:29:37 2021-07-29 04:48:47 0:19:10 0:11:55 0:07:15 smithi master rhel 8.4 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-zstd supported-random-distro$/{rhel_8} tasks/module_selftest} 2
Failure Reason:

Command failed on smithi165 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300107 2021-07-29 03:32:22 2021-07-29 04:29:38 2021-07-29 04:46:59 0:17:21 0:07:17 0:10:04 smithi master centos 8.3 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
Failure Reason:

Command failed on smithi134 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300108 2021-07-29 03:32:23 2021-07-29 04:30:18 2021-07-29 04:47:41 0:17:23 0:07:50 0:09:33 smithi master centos 8.3 rados/cephadm/smoke/{distro/centos_8.3_kubic_stable fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi073 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid d2b69d2e-f027-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.73 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6300109 2021-07-29 03:32:24 2021-07-29 04:30:38 2021-07-29 04:49:56 0:19:18 0:11:58 0:07:20 smithi master rhel 8.3 rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.3_kubic_stable} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi161 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid f3823ebe-f027-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.161 --single-host-defaults --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6300110 2021-07-29 03:32:25 2021-07-29 04:30:38 2021-07-29 04:47:32 0:16:54 0:07:23 0:09:31 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/admin_socket_objecter_requests} 2
Failure Reason:

Command failed on smithi101 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300111 2021-07-29 03:32:26 2021-07-29 04:30:39 2021-07-29 04:47:02 0:16:23 0:07:15 0:09:08 smithi master centos 8.stream rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command failed on smithi016 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

dead 6300112 2021-07-29 03:32:27 2021-07-29 04:30:39 2021-07-29 05:01:46 0:31:07 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/classic} 2
fail 6300113 2021-07-29 03:32:28 2021-07-29 04:30:59 2021-07-29 04:51:05 0:20:06 0:07:00 0:13:06 smithi master centos 8.stream rados/multimon/{clusters/9 mon_election/classic msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream} tasks/mon_clock_no_skews} 3
Failure Reason:

Command failed on smithi090 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300114 2021-07-29 03:32:29 2021-07-29 04:31:30 2021-07-29 04:49:06 0:17:36 0:06:53 0:10:43 smithi master centos 8.stream rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream}} 2
Failure Reason:

Command failed on smithi173 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300115 2021-07-29 03:32:30 2021-07-29 04:32:00 2021-07-29 04:50:47 0:18:47 0:10:17 0:08:30 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} 1
fail 6300116 2021-07-29 03:32:31 2021-07-29 04:32:00 2021-07-29 04:49:25 0:17:25 0:08:26 0:08:59 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_adoption} 1
Failure Reason:

Command failed on smithi057 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300117 2021-07-29 03:32:32 2021-07-29 04:32:31 2021-07-29 04:51:16 0:18:45 0:12:17 0:06:28 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/nfs-ingress-rgw 3-final} 2
Failure Reason:

Command failed on smithi058 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid 3469516a-f028-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.58 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6300118 2021-07-29 03:32:32 2021-07-29 04:32:41 2021-07-29 04:52:15 0:19:34 0:08:56 0:10:38 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi115 with status 1: "sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid 7758115a-f028-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-addrv '[v1:172.21.15.115:6789]' --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring"

fail 6300119 2021-07-29 03:32:33 2021-07-29 04:33:01 2021-07-29 04:52:45 0:19:44 0:07:12 0:12:32 smithi master centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Command failed on smithi151 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

dead 6300120 2021-07-29 03:32:34 2021-07-29 04:33:12 2021-07-29 05:00:22 0:27:10 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_big} 2
fail 6300121 2021-07-29 03:32:35 2021-07-29 04:33:12 2021-07-29 04:52:30 0:19:18 0:11:59 0:07:19 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/cache-agent-big} 2
Failure Reason:

Command failed on smithi097 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300122 2021-07-29 03:32:36 2021-07-29 04:33:32 2021-07-29 04:53:54 0:20:22 0:07:11 0:13:11 smithi master centos 8.stream rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command failed on smithi149 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

dead 6300123 2021-07-29 03:32:37 2021-07-29 04:36:53 2021-07-29 05:02:14 0:25:21 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs-ingress 3-final} 2
fail 6300124 2021-07-29 03:32:38 2021-07-29 04:36:53 2021-07-29 04:55:41 0:18:48 0:11:37 0:07:11 smithi master rhel 8.4 rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi119 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300125 2021-07-29 03:32:39 2021-07-29 04:36:54 2021-07-29 04:55:42 0:18:48 0:11:50 0:06:58 smithi master rhel 8.4 rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} 2
Failure Reason:

Command failed on smithi093 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

dead 6300126 2021-07-29 03:32:40 2021-07-29 04:36:54 2021-07-29 05:02:18 0:25:24 smithi master ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-fast-read} 2
fail 6300127 2021-07-29 03:32:41 2021-07-29 04:37:04 2021-07-29 04:53:58 0:16:54 0:07:08 0:09:46 smithi master centos 8.stream rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command failed on smithi086 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300128 2021-07-29 03:32:42 2021-07-29 04:37:04 2021-07-29 04:56:59 0:19:55 0:11:41 0:08:14 smithi master rhel 8.4 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/sync-many workloads/snaps-few-objects} 2
Failure Reason:

Command failed on smithi143 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300129 2021-07-29 03:32:43 2021-07-29 04:38:05 2021-07-29 04:55:14 0:17:09 0:08:23 0:08:46 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed on smithi176 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300130 2021-07-29 03:32:44 2021-07-29 04:38:35 2021-07-29 04:57:30 0:18:55 0:08:50 0:10:05 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6300131 2021-07-29 03:32:45 2021-07-29 04:38:35 2021-07-29 05:00:09 0:21:34 0:11:42 0:09:52 smithi master ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/crush} 1
fail 6300132 2021-07-29 03:32:46 2021-07-29 04:38:46 2021-07-29 04:55:29 0:16:43 0:07:18 0:09:25 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

Command failed on smithi002 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300133 2021-07-29 03:32:46 2021-07-29 04:38:47 2021-07-29 04:53:32 0:14:45 0:07:06 0:07:39 smithi master centos 8.stream rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command failed on smithi184 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300134 2021-07-29 03:32:47 2021-07-29 04:38:47 2021-07-29 04:55:42 0:16:55 0:07:23 0:09:32 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/cache-agent-small} 2
Failure Reason:

Command failed on smithi037 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300135 2021-07-29 03:32:48 2021-07-29 04:38:48 2021-07-29 04:56:02 0:17:14 0:11:36 0:05:38 smithi master rhel 8.4 rados/singleton/{all/watch-notify-same-primary mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi076 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300136 2021-07-29 03:32:49 2021-07-29 04:38:48 2021-07-29 04:57:22 0:18:34 0:07:58 0:10:36 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Command failed on smithi046 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:fd0ca960701c0d11585f091a8fdc8387f0cf831f -v bootstrap --fsid 2d007074-f029-11eb-8c24-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.46 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

dead 6300137 2021-07-29 03:32:50 2021-07-29 04:40:28 2021-07-29 05:01:48 0:21:20 smithi master ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
dead 6300138 2021-07-29 03:32:51 2021-07-29 04:42:19 2021-07-29 05:01:01 0:18:42 smithi master ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
dead 6300139 2021-07-29 03:32:52 2021-07-29 04:42:39 2021-07-29 05:02:09 0:19:30 smithi master rhel 8.3 rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/connectivity start} 2
dead 6300140 2021-07-29 03:32:53 2021-07-29 04:43:00 2021-07-29 05:01:24 0:18:24 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} 1
fail 6300141 2021-07-29 03:32:54 2021-07-29 04:43:00 2021-07-29 05:00:53 0:17:53 0:06:54 0:10:59 smithi master centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-hybrid supported-random-distro$/{centos_8.stream} tasks/progress} 2
Failure Reason:

Command failed on smithi178 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

dead 6300142 2021-07-29 03:32:55 2021-07-29 04:43:10 2021-07-29 05:01:36 0:18:26 smithi master rhel 8.3 rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/packaged mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
fail 6300143 2021-07-29 03:32:56 2021-07-29 04:43:11 2021-07-29 04:58:10 0:14:59 0:07:05 0:07:54 smithi master centos 8.3 rados/singleton/{all/admin-socket mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi125 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6300144 2021-07-29 03:32:57 2021-07-29 04:43:31 2021-07-29 04:58:03 0:14:32 0:07:10 0:07:22 smithi master centos 8.stream rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command failed on smithi043 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

dead 6300145 2021-07-29 03:32:58 2021-07-29 04:43:31 2021-07-29 05:01:29 0:17:58 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
dead 6300146 2021-07-29 03:32:59 2021-07-29 04:43:31 2021-07-29 05:00:39 0:17:08 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/nfs 3-final} 2
dead 6300147 2021-07-29 03:33:00 2021-07-29 04:43:45 2021-07-29 05:01:50 0:18:05 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/cache-pool-snaps-readproxy} 2
dead 6300148 2021-07-29 03:33:00 2021-07-29 04:44:45 2021-07-29 05:01:27 0:16:42 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/connectivity} 2
dead 6300149 2021-07-29 03:33:01 2021-07-29 04:45:55 2021-07-29 05:01:16 0:15:21 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_mix} 2
dead 6300150 2021-07-29 03:33:02 2021-07-29 04:46:06 2021-07-29 05:01:13 0:15:07 smithi master centos 8.stream rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/few msgr/async no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream} tasks/mon_clock_with_skews} 3
dead 6300151 2021-07-29 03:33:03 2021-07-29 04:46:06 2021-07-29 05:01:07 0:15:01 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_cephadm_repos} 1
dead 6300152 2021-07-29 03:33:04 2021-07-29 04:46:26 2021-07-29 05:00:57 0:14:31 smithi master centos 8.3 rados/singleton/{all/deduptool mon_election/connectivity msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} 1
dead 6300153 2021-07-29 03:33:05 2021-07-29 04:46:27 2021-07-29 05:01:28 0:15:01 smithi master centos 8.3 rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{centos_8}} 1
dead 6300154 2021-07-29 03:33:06 2021-07-29 04:46:37 2021-07-29 05:01:58 0:15:21 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs2 3-final} 2
dead 6300155 2021-07-29 03:33:07 2021-07-29 04:47:07 2021-07-29 05:01:52 0:14:45 smithi master centos 8.3 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
dead 6300156 2021-07-29 03:33:08 2021-07-29 04:47:07 2021-07-29 05:00:24 0:13:17 smithi master ubuntu 20.04 rados/objectstore/{backends/filejournal supported-random-distro$/{ubuntu_latest}} 1
dead 6300157 2021-07-29 03:33:09 2021-07-29 04:47:38 2021-07-29 05:01:58 0:14:20 smithi master centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
dead 6300158 2021-07-29 03:33:10 2021-07-29 04:48:48 2021-07-29 05:00:30 0:11:42 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps} 2
dead 6300159 2021-07-29 03:33:11 2021-07-29 04:48:59 2021-07-29 05:02:18 0:13:19 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
dead 6300160 2021-07-29 03:33:12 2021-07-29 04:49:09 2021-07-29 05:00:21 0:11:12 smithi master ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-many-deletes} 2
dead 6300161 2021-07-29 03:33:12 2021-07-29 04:49:09 2021-07-29 05:02:15 0:13:06 smithi master centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream} thrashers/sync workloads/pool-create-delete} 2
dead 6300162 2021-07-29 03:33:13 2021-07-29 04:49:29 2021-07-29 05:01:02 0:11:33 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
dead 6300163 2021-07-29 03:33:14 2021-07-29 04:50:00 2021-07-29 05:00:53 0:10:53 smithi master rhel 8.4 rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
dead 6300164 2021-07-29 03:33:15 2021-07-29 04:50:00 2021-07-29 05:00:44 0:10:44 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} 1
dead 6300165 2021-07-29 03:33:16 2021-07-29 04:50:00 2021-07-29 05:00:54 0:10:54 smithi master centos 8.3 rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
dead 6300166 2021-07-29 03:33:17 2021-07-29 04:50:00 2021-07-29 05:01:08 0:11:08 smithi master ubuntu 20.04 rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
dead 6300167 2021-07-29 03:33:18 2021-07-29 04:51:11 2021-07-29 05:01:52 0:10:41 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_cls_all validater/valgrind} 2
dead 6300168 2021-07-29 03:33:19 2021-07-29 04:51:11 2021-07-29 05:00:34 0:09:23 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/cache-snaps-balanced} 2
dead 6300169 2021-07-29 03:33:20 2021-07-29 04:51:22 2021-07-29 05:01:18 0:09:56 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/rgw-ingress 3-final} 2
dead 6300170 2021-07-29 03:33:21 2021-07-29 04:52:23 2021-07-29 05:01:33 0:09:10 smithi master centos 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
dead 6300171 2021-07-29 03:33:22 2021-07-29 04:52:33 2021-07-29 05:01:50 0:09:17 smithi master centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
dead 6300172 2021-07-29 03:33:23 2021-07-29 04:52:54 2021-07-29 05:00:26 0:07:32 smithi master centos 8.stream rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/erasure-code} 1
dead 6300173 2021-07-29 03:33:23 2021-07-29 04:52:55 2021-07-29 05:00:47 0:07:52 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} 3
dead 6300174 2021-07-29 03:33:24 2021-07-29 04:53:55 2021-07-29 05:00:46 0:06:51 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_orch_cli} 1
dead 6300175 2021-07-29 03:33:25 2021-07-29 04:54:05 2021-07-29 05:02:18 0:08:13 smithi master ubuntu 20.04 rados/singleton/{all/divergent_priors2 mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

SSH connection to smithi176 was lost: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y install linux-image-generic'

fail 6300176 2021-07-29 03:33:26 2021-07-29 04:55:16 2021-07-29 05:15:57 0:20:41 smithi master ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-low-osd-mem-target supported-random-distro$/{ubuntu_latest} tasks/prometheus}
Failure Reason:

machine smithi002.front.sepia.ceph.com is locked by scheduled_teuthology@teuthology, not scheduled_kchai@teuthology

dead 6300177 2021-07-29 03:33:27 2021-07-29 04:55:36 2021-07-29 05:00:29 0:04:53 smithi master centos 8.3 rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Error reimaging machines: [Errno 104] Connection reset by peer

dead 6300178 2021-07-29 03:33:28 2021-07-29 04:55:46 2021-07-29 05:14:04 0:18:18 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_mostlyread} 2
Failure Reason:

Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds

dead 6300179 2021-07-29 03:33:29 2021-07-29 04:55:46 2021-07-29 05:14:41 0:18:55 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/rgw 3-final} 2
Failure Reason:

Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds

dead 6300180 2021-07-29 03:33:30 2021-07-29 04:55:47 2021-07-29 05:15:49 0:20:02 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/cache-snaps} 2
Failure Reason:

Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds

fail 6300181 2021-07-29 03:33:31 2021-07-29 04:57:07 2021-07-29 05:17:04 0:19:57 smithi master ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_python}
Failure Reason:

machine smithi046.front.sepia.ceph.com is locked by scheduled_teuthology@teuthology, not scheduled_kchai@teuthology

fail 6300182 2021-07-29 03:33:32 2021-07-29 04:57:30 2021-07-29 05:18:08 0:20:38 smithi master centos 8.2 rados/dashboard/{centos_8.2_kubic_stable debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} tasks/e2e}
Failure Reason:

machine smithi041.front.sepia.ceph.com is locked by scheduled_teuthology@teuthology, not scheduled_kchai@teuthology

fail 6300183 2021-07-29 03:33:33 2021-07-29 04:57:40 2021-07-29 05:17:23 0:19:43 smithi master centos 8.3 rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}}
Failure Reason:

machine smithi076.front.sepia.ceph.com is locked by scheduled_teuthology@teuthology, not scheduled_kchai@teuthology

dead 6300184 2021-07-29 03:33:34 2021-07-29 04:57:40 2021-07-29 05:13:21 0:15:41 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/basic 3-final} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 6300185 2021-07-29 03:33:34 2021-07-29 04:58:11 2021-07-29 05:14:52 0:16:41 smithi master rhel 8.4 rados/multimon/{clusters/3 mon_election/classic msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 1
Failure Reason:

machine smithi086.front.sepia.ceph.com is locked by scheduled_teuthology@teuthology, not scheduled_kchai@teuthology

dead 6300195 2021-07-29 03:33:44 2021-07-29 03:33:44 smithi master centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/ec-small-objects}