Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 6573370 2021-12-20 19:03:51 2021-12-20 19:04:42 2021-12-20 19:25:32 0:20:50 0:15:04 0:05:46 smithi master centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{centos_8.stream} tasks/insights} 2
pass 6573371 2021-12-20 19:03:52 2021-12-20 19:04:43 2021-12-20 20:15:23 1:10:40 1:00:13 0:10:27 smithi master rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 6573372 2021-12-20 19:03:53 2021-12-20 19:04:43 2021-12-20 19:43:28 0:38:45 0:26:02 0:12:43 smithi master centos 8.3 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 6573373 2021-12-20 19:03:53 2021-12-20 19:04:44 2021-12-20 19:21:31 0:16:47 0:06:36 0:10:11 smithi master centos 8.3 rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

Command failed on smithi017 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573374 2021-12-20 19:03:54 2021-12-20 19:04:44 2021-12-20 19:58:35 0:53:51 0:44:00 0:09:51 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/valgrind} 2
fail 6573375 2021-12-20 19:03:55 2021-12-20 19:04:45 2021-12-20 19:21:39 0:16:54 0:06:33 0:10:21 smithi master centos 8.3 rados/cephadm/smoke/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi074 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573376 2021-12-20 19:03:56 2021-12-20 19:04:45 2021-12-20 19:36:52 0:32:07 0:21:37 0:10:30 smithi master centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-overwrites} 2
fail 6573377 2021-12-20 19:03:57 2021-12-20 19:04:46 2021-12-20 19:21:38 0:16:52 0:10:30 0:06:22 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

Command failed on smithi137 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573378 2021-12-20 19:03:57 2021-12-20 19:04:46 2021-12-20 19:25:06 0:20:20 0:11:05 0:09:15 smithi master centos 8.3 rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 6573379 2021-12-20 19:03:58 2021-12-20 19:04:47 2021-12-20 19:55:10 0:50:23 0:40:51 0:09:32 smithi master rhel 8.4 rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 1
fail 6573380 2021-12-20 19:03:59 2021-12-20 19:04:47 2021-12-20 19:29:22 0:24:35 0:10:37 0:13:58 smithi master centos 8.3 rados/cephadm/thrash/{0-distro/centos_8.3_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573381 2021-12-20 19:04:00 2021-12-20 19:04:48 2021-12-20 19:31:03 0:26:15 0:21:45 0:04:30 smithi master rhel 8.4 rados/objectstore/{backends/filejournal supported-random-distro$/{rhel_8}} 1
pass 6573382 2021-12-20 19:04:00 2021-12-20 19:04:48 2021-12-20 19:34:27 0:29:39 0:15:42 0:13:57 smithi master rados/cephadm/workunits/{agent/off mon_election/classic task/test_cephadm} 1
fail 6573383 2021-12-20 19:04:01 2021-12-20 19:04:49 2021-12-20 19:22:21 0:17:32 0:04:08 0:13:24 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Command failed on smithi110 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573384 2021-12-20 19:04:02 2021-12-20 19:04:49 2021-12-20 19:35:18 0:30:29 0:19:37 0:10:52 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} tasks/libcephsqlite} 2
pass 6573385 2021-12-20 19:04:02 2021-12-20 19:04:50 2021-12-20 19:49:29 0:44:39 0:34:39 0:10:00 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/snaps-few-objects} 2
dead 6573386 2021-12-20 19:04:03 2021-12-20 19:04:50 2021-12-21 07:18:02 12:13:12 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6573387 2021-12-20 19:04:04 2021-12-20 19:04:51 2021-12-20 19:39:51 0:35:00 0:23:32 0:11:28 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/host rook/1.7.0} 3
Failure Reason:

'wait for toolbox' reached maximum tries (100) after waiting for 500 seconds

fail 6573388 2021-12-20 19:04:05 2021-12-20 19:04:51 2021-12-20 19:18:03 0:13:12 0:05:14 0:07:58 smithi master centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi078 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573389 2021-12-20 19:04:05 2021-12-20 19:04:52 2021-12-20 19:51:42 0:46:50 0:38:19 0:08:31 smithi master rhel 8.4 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
pass 6573390 2021-12-20 19:04:06 2021-12-20 19:04:52 2021-12-20 19:27:54 0:23:02 0:10:44 0:12:18 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 6573391 2021-12-20 19:04:07 2021-12-20 19:04:53 2021-12-20 20:41:17 1:36:24 1:27:37 0:08:47 smithi master centos 8.stream rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/erasure-code} 1
pass 6573392 2021-12-20 19:04:07 2021-12-20 19:04:53 2021-12-20 19:34:04 0:29:11 0:20:33 0:08:38 smithi master rhel 8.4 rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
fail 6573393 2021-12-20 19:04:08 2021-12-20 19:04:54 2021-12-20 19:23:29 0:18:35 0:06:17 0:12:18 smithi master centos 8.2 rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

Command failed on smithi079 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573394 2021-12-20 19:04:09 2021-12-20 19:04:54 2021-12-20 19:24:08 0:19:14 0:09:57 0:09:17 smithi master centos 8.3 rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
dead 6573395 2021-12-20 19:04:10 2021-12-20 19:04:55 2021-12-20 19:21:21 0:16:26 0:03:49 0:12:37 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
Failure Reason:

Failure object was:

{'smithi172.front.sepia.ceph.com': {'msg': '\'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install \'docker.io\'\' failed: E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 7978 (apt-get)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\n', 'stdout': '', 'stderr': 'E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 7978 (apt-get)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\n', 'rc': 100, 'cache_updated': False, 'cache_update_time': 1640027604, 'invocation': {'module_args': {'name': ['docker.io', 'python3-setuptools', 'python3-pip'], 'state': 'latest', 'package': ['docker.io', 'python3-setuptools', 'python3-pip'], 'cache_valid_time': 0, 'purge': False, 'force': False, 'dpkg_options': 'force-confdef,force-confold', 'autoremove': False, 'autoclean': False, 'only_upgrade': False, 'force_apt_get': False, 'allow_unauthenticated': False, 'update_cache': None, 'deb': None, 'default_release': None, 'install_recommends': None, 'upgrade': None, 'policy_rc_d': None}}, 'stdout_lines': [], 'stderr_lines': ['E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 7978 (apt-get)', 'E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?'], '_ansible_no_log': False, 'changed': False}}

Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
    log.error(yaml.safe_dump(failure))
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump
    return dump_all([data], stream, Dumper=SafeDumper, **kwds)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all
    dumper.represent(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent
    node = self.represent_data(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 117, in represent_mapping
    node_key = self.represent_data(item_key)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[None](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined
    raise RepresenterError("cannot represent an object", data)
yaml.representer.RepresenterError: ('cannot represent an object', 'cache_update_time')

pass 6573396 2021-12-20 19:04:10 2021-12-20 19:04:55 2021-12-20 19:26:31 0:21:36 0:10:21 0:11:15 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_radosbench} 1
fail 6573397 2021-12-20 19:04:11 2021-12-20 19:04:56 2021-12-20 19:26:01 0:21:05 0:09:37 0:11:28 smithi master centos 8.3 rados/cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi082 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573398 2021-12-20 19:04:12 2021-12-20 19:04:57 2021-12-20 19:22:48 0:17:51 0:08:27 0:09:24 smithi master centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi005 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573399 2021-12-20 19:04:13 2021-12-20 19:04:57 2021-12-20 19:31:03 0:26:06 0:16:55 0:09:11 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/admin_socket_objecter_requests} 2
fail 6573400 2021-12-20 19:04:13 2021-12-20 19:04:58 2021-12-20 19:17:18 0:12:20 0:05:03 0:07:17 smithi master centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Command failed on smithi073 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573401 2021-12-20 19:04:14 2021-12-20 19:04:58 2021-12-20 19:17:19 0:12:21 0:05:06 0:07:15 smithi master centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573402 2021-12-20 19:04:15 2021-12-20 19:04:59 2021-12-20 19:25:33 0:20:34 0:06:38 0:13:56 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

Command failed on smithi016 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573403 2021-12-20 19:04:16 2021-12-20 19:04:59 2021-12-20 21:24:23 2:19:24 2:11:44 0:07:40 smithi master centos 8.stream rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{centos_8.stream}} 1
pass 6573404 2021-12-20 19:04:17 2021-12-20 19:05:00 2021-12-20 19:29:52 0:24:52 0:13:44 0:11:08 smithi master centos 8.3 rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 1
pass 6573405 2021-12-20 19:04:17 2021-12-20 19:05:00 2021-12-20 19:23:35 0:18:35 0:08:26 0:10:09 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 6573406 2021-12-20 19:04:18 2021-12-20 19:05:01 2021-12-20 19:22:36 0:17:35 0:07:31 0:10:04 smithi master centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/default thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi002 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573407 2021-12-20 19:04:19 2021-12-20 19:05:02 2021-12-20 19:27:38 0:22:36 0:13:05 0:09:31 smithi master centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 6573408 2021-12-20 19:04:19 2021-12-20 19:05:02 2021-12-20 19:28:38 0:23:36 0:14:31 0:09:05 smithi master centos 8.stream rados/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.stream_container_tools} 2-node-mgr agent/on orchestrator_cli} 2
pass 6573409 2021-12-20 19:04:20 2021-12-20 19:05:03 2021-12-20 19:37:25 0:32:22 0:19:04 0:13:18 smithi master ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{ubuntu_latest} tasks/progress} 2
fail 6573410 2021-12-20 19:04:21 2021-12-20 19:05:04 2021-12-20 19:20:55 0:15:51 0:06:18 0:09:33 smithi master centos 8.3 rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.3_container_tools_3.0} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi086 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573411 2021-12-20 19:04:22 2021-12-20 19:05:04 2021-12-20 19:21:58 0:16:54 0:07:59 0:08:55 smithi master centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi052 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573412 2021-12-20 19:04:22 2021-12-20 19:05:05 2021-12-20 19:34:23 0:29:18 0:21:13 0:08:05 smithi master rhel 8.4 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_5925} 2
fail 6573413 2021-12-20 19:04:23 2021-12-20 19:05:05 2021-12-20 19:21:26 0:16:21 0:05:20 0:11:01 smithi master centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Command failed on smithi057 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573414 2021-12-20 19:04:24 2021-12-20 19:05:06 2021-12-20 19:25:42 0:20:36 0:11:08 0:09:28 smithi master rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi022 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573415 2021-12-20 19:04:25 2021-12-20 19:05:06 2021-12-20 19:39:30 0:34:24 0:24:29 0:09:55 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/cache-agent-small} 2
pass 6573416 2021-12-20 19:04:25 2021-12-20 19:05:07 2021-12-20 19:27:02 0:21:55 0:08:50 0:13:05 smithi master ubuntu 20.04 rados/singleton/{all/mon-config-key-caps mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} 1
fail 6573417 2021-12-20 19:04:26 2021-12-20 19:05:07 2021-12-20 19:21:54 0:16:47 0:05:36 0:11:11 smithi master rados/cephadm/workunits/{agent/off mon_election/classic task/test_nfs} 1
Failure Reason:

Command failed on smithi194 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573418 2021-12-20 19:04:27 2021-12-20 19:05:08 2021-12-20 19:26:07 0:20:59 0:15:16 0:05:43 smithi master centos 8.stream rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{centos_8.stream}} 1
pass 6573419 2021-12-20 19:04:28 2021-12-20 19:05:08 2021-12-20 19:32:04 0:26:56 0:16:17 0:10:39 smithi master centos 8.3 rados/standalone/{supported-random-distro$/{centos_8} workloads/mgr} 1
pass 6573420 2021-12-20 19:04:28 2021-12-20 19:05:09 2021-12-20 20:02:16 0:57:07 0:45:02 0:12:05 smithi master centos 8.3 rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
fail 6573421 2021-12-20 19:04:29 2021-12-20 19:05:10 2021-12-20 19:19:28 0:14:18 0:05:17 0:09:01 smithi master centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

Command failed on smithi150 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573422 2021-12-20 19:04:30 2021-12-20 19:05:10 2021-12-20 19:30:31 0:25:21 0:17:05 0:08:16 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream} tasks/rados_cls_all} 2
pass 6573423 2021-12-20 19:04:31 2021-12-20 19:05:11 2021-12-20 19:50:10 0:44:59 0:32:01 0:12:58 smithi master ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 6573424 2021-12-20 19:04:31 2021-12-20 19:05:12 2021-12-20 19:49:04 0:43:52 0:32:24 0:11:28 smithi master ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 6573425 2021-12-20 19:04:32 2021-12-20 19:05:12 2021-12-20 22:18:53 3:13:41 3:00:55 0:12:46 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados tasks/rados_cls_all validater/valgrind} 2
fail 6573426 2021-12-20 19:04:33 2021-12-20 19:05:13 2021-12-20 19:33:06 0:27:53 0:21:36 0:06:17 smithi master rhel 8.4 rados/cephadm/thrash/{0-distro/rhel_8.4_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi050 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573427 2021-12-20 19:04:34 2021-12-20 19:05:13 2021-12-20 21:39:12 2:33:59 2:23:41 0:10:18 smithi master centos 8.3 rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{centos_8}} 1
fail 6573428 2021-12-20 19:04:34 2021-12-20 19:05:14 2021-12-20 19:22:04 0:16:50 0:07:56 0:08:54 smithi master centos 8.stream rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi059 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

dead 6573429 2021-12-20 19:04:35 2021-12-20 19:05:14 2021-12-21 07:20:11 12:14:57 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

pass 6573430 2021-12-20 19:04:36 2021-12-20 19:05:15 2021-12-20 19:29:25 0:24:10 0:11:52 0:12:18 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} 1
fail 6573431 2021-12-20 19:04:36 2021-12-20 19:05:15 2021-12-20 19:34:51 0:29:36 0:10:42 0:18:54 smithi master rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573432 2021-12-20 19:04:37 2021-12-20 19:17:29 2021-12-20 19:44:47 0:27:18 0:21:16 0:06:02 smithi master rhel 8.4 rados/singleton/{all/mon-config mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 1
pass 6573433 2021-12-20 19:04:38 2021-12-20 19:17:30 2021-12-20 19:55:45 0:38:15 0:27:06 0:11:09 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/cache-pool-snaps} 2
pass 6573434 2021-12-20 19:04:39 2021-12-20 19:18:11 2021-12-20 19:55:34 0:37:23 0:31:28 0:05:55 smithi master centos 8.stream rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream}} 1
fail 6573435 2021-12-20 19:04:39 2021-12-20 19:18:12 2021-12-20 19:37:05 0:18:53 0:10:20 0:08:33 smithi master rhel 8.4 rados/cephadm/smoke/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi150 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573436 2021-12-20 19:04:40 2021-12-20 19:19:33 2021-12-20 19:46:22 0:26:49 0:18:34 0:08:15 smithi master centos 8.stream rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{centos_8.stream}} 1
fail 6573437 2021-12-20 19:04:41 2021-12-20 19:21:04 2021-12-20 19:38:53 0:17:49 0:10:23 0:07:26 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

Command failed on smithi158 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573438 2021-12-20 19:04:42 2021-12-20 19:21:25 2021-12-20 21:00:41 1:39:16 1:30:11 0:09:05 smithi master centos 8.3 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-radosbench} 2
fail 6573439 2021-12-20 19:04:42 2021-12-20 19:21:35 2021-12-20 19:38:39 0:17:04 0:10:48 0:06:16 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

Command failed on smithi057 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573440 2021-12-20 19:04:43 2021-12-20 19:21:36 2021-12-20 19:49:06 0:27:30 0:20:33 0:06:57 smithi master rhel 8.4 rados/cephadm/thrash/{0-distro/rhel_8.4_container_tools_rhel8 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi074 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573441 2021-12-20 19:04:44 2021-12-20 19:21:47 2021-12-20 19:49:07 0:27:20 0:21:00 0:06:20 smithi master rhel 8.4 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{rhel_8} tasks/workunits} 2
pass 6573442 2021-12-20 19:04:45 2021-12-20 19:21:47 2021-12-20 19:40:06 0:18:19 0:06:37 0:11:42 smithi master ubuntu 20.04 rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} 3
pass 6573443 2021-12-20 19:04:46 2021-12-20 19:22:08 2021-12-20 20:02:15 0:40:07 0:31:00 0:09:07 smithi master centos 8.3 rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} 1
fail 6573444 2021-12-20 19:04:46 2021-12-20 19:22:08 2021-12-20 19:36:17 0:14:09 0:03:41 0:10:28 smithi master ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi110 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573445 2021-12-20 19:04:47 2021-12-20 19:22:29 2021-12-20 20:18:01 0:55:32 0:49:29 0:06:03 smithi master rhel 8.4 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
pass 6573446 2021-12-20 19:04:48 2021-12-20 19:22:40 2021-12-20 20:02:00 0:39:20 0:30:45 0:08:35 smithi master centos 8.3 rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 6573447 2021-12-20 19:04:49 2021-12-20 19:22:40 2021-12-20 20:28:03 1:05:23 0:56:13 0:09:10 smithi master centos 8.3 rados/standalone/{supported-random-distro$/{centos_8} workloads/misc} 1
dead 6573448 2021-12-20 19:04:49 2021-12-20 19:22:41 2021-12-20 19:38:55 0:16:14 0:04:04 0:12:10 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

{'Failure object was': {'smithi027.front.sepia.ceph.com': {'msg': 'Failed to update apt cache: ', 'invocation': {'module_args': {'dpkg_options': 'force-confdef,force-confold', 'autoremove': False, 'force': False, 'force_apt_get': False, 'policy_rc_d': 'None', 'package': 'None', 'autoclean': False, 'install_recommends': 'None', 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'upgrade': 'None', 'update_cache': True, 'default_release': 'None', 'only_upgrade': False, 'deb': 'None', 'cache_valid_time': 0}}, '_ansible_no_log': False, 'attempts': 24, 'changed': False}}}

Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
    log.error(yaml.safe_dump(failure))
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump
    return dump_all([data], stream, Dumper=SafeDumper, **kwds)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all
    dumper.represent(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent
    node = self.represent_data(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 117, in represent_mapping
    node_key = self.represent_data(item_key)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[None](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined
    raise RepresenterError("cannot represent an object", data)
yaml.representer.RepresenterError: ('cannot represent an object', '_ansible_no_log')

pass 6573449 2021-12-20 19:04:50 2021-12-20 19:22:51 2021-12-20 19:54:52 0:32:01 0:22:07 0:09:54 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache-snaps} 2
fail 6573450 2021-12-20 19:04:51 2021-12-20 19:23:32 2021-12-20 19:41:52 0:18:20 0:06:13 0:12:07 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi099.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

pass 6573451 2021-12-20 19:04:52 2021-12-20 19:23:42 2021-12-20 19:38:30 0:14:48 0:08:05 0:06:43 smithi master centos 8.stream rados/objectstore/{backends/fusestore supported-random-distro$/{centos_8.stream}} 1
fail 6573452 2021-12-20 19:04:52 2021-12-20 19:24:13 2021-12-20 19:41:26 0:17:13 0:05:32 0:11:41 smithi master ubuntu 20.04 rados/cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi029 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573453 2021-12-20 19:04:53 2021-12-20 19:24:43 2021-12-20 19:40:29 0:15:46 0:05:59 0:09:47 smithi master centos 8.2 rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Command failed on smithi092 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573454 2021-12-20 19:04:54 2021-12-20 19:25:34 2021-12-20 19:41:23 0:15:49 0:06:12 0:09:37 smithi master centos 8.2 rados/cephadm/smoke/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi016 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573455 2021-12-20 19:04:55 2021-12-20 19:25:35 2021-12-20 19:47:13 0:21:38 0:09:07 0:12:31 smithi master ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 6573456 2021-12-20 19:04:56 2021-12-20 19:26:06 2021-12-20 19:41:29 0:15:23 0:06:14 0:09:09 smithi master centos 8.2 rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573457 2021-12-20 19:04:56 2021-12-20 19:53:58 0:21:27 smithi master rhel 8.4 rados/singleton/{all/peer mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 1
dead 6573458 2021-12-20 19:04:57 2021-12-20 19:26:37 2021-12-21 07:41:28 12:14:51 smithi master centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
Failure Reason:

hit max job timeout

fail 6573459 2021-12-20 19:04:58 2021-12-20 19:27:48 2021-12-20 19:57:49 0:30:01 0:21:47 0:08:14 smithi master rhel 8.4 rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi032 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573460 2021-12-20 19:04:59 2021-12-20 19:51:25 0:16:54 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} tasks/rados_stress_watch} 2
pass 6573461 2021-12-20 19:04:59 2021-12-20 19:27:59 2021-12-20 19:46:25 0:18:26 0:07:27 0:10:59 smithi master rados/cephadm/workunits/{agent/on mon_election/classic task/test_adoption} 1
fail 6573462 2021-12-20 19:05:00 2021-12-20 19:28:40 2021-12-20 19:46:56 0:18:16 0:06:23 0:11:53 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573463 2021-12-20 19:05:01 2021-12-20 19:29:30 2021-12-20 20:23:31 0:54:01 0:44:24 0:09:37 smithi master ubuntu 20.04 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/rados_mon_osdmap_prune} 2
pass 6573464 2021-12-20 19:05:02 2021-12-20 19:50:26 0:09:36 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} 1
pass 6573465 2021-12-20 19:05:03 2021-12-20 19:47:58 0:11:05 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} thrashers/none thrashosds-health workloads/dedup-io-mixed} 2
dead 6573466 2021-12-20 19:05:04 2021-12-20 19:30:33 2021-12-21 07:44:26 12:13:53 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

pass 6573467 2021-12-20 19:05:04 2021-12-20 20:07:40 0:27:50 smithi master rhel 8.4 rados/cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.4_container_tools_rhel8} 2-node-mgr agent/off orchestrator_cli} 2
pass 6573468 2021-12-20 19:05:05 2021-12-20 19:50:47 0:08:16 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 6573469 2021-12-20 19:05:06 2021-12-20 19:33:15 2021-12-20 19:48:33 0:15:18 0:06:04 0:09:14 smithi master centos 8.3 rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.3_container_tools_3.0} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi129 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573470 2021-12-20 19:05:07 2021-12-20 19:33:15 2021-12-20 19:55:32 0:22:17 0:14:28 0:07:49 smithi master centos 8.stream rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream}} 1
fail 6573471 2021-12-20 19:05:07 2021-12-20 19:34:06 2021-12-20 19:54:07 0:20:01 0:09:31 0:10:30 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi168 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573472 2021-12-20 19:05:08 2021-12-20 19:34:26 2021-12-20 19:53:50 0:19:24 0:12:45 0:06:39 smithi master centos 8.stream rados/objectstore/{backends/keyvaluedb supported-random-distro$/{centos_8.stream}} 1
fail 6573473 2021-12-20 19:05:09 2021-12-20 19:34:37 2021-12-20 19:52:16 0:17:39 0:06:23 0:11:16 smithi master centos 8.3 rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573474 2021-12-20 19:05:10 2021-12-20 19:34:58 2021-12-20 20:47:14 1:12:16 1:01:24 0:10:52 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/mon_recovery validater/valgrind} 2
pass 6573475 2021-12-20 19:05:11 2021-12-20 19:35:28 2021-12-20 20:26:26 0:50:58 0:42:06 0:08:52 smithi master rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 6573476 2021-12-20 19:05:11 2021-12-20 19:36:59 2021-12-20 20:17:28 0:40:29 0:30:12 0:10:17 smithi master ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 6573477 2021-12-20 19:05:12 2021-12-20 19:37:10 2021-12-20 20:57:56 1:20:46 1:14:56 0:05:50 smithi master rhel 8.4 rados/standalone/{supported-random-distro$/{rhel_8} workloads/mon} 1
fail 6573478 2021-12-20 19:05:13 2021-12-20 19:37:10 2021-12-20 19:54:14 0:17:04 0:08:03 0:09:01 smithi master centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi043 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573479 2021-12-20 19:05:14 2021-12-20 19:38:31 2021-12-20 19:49:20 0:10:49 0:04:57 0:05:52 smithi master centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

Command failed on smithi057 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573480 2021-12-20 19:05:15 2021-12-20 20:04:33 0:15:36 smithi master centos 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{centos_8} tasks/failover} 2
pass 6573481 2021-12-20 19:05:15 2021-12-20 19:39:02 2021-12-20 20:20:38 0:41:36 0:31:53 0:09:43 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/pool-snaps-few-objects} 2
fail 6573482 2021-12-20 19:05:16 2021-12-20 19:39:03 2021-12-20 19:50:59 0:11:56 0:05:00 0:06:56 smithi master centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

Command failed on smithi055 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573483 2021-12-20 19:05:17 2021-12-20 19:39:33 2021-12-20 19:55:12 0:15:39 0:09:23 0:06:16 smithi master centos 8.stream rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{centos_8.stream}} 1
fail 6573484 2021-12-20 19:05:18 2021-12-20 19:39:54 2021-12-20 19:51:43 0:11:49 0:04:55 0:06:54 smithi master centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi072 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573485 2021-12-20 19:05:18 2021-12-20 19:39:54 2021-12-20 20:01:58 0:22:04 0:12:25 0:09:39 smithi master ubuntu 20.04 rados/singleton/{all/radostool mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
fail 6573486 2021-12-20 19:05:19 2021-12-20 19:40:15 2021-12-20 19:51:42 0:11:27 0:04:59 0:06:28 smithi master centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

Command failed on smithi059 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573487 2021-12-20 19:05:20 2021-12-20 19:40:16 2021-12-20 19:59:51 0:19:35 0:09:09 0:10:26 smithi master centos 8.3 rados/cephadm/thrash/{0-distro/centos_8.3_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi092 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573488 2021-12-20 19:05:21 2021-12-20 19:40:36 2021-12-20 20:08:57 0:28:21 0:21:09 0:07:12 smithi master centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/ec-small-objects-fast-read} 2
fail 6573489 2021-12-20 19:05:21 2021-12-20 19:41:27 2021-12-20 20:00:46 0:19:19 0:09:02 0:10:17 smithi master centos 8.2 rados/cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi029 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573490 2021-12-20 19:05:22 2021-12-20 19:41:27 2021-12-20 19:58:40 0:17:13 0:10:55 0:06:18 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573491 2021-12-20 19:05:23 2021-12-20 19:41:38 2021-12-20 20:03:54 0:22:16 0:09:34 0:12:42 smithi master ubuntu 20.04 rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} 2
pass 6573492 2021-12-20 19:05:24 2021-12-20 19:43:39 2021-12-20 19:59:35 0:15:56 0:04:37 0:11:19 smithi master rados/cephadm/workunits/{agent/on mon_election/classic task/test_cephadm_repos} 1
pass 6573493 2021-12-20 19:05:24 2021-12-20 19:43:39 2021-12-20 20:22:32 0:38:53 0:27:58 0:10:55 smithi master ubuntu 20.04 rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{ubuntu_latest}} 1
pass 6573494 2021-12-20 19:05:25 2021-12-20 19:44:50 2021-12-20 20:01:28 0:16:38 0:09:17 0:07:21 smithi master centos 8.stream rados/singleton-nomsgr/{all/crushdiff mon_election/classic rados supported-random-distro$/{centos_8.stream}} 1
fail 6573495 2021-12-20 19:05:26 2021-12-20 19:46:31 2021-12-20 19:57:52 0:11:21 0:04:58 0:06:23 smithi master centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573496 2021-12-20 19:05:26 2021-12-20 19:47:01 2021-12-20 20:19:55 0:32:54 0:26:02 0:06:52 smithi master rhel 8.4 rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 1
pass 6573497 2021-12-20 19:05:27 2021-12-20 19:47:02 2021-12-20 20:21:10 0:34:08 0:25:42 0:08:26 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/radosbench-high-concurrency} 2
pass 6573498 2021-12-20 19:05:28 2021-12-20 19:47:23 2021-12-20 20:09:20 0:21:57 0:10:55 0:11:02 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} 1
pass 6573499 2021-12-20 19:05:29 2021-12-20 19:47:24 2021-12-20 20:30:03 0:42:39 0:30:12 0:12:27 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_big} 2
fail 6573500 2021-12-20 19:05:29 2021-12-20 19:48:04 2021-12-20 20:03:53 0:15:49 0:07:35 0:08:14 smithi master centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi022 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573501 2021-12-20 19:05:30 2021-12-20 19:48:35 2021-12-21 00:14:55 4:26:20 4:18:53 0:07:27 smithi master centos 8.stream rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/osd-backfill} 1
pass 6573502 2021-12-20 19:05:31 2021-12-20 19:49:06 2021-12-20 20:32:47 0:43:41 0:34:07 0:09:34 smithi master centos 8.3 rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} 1
fail 6573503 2021-12-20 19:05:32 2021-12-20 19:49:06 2021-12-20 20:06:33 0:17:27 0:10:23 0:07:04 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

Command failed on smithi137 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573504 2021-12-20 19:05:32 2021-12-20 19:49:17 2021-12-20 20:12:33 0:23:16 0:13:13 0:10:03 smithi master centos 8.3 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 6573505 2021-12-20 19:05:33 2021-12-20 19:49:27 2021-12-20 20:06:49 0:17:22 0:10:46 0:06:36 smithi master rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Command failed on smithi064 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573506 2021-12-20 19:05:34 2021-12-20 19:49:38 2021-12-20 20:07:24 0:17:46 0:10:20 0:07:26 smithi master rhel 8.4 rados/cephadm/smoke/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi175 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573507 2021-12-20 19:05:35 2021-12-20 19:50:19 2021-12-20 20:04:55 0:14:36 0:03:39 0:10:57 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

Command failed on smithi085 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573508 2021-12-20 19:05:35 2021-12-20 19:50:29 2021-12-20 20:17:13 0:26:44 0:21:41 0:05:03 smithi master rhel 8.4 rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 6573509 2021-12-20 19:05:36 2021-12-20 19:50:50 2021-12-20 20:05:48 0:14:58 0:09:04 0:05:54 smithi master centos 8.stream rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream}} 2
dead 6573510 2021-12-20 19:05:37 2021-12-20 19:51:10 2021-12-21 08:05:01 12:13:51 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6573511 2021-12-20 19:05:38 2021-12-20 19:51:31 2021-12-20 20:26:40 0:35:09 0:22:36 0:12:33 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/flannel rook/1.7.0} 3
Failure Reason:

'wait for toolbox' reached maximum tries (100) after waiting for 500 seconds

pass 6573512 2021-12-20 19:05:38 2021-12-20 19:51:52 2021-12-20 20:23:19 0:31:27 0:24:28 0:06:59 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/redirect} 2
pass 6573513 2021-12-20 19:05:39 2021-12-20 19:51:52 2021-12-20 20:41:57 0:50:05 0:42:06 0:07:59 smithi master rhel 8.4 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/snaps-few-objects} 2
fail 6573514 2021-12-20 19:05:40 2021-12-20 19:52:23 2021-12-20 20:09:20 0:16:57 0:07:40 0:09:17 smithi master centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi096 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573515 2021-12-20 19:05:41 2021-12-20 19:53:54 2021-12-20 20:35:45 0:41:51 0:32:06 0:09:45 smithi master centos 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{centos_8} tasks/module_selftest} 2
pass 6573516 2021-12-20 19:05:41 2021-12-20 19:54:14 2021-12-20 20:37:59 0:43:45 0:37:43 0:06:02 smithi master rhel 8.4 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-small-objects-overwrites} 2
fail 6573517 2021-12-20 19:05:42 2021-12-20 19:54:25 2021-12-20 20:10:08 0:15:43 0:06:06 0:09:37 smithi master centos 8.2 rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Command failed on smithi039 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573518 2021-12-20 19:05:43 2021-12-20 19:54:25 2021-12-20 22:33:12 2:38:47 2:15:39 0:23:08 smithi master ubuntu 20.04 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{ubuntu_latest}} 1
dead 6573519 2021-12-20 19:05:44 2021-12-20 19:54:56 2021-12-21 08:16:15 12:21:19 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
Failure Reason:

hit max job timeout

fail 6573520 2021-12-20 19:05:44 2021-12-20 19:55:17 2021-12-20 20:10:49 0:15:32 0:07:33 0:07:59 smithi master centos 8.stream rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi100 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573521 2021-12-20 19:05:45 2021-12-20 19:55:37 2021-12-20 22:51:26 2:55:49 2:45:22 0:10:27 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados tasks/rados_cls_all validater/valgrind} 2
pass 6573522 2021-12-20 19:05:46 2021-12-20 19:55:48 2021-12-20 20:25:00 0:29:12 0:18:57 0:10:15 smithi master centos 8.3 rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 1
fail 6573523 2021-12-20 19:05:47 2021-12-20 19:55:48 2021-12-20 20:15:02 0:19:14 0:10:47 0:08:27 smithi master rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573524 2021-12-20 19:05:47 2021-12-20 19:57:59 2021-12-20 20:37:53 0:39:54 0:30:42 0:09:12 smithi master centos 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 6573525 2021-12-20 19:05:48 2021-12-20 19:58:40 2021-12-20 20:39:40 0:41:00 0:31:10 0:09:50 smithi master centos 8.3 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 6573526 2021-12-20 19:05:49 2021-12-20 19:58:51 2021-12-20 20:15:10 0:16:19 0:07:42 0:08:37 smithi master centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{filestore-xfs} tasks/e2e} 2
Failure Reason:

Command failed on smithi062 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573527 2021-12-20 19:05:50 2021-12-20 19:59:42 2021-12-20 20:17:53 0:18:11 0:07:15 0:10:56 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 6573528 2021-12-20 19:05:50 2021-12-20 19:59:52 2021-12-20 20:16:22 0:16:30 0:06:26 0:10:04 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

Command failed on smithi029 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573529 2021-12-20 19:05:51 2021-12-20 20:00:53 2021-12-20 20:28:29 0:27:36 0:20:34 0:07:02 smithi master rhel 8.4 rados/cephadm/thrash/{0-distro/rhel_8.4_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi094 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573530 2021-12-20 19:05:52 2021-12-20 20:01:33 2021-12-20 20:28:01 0:26:28 0:15:17 0:11:11 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/redirect_set_object} 2
pass 6573531 2021-12-20 19:05:53 2021-12-20 20:02:04 2021-12-20 23:22:04 3:20:00 3:11:53 0:08:07 smithi master centos 8.stream rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/osd} 1
pass 6573532 2021-12-20 19:05:53 2021-12-20 20:02:25 2021-12-20 20:29:41 0:27:16 0:15:13 0:12:03 smithi master centos 8.2 rados/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.2_container_tools_3.0} 2-node-mgr agent/on orchestrator_cli} 2
fail 6573533 2021-12-20 19:05:54 2021-12-20 20:03:55 2021-12-20 20:17:56 0:14:01 0:03:30 0:10:31 smithi master ubuntu 20.04 rados/cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi129 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573534 2021-12-20 19:05:55 2021-12-20 20:03:56 2021-12-20 20:25:25 0:21:29 0:11:02 0:10:27 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} 1
fail 6573535 2021-12-20 19:05:55 2021-12-20 20:03:57 2021-12-20 20:19:23 0:15:26 0:05:35 0:09:51 smithi master rados/cephadm/workunits/{agent/on mon_election/classic task/test_orch_cli} 1
Failure Reason:

Command failed on smithi052 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573536 2021-12-20 19:05:56 2021-12-20 20:03:57 2021-12-20 20:58:26 0:54:29 0:43:12 0:11:17 smithi master centos 8.3 rados/singleton/{all/thrash-eio mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} 2
pass 6573537 2021-12-20 19:05:57 2021-12-20 20:04:38 2021-12-20 20:38:32 0:33:54 0:26:18 0:07:36 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} tasks/rados_workunit_loadgen_mostlyread} 2
pass 6573538 2021-12-20 19:05:58 2021-12-20 20:04:58 2021-12-20 20:25:47 0:20:49 0:09:51 0:10:58 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 6573539 2021-12-20 19:05:58 2021-12-20 20:05:49 2021-12-20 20:43:59 0:38:10 0:27:25 0:10:45 smithi master centos 8.3 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-small-objects} 2
fail 6573540 2021-12-20 19:05:59 2021-12-20 20:06:40 2021-12-20 20:20:48 0:14:08 0:03:40 0:10:28 smithi master ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

Command failed on smithi064 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573541 2021-12-20 19:06:00 2021-12-20 20:07:01 2021-12-20 20:21:27 0:14:26 0:03:44 0:10:42 smithi master ubuntu 20.04 rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi175 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573542 2021-12-20 19:06:01 2021-12-20 20:07:31 2021-12-20 23:02:14 2:54:43 2:28:24 0:26:19 smithi master rhel 8.4 rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{rhel_8}} 1
fail 6573543 2021-12-20 19:06:01 2021-12-20 20:07:32 2021-12-20 20:18:38 0:11:06 0:04:57 0:06:09 smithi master centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

Command failed on smithi060 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573544 2021-12-20 19:06:02 2021-12-20 20:07:42 2021-12-20 20:36:20 0:28:38 0:21:08 0:07:30 smithi master rhel 8.4 rados/cephadm/thrash/{0-distro/rhel_8.4_container_tools_rhel8 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi016 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573545 2021-12-20 19:06:03 2021-12-20 20:09:03 2021-12-20 20:24:23 0:15:20 0:08:35 0:06:45 smithi master centos 8.stream rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} tasks/mon_clock_with_skews} 3
fail 6573546 2021-12-20 19:06:04 2021-12-20 20:09:24 2021-12-20 20:22:17 0:12:53 0:05:03 0:07:50 smithi master centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

Command failed on smithi039 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573547 2021-12-20 19:06:04 2021-12-20 20:10:15 2021-12-20 20:50:07 0:39:52 0:33:09 0:06:43 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/small-objects-balanced} 2
fail 6573548 2021-12-20 19:06:05 2021-12-20 20:10:56 2021-12-20 20:27:24 0:16:28 0:05:53 0:10:35 smithi master centos 8.2 rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi117 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573549 2021-12-20 19:06:06 2021-12-20 20:12:36 2021-12-20 20:28:31 0:15:55 0:07:08 0:08:47 smithi master centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573550 2021-12-20 19:06:07 2021-12-20 20:15:07 2021-12-20 20:48:48 0:33:41 0:22:46 0:10:55 smithi master centos 8.3 rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 2
pass 6573551 2021-12-20 19:06:07 2021-12-20 20:15:18 2021-12-20 20:38:25 0:23:07 0:14:03 0:09:04 smithi master centos 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} supported-random-distro$/{centos_8} tasks/prometheus} 2
pass 6573552 2021-12-20 19:06:08 2021-12-20 20:15:28 2021-12-20 20:51:40 0:36:12 0:24:18 0:11:54 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 2
pass 6573553 2021-12-20 19:06:09 2021-12-20 20:15:29 2021-12-20 20:55:11 0:39:42 0:29:23 0:10:19 smithi master ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
pass 6573554 2021-12-20 19:06:10 2021-12-20 20:16:30 2021-12-20 20:37:13 0:20:43 0:12:14 0:08:29 smithi master centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 6573555 2021-12-20 19:06:10 2021-12-20 20:17:31 2021-12-20 20:45:33 0:28:02 0:20:22 0:07:40 smithi master rhel 8.4 rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi092 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

dead 6573556 2021-12-20 19:06:11 2021-12-20 20:18:01 2021-12-21 08:31:31 12:13:30 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6573557 2021-12-20 19:06:12 2021-12-20 20:18:02 2021-12-20 20:36:00 0:17:58 0:10:51 0:07:07 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

Command failed on smithi060 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573558 2021-12-20 19:06:13 2021-12-20 20:18:43 2021-12-20 23:11:59 2:53:16 2:47:17 0:05:59 smithi master centos 8.stream rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/scrub} 1
fail 6573559 2021-12-20 19:06:13 2021-12-20 20:19:33 2021-12-20 20:38:02 0:18:29 0:05:25 0:13:04 smithi master ubuntu 20.04 rados/cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi158 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573560 2021-12-20 19:06:14 2021-12-20 20:20:44 2021-12-20 20:48:28 0:27:44 0:18:06 0:09:38 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/small-objects} 2
pass 6573561 2021-12-20 19:06:15 2021-12-20 20:20:55 2021-12-20 20:45:48 0:24:53 0:19:41 0:05:12 smithi master rhel 8.4 rados/singleton/{all/admin-socket mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
pass 6573562 2021-12-20 19:06:16 2021-12-20 20:20:55 2021-12-20 20:48:57 0:28:02 0:20:56 0:07:06 smithi master rhel 8.4 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/one workloads/rados_5925} 2
fail 6573563 2021-12-20 19:06:16 2021-12-20 20:21:16 2021-12-20 20:36:55 0:15:39 0:06:10 0:09:29 smithi master centos 8.3 rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Command failed on smithi175 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573564 2021-12-20 19:06:17 2021-12-20 20:21:37 2021-12-20 21:00:30 0:38:53 0:28:18 0:10:35 smithi master ubuntu 20.04 rados/objectstore/{backends/objectstore-filestore-memstore supported-random-distro$/{ubuntu_latest}} 1
fail 6573565 2021-12-20 19:06:18 2021-12-20 20:22:27 2021-12-20 20:37:18 0:14:51 0:06:17 0:08:34 smithi master centos 8.3 rados/cephadm/smoke/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi073 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573566 2021-12-20 19:06:19 2021-12-20 20:22:38 2021-12-20 20:39:16 0:16:38 0:09:12 0:07:26 smithi master centos 8.stream rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{centos_8.stream}} 1
fail 6573567 2021-12-20 19:06:19 2021-12-20 20:23:28 2021-12-20 20:40:33 0:17:05 0:10:47 0:06:18 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

Command failed on smithi058 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573568 2021-12-20 19:06:20 2021-12-20 20:23:39 2021-12-20 20:43:19 0:19:40 0:08:58 0:10:42 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_write} 1
pass 6573569 2021-12-20 19:06:21 2021-12-20 20:23:40 2021-12-20 20:50:43 0:27:03 0:16:03 0:11:00 smithi master rados/cephadm/workunits/{agent/on mon_election/classic task/test_cephadm} 1
fail 6573570 2021-12-20 19:06:22 2021-12-20 20:24:30 2021-12-20 20:38:12 0:13:42 0:03:43 0:09:59 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

Command failed on smithi082 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573571 2021-12-20 19:06:22 2021-12-20 20:24:31 2021-12-21 03:17:31 6:53:00 6:42:30 0:10:30 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi022 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5940a5d44c977cc4dd7c5ff7bda7c23212bb59de TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 6573572 2021-12-20 19:06:23 2021-12-20 20:25:31 2021-12-20 21:06:11 0:40:40 0:32:20 0:08:20 smithi master centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 6573573 2021-12-20 19:06:24 2021-12-20 20:26:32 2021-12-20 21:08:03 0:41:31 0:33:30 0:08:01 smithi master centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 6573574 2021-12-20 19:06:24 2021-12-20 20:26:43 2021-12-20 20:46:31 0:19:48 0:09:15 0:10:33 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi072 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573575 2021-12-20 19:06:25 2021-12-20 20:26:43 2021-12-20 20:43:07 0:16:24 0:05:52 0:10:32 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/master} 1
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi133.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

fail 6573576 2021-12-20 19:06:26 2021-12-20 20:27:34 2021-12-20 20:39:01 0:11:27 0:05:03 0:06:24 smithi master centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi003 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573577 2021-12-20 19:06:27 2021-12-20 20:28:05 2021-12-20 20:55:21 0:27:16 0:20:00 0:07:16 smithi master rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/repair_test} 2
pass 6573578 2021-12-20 19:06:27 2021-12-20 20:28:05 2021-12-20 20:55:10 0:27:05 0:20:53 0:06:12 smithi master rhel 8.4 rados/singleton/{all/deduptool mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 1
pass 6573579 2021-12-20 19:06:28 2021-12-20 20:28:06 2021-12-20 21:09:57 0:41:51 0:30:44 0:11:07 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/snaps-few-objects-localized} 2
pass 6573580 2021-12-20 19:06:29 2021-12-20 20:28:36 2021-12-20 20:47:51 0:19:15 0:09:33 0:09:42 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 6573581 2021-12-20 19:06:30 2021-12-20 20:28:37 2021-12-20 20:43:56 0:15:19 0:05:57 0:09:22 smithi master centos 8.2 rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

Command failed on smithi094 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

dead 6573582 2021-12-20 19:06:31 2021-12-20 20:28:37 2021-12-21 08:51:31 12:22:54 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
Failure Reason:

hit max job timeout

fail 6573583 2021-12-20 19:06:31 2021-12-20 20:29:48 2021-12-20 20:45:51 0:16:03 0:05:28 0:10:35 smithi master ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi037 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573584 2021-12-20 19:06:32 2021-12-20 20:30:08 2021-12-20 21:56:56 1:26:48 1:15:29 0:11:19 smithi master centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/dashboard} 2
pass 6573585 2021-12-20 19:06:33 2021-12-20 20:35:50 2021-12-20 21:07:29 0:31:39 0:24:35 0:07:04 smithi master rhel 8.4 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} supported-random-distro$/{rhel_8} tasks/crash} 2
pass 6573586 2021-12-20 19:06:34 2021-12-20 20:36:10 2021-12-20 20:53:53 0:17:43 0:09:30 0:08:13 smithi master centos 8.stream rados/objectstore/{backends/alloc-hint supported-random-distro$/{centos_8.stream}} 1
pass 6573587 2021-12-20 19:06:34 2021-12-20 20:36:11 2021-12-20 21:00:07 0:23:56 0:14:10 0:09:46 smithi master centos 8.3 rados/rest/{mgr-restful supported-random-distro$/{centos_8}} 1
pass 6573588 2021-12-20 19:06:35 2021-12-20 20:36:21 2021-12-20 21:14:31 0:38:10 0:32:55 0:05:15 smithi master rhel 8.4 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 6573589 2021-12-20 19:06:36 2021-12-20 20:36:22 2021-12-20 20:58:23 0:22:01 0:13:59 0:08:02 smithi master centos 8.stream rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/c2c} 1
fail 6573590 2021-12-20 19:06:37 2021-12-20 20:37:03 2021-12-20 21:16:41 0:39:38 0:32:46 0:06:52 smithi master centos 8.stream rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_cmpomap.sh) on smithi073 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_cmpomap.sh'

pass 6573591 2021-12-20 19:06:37 2021-12-20 20:37:23 2021-12-20 21:14:53 0:37:30 0:26:34 0:10:56 smithi master centos 8.3 rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
fail 6573592 2021-12-20 19:06:38 2021-12-20 20:37:24 2021-12-20 20:57:06 0:19:42 0:09:17 0:10:25 smithi master centos 8.3 rados/cephadm/thrash/{0-distro/centos_8.3_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi050 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573593 2021-12-20 19:06:39 2021-12-20 20:37:24 2021-12-20 21:19:10 0:41:46 0:30:47 0:10:59 smithi master centos 8.3 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
pass 6573594 2021-12-20 19:06:40 2021-12-20 20:37:25 2021-12-20 21:25:13 0:47:48 0:40:31 0:07:17 smithi master rhel 8.4 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/minsize_recovery thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
fail 6573595 2021-12-20 19:06:40 2021-12-20 20:37:56 2021-12-20 20:49:40 0:11:44 0:04:52 0:06:52 smithi master centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

Command failed on smithi158 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573596 2021-12-20 19:06:41 2021-12-20 20:38:06 2021-12-20 20:59:36 0:21:30 0:10:49 0:10:41 smithi master centos 8.3 rados/singleton/{all/divergent_priors2 mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
fail 6573597 2021-12-20 19:06:42 2021-12-20 20:38:07 2021-12-20 20:48:30 0:10:23 0:05:04 0:05:19 smithi master centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi018 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573598 2021-12-20 19:06:43 2021-12-20 20:38:07 2021-12-20 20:53:57 0:15:50 0:06:16 0:09:34 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

Command failed on smithi082 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573599 2021-12-20 19:06:43 2021-12-20 20:38:18 2021-12-20 21:05:55 0:27:37 0:20:39 0:06:58 smithi master rhel 8.4 rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/mon_clock_no_skews} 2
pass 6573600 2021-12-20 19:06:44 2021-12-20 20:38:29 2021-12-20 21:06:22 0:27:53 0:17:24 0:10:29 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/write_fadvise_dontneed} 2
dead 6573601 2021-12-20 19:06:45 2021-12-20 20:38:39 2021-12-21 08:52:50 12:14:11 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

pass 6573602 2021-12-20 19:06:45 2021-12-20 20:39:10 2021-12-20 21:00:20 0:21:10 0:14:16 0:06:54 smithi master centos 8.stream rados/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.stream_container_tools_crun} 2-node-mgr agent/off orchestrator_cli} 2
fail 6573603 2021-12-20 19:06:46 2021-12-20 20:39:50 2021-12-20 20:51:17 0:11:27 0:04:46 0:06:41 smithi master centos 8.stream rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi111 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573604 2021-12-20 19:06:47 2021-12-20 20:39:51 2021-12-20 20:58:51 0:19:00 0:13:15 0:05:45 smithi master centos 8.stream rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{centos_8.stream}} 1
pass 6573605 2021-12-20 19:06:48 2021-12-20 20:40:41 2021-12-20 21:09:30 0:28:49 0:20:19 0:08:30 smithi master centos 8.3 rados/singleton/{all/ec-inconsistent-hinfo mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 1
fail 6573606 2021-12-20 19:06:48 2021-12-20 20:40:42 2021-12-20 20:53:28 0:12:46 0:04:58 0:07:48 smithi master centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573607 2021-12-20 19:06:49 2021-12-20 20:42:03 2021-12-20 21:03:30 0:21:27 0:12:24 0:09:03 smithi master centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 6573608 2021-12-20 19:06:50 2021-12-20 20:44:04 2021-12-20 21:04:56 0:20:52 0:11:57 0:08:55 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_fio} 1
fail 6573609 2021-12-20 19:06:51 2021-12-20 20:44:04 2021-12-20 20:59:35 0:15:31 0:07:32 0:07:59 smithi master centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi133 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573610 2021-12-20 19:06:51 2021-12-20 20:44:05 2021-12-20 21:02:19 0:18:14 0:10:22 0:07:52 smithi master rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi092 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573611 2021-12-20 19:06:52 2021-12-20 20:45:35 2021-12-20 21:01:29 0:15:54 0:05:17 0:10:37 smithi master rados/cephadm/workunits/{agent/on mon_election/classic task/test_nfs} 1
Failure Reason:

Command failed on smithi037 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573612 2021-12-20 19:06:53 2021-12-20 20:45:56 2021-12-20 21:07:45 0:21:49 0:11:11 0:10:38 smithi master ubuntu 20.04 rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{ubuntu_latest}} 1
pass 6573613 2021-12-20 19:06:54 2021-12-20 20:45:57 2021-12-20 21:08:04 0:22:07 0:13:54 0:08:13 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} thrashers/none thrashosds-health workloads/cache-agent-big} 2
fail 6573614 2021-12-20 19:06:54 2021-12-20 20:46:37 2021-12-20 20:58:41 0:12:04 0:04:53 0:07:11 smithi master centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

Command failed on smithi139 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

pass 6573615 2021-12-20 19:06:55 2021-12-20 20:47:18 2021-12-20 21:42:56 0:55:38 0:45:18 0:10:20 smithi master centos 8.3 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/sync workloads/rados_mon_osdmap_prune} 2
pass 6573616 2021-12-20 19:06:56 2021-12-20 20:47:59 2021-12-20 21:16:40 0:28:41 0:18:17 0:10:24 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/libcephsqlite} 2
pass 6573617 2021-12-20 19:06:57 2021-12-20 20:48:29 2021-12-20 21:03:58 0:15:29 0:05:58 0:09:31 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 6573618 2021-12-20 19:06:57 2021-12-20 20:48:40 2021-12-20 21:18:24 0:29:44 0:18:40 0:11:04 smithi master ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/crush} 1
pass 6573619 2021-12-20 19:06:58 2021-12-20 20:48:41 2021-12-20 21:04:18 0:15:37 0:07:47 0:07:50 smithi master centos 8.stream rados/singleton/{all/erasure-code-nonregression mon_election/classic msgr-failures/none msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream}} 1
fail 6573620 2021-12-20 19:06:59 2021-12-20 20:48:51 2021-12-20 21:08:43 0:19:52 0:08:54 0:10:58 smithi master centos 8.3 rados/cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi121 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573621 2021-12-20 19:07:00 2021-12-20 20:49:02 2021-12-20 21:07:11 0:18:09 0:10:20 0:07:49 smithi master rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Command failed on smithi158 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573622 2021-12-20 19:07:00 2021-12-20 20:49:42 2021-12-20 21:07:50 0:18:08 0:10:16 0:07:52 smithi master rhel 8.4 rados/cephadm/smoke/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi100 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573623 2021-12-20 19:07:01 2021-12-20 20:50:13 2021-12-20 21:10:50 0:20:37 0:08:28 0:12:09 smithi master centos 8.3 rados/thrash-old-clients/{0-distro$/{centos_8.3_container_tools_3.0} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/default thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi062 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'

fail 6573624 2021-12-20 19:07:02 2021-12-20 20:51:24 2021-12-20 21:09:22 0:17:58 0:10:51 0:07:07 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

Command failed on smithi023 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5940a5d44c977cc4dd7c5ff7bda7c23212bb59de pull'