Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
dead 6055968 2021-04-18 16:06:05 2021-04-18 16:06:53 2021-04-18 16:28:14 0:21:21 0:11:07 0:10:14 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/set-chunks-read} 2
Failure Reason:

Failure object was:
  smithi148.front.sepia.ceph.com:
    module_stdout: ''
    module_stderr: ''
    msg: 'MODULE FAILURE\\nSee stdout/stderr for the exact error'
    rc: -13
    _ansible_no_log: False
    changed: False
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
    log.error(yaml.safe_dump(failure))
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump
    return dump_all([data], stream, Dumper=SafeDumper, **kwds)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all
    dumper.represent(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent
    node = self.represent_data(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 117, in represent_mapping
    node_key = self.represent_data(item_key)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[None](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined
    raise RepresenterError("cannot represent an object", data)
yaml.representer.RepresenterError: ('cannot represent an object', 'module_stderr')
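
The RepresenterError above comes from the failure_log callback handing the whole Ansible failure dict to yaml.safe_dump(). SafeDumper only knows how to represent plain built-in types, so a non-plain object inside the dict (here whatever sits behind 'module_stderr') falls through to represent_undefined and raises. A minimal reproduction sketch, with a hypothetical str subclass standing in for the offending object:

    # Minimal sketch, not the teuthology/Ansible code. SafeDumper registers
    # representers only for exact built-in types, so a str subclass has no
    # representer and yaml.safe_dump() raises RepresenterError on it.
    import yaml

    class WrappedText(str):
        """Hypothetical stand-in for the non-plain object in the failure dict."""

    failure = {'smithi148.front.sepia.ceph.com': {WrappedText('module_stderr'): ''}}

    try:
        yaml.safe_dump(failure)
    except yaml.representer.RepresenterError as err:
        print(err)  # ('cannot represent an object', 'module_stderr')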

pass 6055969 2021-04-18 16:06:06 2021-04-18 16:06:53 2021-04-18 16:33:49 0:26:56 0:21:07 0:05:49 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/rgw 3-final} 2
fail 6055970 2021-04-18 16:06:07 2021-04-18 16:06:54 2021-04-18 16:38:09 0:31:15 0:15:47 0:15:28 smithi master centos 8.3 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

"2021-04-18T16:33:55.065562+0000 mgr.y (mgr.4101) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6055971 2021-04-18 16:06:08 2021-04-18 16:06:54 2021-04-18 16:33:49 0:26:55 0:21:03 0:05:52 smithi master rhel 8.3 rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

"2021-04-18T16:30:00.888944+0000 mgr.x (mgr.10020) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6055972 2021-04-18 16:06:09 2021-04-18 16:06:54 2021-04-18 16:47:57 0:41:03 0:26:34 0:14:29 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/small-objects-balanced} 2
Failure Reason:

"2021-04-18T16:32:19.875975+0000 mgr.y (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6055973 2021-04-18 16:06:10 2021-04-18 16:06:54 2021-04-18 18:22:10 2:15:16 1:51:39 0:23:37 smithi master centos 8.3 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} 1
pass 6055974 2021-04-18 16:06:11 2021-04-18 16:06:56 2021-04-18 16:42:55 0:35:59 0:21:07 0:14:52 smithi master ubuntu 20.04 rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
fail 6055975 2021-04-18 16:06:12 2021-04-18 16:06:56 2021-04-18 16:46:04 0:39:08 0:26:57 0:12:11 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/small-objects-localized} 2
Failure Reason:

"2021-04-18T16:31:00.908177+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6055976 2021-04-18 16:06:13 2021-04-18 16:06:57 2021-04-18 16:24:16 0:17:19 0:07:31 0:09:48 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
fail 6055977 2021-04-18 16:06:14 2021-04-18 16:06:57 2021-04-18 16:43:57 0:37:00 0:23:16 0:13:44 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/small-objects} 2
Failure Reason:

"2021-04-18T16:31:19.850448+0000 mgr.y (mgr.4111) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6055978 2021-04-18 16:06:15 2021-04-18 16:06:57 2021-04-18 16:54:20 0:47:23 0:30:53 0:16:30 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/default thrashosds-health workloads/rbd_cls} 3
fail 6055979 2021-04-18 16:06:16 2021-04-18 16:06:58 2021-04-18 16:37:50 0:30:52 0:23:16 0:07:36 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI)

pass 6055980 2021-04-18 16:06:17 2021-04-18 16:06:58 2021-04-18 17:06:21 0:59:23 0:48:04 0:11:19 smithi master ubuntu 20.04 rados/standalone/{mon_election/connectivity supported-random-distro$/{ubuntu_latest} workloads/mon} 1
fail 6055981 2021-04-18 16:06:17 2021-04-18 16:06:58 2021-04-18 16:50:19 0:43:21 0:30:12 0:13:09 smithi master ubuntu 20.04 rados/upgrade/pacific-x/rgw-multisite/{clusters frontend overrides realm supported-random-distro$/{ubuntu_latest} tasks upgrade/secondary} 2
Failure Reason:

rgw multisite test failures

fail 6055982 2021-04-18 16:06:18 2021-04-18 16:06:59 2021-04-18 16:48:56 0:41:57 0:32:02 0:09:55 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/snaps-few-objects-balanced} 2
Failure Reason:

"2021-04-18T16:25:00.113539+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6055983 2021-04-18 16:06:19 2021-04-18 16:06:59 2021-04-18 16:44:55 0:37:56 0:28:20 0:09:36 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_18.04 workloads/cosbench_64K_write} 1
pass 6055984 2021-04-18 16:06:20 2021-04-18 16:07:00 2021-04-18 16:36:03 0:29:03 0:21:00 0:08:03 smithi master centos 8.3 rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} 1
pass 6055985 2021-04-18 16:06:21 2021-04-18 16:07:00 2021-04-18 16:54:21 0:47:21 0:35:48 0:11:33 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
fail 6055986 2021-04-18 16:06:22 2021-04-18 16:07:00 2021-04-18 16:54:19 0:47:19 0:40:11 0:07:08 smithi master rhel 8.3 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

"2021-04-18T16:34:39.482053+0000 mgr.x (mgr.5998) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6055987 2021-04-18 16:06:23 2021-04-18 16:07:00 2021-04-18 16:52:19 0:45:19 0:36:49 0:08:30 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/snaps-few-objects-localized} 2
Failure Reason:

"2021-04-18T16:31:17.344653+0000 mgr.x (mgr.4105) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6055988 2021-04-18 16:06:24 2021-04-18 16:07:01 2021-04-18 16:31:49 0:24:48 0:13:14 0:11:34 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 6055989 2021-04-18 16:06:25 2021-04-18 16:07:01 2021-04-18 16:58:21 0:51:20 0:39:55 0:11:25 smithi master centos 8.2 rados/cephadm/with-work/{0-distro/centos_8.2_kubic_stable fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
fail 6055990 2021-04-18 16:06:26 2021-04-18 16:07:01 2021-04-18 16:52:56 0:45:55 0:30:31 0:15:24 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/snaps-few-objects} 2
Failure Reason:

"2021-04-18T16:32:09.376103+0000 mgr.y (mgr.4153) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6055991 2021-04-18 16:06:27 2021-04-18 16:07:02 2021-04-18 16:36:03 0:29:01 0:16:13 0:12:48 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/readwrite} 2
Failure Reason:

"2021-04-18T16:29:45.480740+0000 mgr.y (mgr.4100) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6055992 2021-04-18 16:06:28 2021-04-18 16:07:02 2021-04-18 16:45:57 0:38:55 0:28:59 0:09:56 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/write_fadvise_dontneed} 2
Failure Reason:

"2021-04-18T16:35:06.381074+0000 mgr.x (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6055993 2021-04-18 16:06:29 2021-04-18 16:07:02 2021-04-18 16:30:02 0:23:00 0:12:12 0:10:48 smithi master ubuntu 20.04 rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 2
pass 6055994 2021-04-18 16:06:30 2021-04-18 16:07:03 2021-04-18 16:35:49 0:28:46 0:18:43 0:10:03 smithi master centos 8.2 rados/cephadm/smoke/{distro/centos_8.2_kubic_stable fixed-2 mon_election/connectivity start} 2
fail 6055995 2021-04-18 16:06:31 2021-04-18 16:07:03 2021-04-18 17:14:49 1:07:46 0:59:41 0:08:05 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/admin_socket_objecter_requests} 2
Failure Reason:

"2021-04-18T16:31:40.118751+0000 mgr.x (mgr.4102) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6055996 2021-04-18 16:06:32 2021-04-18 16:07:05 2021-04-18 16:32:26 0:25:21 0:16:06 0:09:15 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 1-start 2-services/basic 3-final} 2
pass 6055997 2021-04-18 16:06:33 2021-04-18 16:07:06 2021-04-18 16:40:04 0:32:58 0:23:16 0:09:42 smithi master rhel 8.3 rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 6055998 2021-04-18 16:06:33 2021-04-18 16:07:06 2021-04-18 16:52:21 0:45:15 0:32:52 0:12:23 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/cache-agent-big} 2
Failure Reason:

"2021-04-18T16:37:05.786385+0000 mgr.y (mgr.4100) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6055999 2021-04-18 16:06:35 2021-04-18 16:07:06 2021-04-18 16:34:25 0:27:19 0:15:08 0:12:11 smithi master ubuntu 20.04 rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} 2
Failure Reason:

"2021-04-18T16:29:36.476190+0000 mgr.x (mgr.4114) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056000 2021-04-18 16:06:36 2021-04-18 16:07:07 2021-04-18 16:52:20 0:45:13 0:33:13 0:12:00 smithi master centos 8.3 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

"2021-04-18T16:29:24.825154+0000 mgr.x (mgr.4114) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056001 2021-04-18 16:06:36 2021-04-18 16:07:07 2021-04-18 16:35:49 0:28:42 0:16:55 0:11:47 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-agent-small} 2
Failure Reason:

"2021-04-18T16:26:44.538486+0000 mgr.y (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056002 2021-04-18 16:06:37 2021-04-18 16:07:07 2021-04-18 16:36:45 0:29:38 0:16:12 0:13:26 smithi master centos 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
fail 6056003 2021-04-18 16:06:38 2021-04-18 16:07:08 2021-04-18 16:39:29 0:32:21 0:16:45 0:15:36 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} 2
Failure Reason:

"2021-04-18T16:32:38.093146+0000 mgr.x (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056004 2021-04-18 16:06:39 2021-04-18 16:07:08 2021-04-18 16:36:12 0:29:04 0:17:07 0:11:57 smithi master centos 8.2 rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.2_kubic_stable} 1-start 2-services/rgw 3-final} 1
dead 6056005 2021-04-18 16:06:40 2021-04-18 16:07:08 2021-04-18 16:26:27 0:19:19 0:10:14 0:09:05 smithi master rhel 8.3 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/sync-many workloads/rados_mon_workunits} 2
Failure Reason:

Failure object was:
  smithi193.front.sepia.ceph.com:
    module_stdout: ''
    module_stderr: ''
    msg: 'MODULE FAILURE\\nSee stdout/stderr for the exact error'
    rc: -13
    _ansible_no_log: False
    changed: False
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
    log.error(yaml.safe_dump(failure))
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump
    return dump_all([data], stream, Dumper=SafeDumper, **kwds)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all
    dumper.represent(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent
    node = self.represent_data(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 117, in represent_mapping
    node_key = self.represent_data(item_key)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[None](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_2713a3cd31b17738a50039eaa9d859b5dc39fb8a/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined
    raise RepresenterError("cannot represent an object", data)
yaml.representer.RepresenterError: ('cannot represent an object', 'module_stderr')

pass 6056006 2021-04-18 16:06:41 2021-04-18 16:07:09 2021-04-18 16:32:49 0:25:40 0:14:22 0:11:18 smithi master centos 8.3 rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 1
pass 6056007 2021-04-18 16:06:42 2021-04-18 16:07:09 2021-04-18 16:43:57 0:36:48 0:25:20 0:11:28 smithi master centos 8.3 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-small-objects-many-deletes} 2
fail 6056008 2021-04-18 16:06:43 2021-04-18 16:07:09 2021-04-18 16:36:19 0:29:10 0:15:17 0:13:53 smithi master ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-lz4 supported-random-distro$/{ubuntu_latest} tasks/prometheus} 2
Failure Reason:

"2021-04-18T16:31:24.465664+0000 mgr.z (mgr.4100) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056009 2021-04-18 16:06:44 2021-04-18 16:07:10 2021-04-18 16:52:56 0:45:46 0:31:16 0:14:30 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/cache-pool-snaps-readproxy} 2
Failure Reason:

"2021-04-18T16:32:41.488202+0000 mgr.x (mgr.4105) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056010 2021-04-18 16:06:45 2021-04-18 16:07:10 2021-04-18 18:06:39 1:59:29 1:50:27 0:09:02 smithi master rhel 8.3 rados/standalone/{mon_election/classic supported-random-distro$/{rhel_8} workloads/osd-backfill} 1
pass 6056011 2021-04-18 16:06:46 2021-04-18 16:07:10 2021-04-18 16:34:18 0:27:08 0:15:19 0:11:49 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_18.04 workloads/fio_4K_rand_read} 1
fail 6056012 2021-04-18 16:06:47 2021-04-18 16:07:10 2021-04-18 16:41:57 0:34:47 0:24:21 0:10:26 smithi master rhel 8.3 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

"2021-04-18T16:37:18.485359+0000 mgr.x (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056013 2021-04-18 16:06:47 2021-04-18 16:07:11 2021-04-18 16:50:19 0:43:08 0:29:41 0:13:27 smithi master centos 8.3 rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/classic} 2
pass 6056014 2021-04-18 16:06:48 2021-04-18 16:07:11 2021-04-18 18:51:09 2:43:58 2:13:56 0:30:02 smithi master centos 8.3 rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8}} 1
fail 6056015 2021-04-18 16:06:49 2021-04-18 16:07:11 2021-04-18 16:50:19 0:43:08 0:33:31 0:09:37 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/cache-pool-snaps} 2
Failure Reason:

"2021-04-18T16:34:45.635649+0000 mgr.x (mgr.4126) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056016 2021-04-18 16:06:50 2021-04-18 16:07:12 2021-04-18 16:39:49 0:32:37 0:20:12 0:12:25 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 6056017 2021-04-18 16:06:51 2021-04-18 16:07:12 2021-04-18 16:28:29 0:21:17 0:11:43 0:09:34 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_adoption} 1
pass 6056018 2021-04-18 16:06:52 2021-04-18 16:07:12 2021-04-18 16:47:57 0:40:45 0:27:05 0:13:40 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache-snaps-balanced} 2
dead 6056019 2021-04-18 16:06:53 2021-04-18 16:07:13 2021-04-18 16:23:09 0:15:56 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
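
This job died before the test even started: reimaging the nodes never completed within the retry budget. The message is consistent with a bounded poll loop, roughly like the sketch below (not teuthology's implementation; the 15-second interval is an assumption inferred from 60 tries over 900 seconds):

    # Illustrative bounded-wait loop; not teuthology's code.
    import time

    MAX_TRIES = 60
    INTERVAL = 15  # seconds; assumed from 60 tries over 900 seconds

    def wait_for_reimage(node_is_up, max_tries=MAX_TRIES, interval=INTERVAL):
        """Poll until the node answers, or give up after max_tries polls."""
        for _ in range(max_tries):
            if node_is_up():
                return
            time.sleep(interval)
        raise RuntimeError(
            'Error reimaging machines: reached maximum tries (%d) '
            'after waiting for %d seconds' % (max_tries, max_tries * interval))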

pass 6056020 2021-04-18 16:06:54 2021-04-18 16:07:13 2021-04-18 16:36:10 0:28:57 0:18:06 0:10:51 smithi master ubuntu 20.04 rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} 1
fail 6056021 2021-04-18 16:06:56 2021-04-18 16:07:14 2021-04-18 16:52:20 0:45:06 0:34:31 0:10:35 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/cache-snaps} 2
Failure Reason:

"2021-04-18T16:34:52.095578+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056022 2021-04-18 16:06:56 2021-04-18 16:07:14 2021-04-18 17:00:20 0:53:06 0:38:45 0:14:21 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
fail 6056023 2021-04-18 16:06:57 2021-04-18 16:07:14 2021-04-18 16:35:49 0:28:35 0:14:27 0:14:08 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache} 2
Failure Reason:

"2021-04-18T16:30:56.582175+0000 mgr.y (mgr.4123) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056024 2021-04-18 16:06:58 2021-04-18 16:07:14 2021-04-18 16:41:56 0:34:42 0:20:56 0:13:46 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/repair_test} 2
Failure Reason:

"2021-04-18T16:30:23.729470+0000 mgr.y (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056025 2021-04-18 16:06:59 2021-04-18 16:07:15 2021-04-18 16:36:47 0:29:32 0:21:20 0:08:12 smithi master rhel 8.3 rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/classic start} 2
pass 6056026 2021-04-18 16:07:00 2021-04-18 16:07:15 2021-04-18 16:52:55 0:45:40 0:35:05 0:10:35 smithi master rhel 8.3 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{rhel_8}} 2
fail 6056027 2021-04-18 16:07:01 2021-04-18 16:07:15 2021-04-18 16:35:50 0:28:35 0:16:34 0:12:01 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/dedup-io-mixed} 2
Failure Reason:

"2021-04-18T16:29:15.702468+0000 mgr.y (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056028 2021-04-18 16:07:02 2021-04-18 16:07:16 2021-04-18 16:56:19 0:49:03 0:39:13 0:09:50 smithi master rhel 8.3 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
pass 6056029 2021-04-18 16:07:05 2021-04-18 16:07:16 2021-04-18 17:43:52 1:36:36 1:24:51 0:11:45 smithi master centos 8.3 rados/singleton/{all/thrash-backfill-full mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 2
fail 6056030 2021-04-18 16:07:06 2021-04-18 16:07:16 2021-04-18 16:43:57 0:36:41 0:19:44 0:16:57 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/dedup-io-snaps} 2
Failure Reason:

"2021-04-18T16:32:21.072423+0000 mgr.y (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056031 2021-04-18 16:07:07 2021-04-18 16:07:17 2021-04-18 16:37:50 0:30:33 0:20:52 0:09:41 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_cephadm} 1
fail 6056032 2021-04-18 16:07:08 2021-04-18 16:07:17 2021-04-18 17:00:19 0:53:02 0:43:16 0:09:46 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/pool-snaps-few-objects} 2
Failure Reason:

"2021-04-18T16:34:47.357525+0000 mgr.x (mgr.4099) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056033 2021-04-18 16:07:09 2021-04-18 16:07:17 2021-04-18 18:01:55 1:54:38 1:44:54 0:09:44 smithi master centos 8.3 rados/standalone/{mon_election/connectivity supported-random-distro$/{centos_8} workloads/osd} 1
pass 6056034 2021-04-18 16:07:10 2021-04-18 16:07:18 2021-04-18 16:38:11 0:30:53 0:17:45 0:13:08 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_18.04 workloads/fio_4K_rand_rw} 1
pass 6056035 2021-04-18 16:07:11 2021-04-18 16:07:18 2021-04-18 16:41:57 0:34:39 0:21:22 0:13:17 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/iscsi 3-final} 2
fail 6056036 2021-04-18 16:07:12 2021-04-18 16:07:18 2021-04-18 16:52:19 0:45:01 0:30:47 0:14:14 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

"2021-04-18T16:32:18.779502+0000 mgr.x (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056037 2021-04-18 16:07:13 2021-04-18 16:07:19 2021-04-18 16:40:25 0:33:06 0:22:36 0:10:30 smithi master centos 8.3 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 6056038 2021-04-18 16:07:14 2021-04-18 16:07:19 2021-04-18 16:52:20 0:45:01 0:35:51 0:09:10 smithi master rhel 8.3 rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
fail 6056039 2021-04-18 16:07:15 2021-04-18 16:07:19 2021-04-18 17:31:50 1:24:31 1:11:26 0:13:05 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/radosbench-high-concurrency} 2
Failure Reason:

"2021-04-18T16:31:59.619304+0000 mgr.x (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056040 2021-04-18 16:07:16 2021-04-18 16:07:20 2021-04-18 16:52:56 0:45:36 0:33:46 0:11:50 smithi master centos 8.3 rados/singleton/{all/thrash-eio mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} 2
fail 6056041 2021-04-18 16:07:17 2021-04-18 16:07:20 2021-04-18 18:01:55 1:54:35 1:40:31 0:14:04 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/radosbench} 2
Failure Reason:

"2021-04-18T16:32:28.664841+0000 mgr.x (mgr.4102) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056042 2021-04-18 16:07:18 2021-04-18 16:07:20 2021-04-18 16:50:19 0:42:59 0:14:56 0:28:03 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
pass 6056043 2021-04-18 16:07:19 2021-04-18 16:23:22 2021-04-18 16:43:57 0:20:35 0:07:52 0:12:43 smithi master centos 8.3 rados/multimon/{clusters/9 mon_election/classic msgr-failures/many msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} 3
fail 6056044 2021-04-18 16:07:20 2021-04-18 16:24:23 2021-04-18 16:56:20 0:31:57 0:19:58 0:11:59 smithi master ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

"2021-04-18T16:41:34.666779+0000 mgr.x (mgr.4126) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056045 2021-04-18 16:07:21 2021-04-18 16:26:36 2021-04-18 17:04:21 0:37:45 0:24:36 0:13:09 smithi master centos 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 6056046 2021-04-18 16:07:22 2021-04-18 16:28:36 2021-04-18 17:25:08 0:56:32 0:44:49 0:11:43 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

"2021-04-18T16:57:04.233392+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056047 2021-04-18 16:07:23 2021-04-18 16:30:08 2021-04-18 17:12:49 0:42:41 0:33:46 0:08:55 smithi master rhel 8.3 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/sync workloads/snaps-few-objects} 2
fail 6056048 2021-04-18 16:07:24 2021-04-18 16:32:28 2021-04-18 17:00:56 0:28:28 0:17:48 0:10:40 smithi master ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects} 2
Failure Reason:

"2021-04-18T16:48:10.279566+0000 mgr.y (mgr.4100) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056049 2021-04-18 16:07:25 2021-04-18 16:32:59 2021-04-18 16:58:20 0:25:21 0:17:29 0:07:52 smithi master rhel 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-snappy supported-random-distro$/{rhel_8} tasks/workunits} 2
Failure Reason:

"2021-04-18T16:55:48.087248+0000 mgr.y (mgr.4114) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056050 2021-04-18 16:07:26 2021-04-18 16:33:59 2021-04-18 16:56:20 0:22:21 0:11:57 0:10:24 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/redirect} 2
Failure Reason:

"2021-04-18T16:49:22.026374+0000 mgr.y (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056051 2021-04-18 16:07:27 2021-04-18 16:34:29 2021-04-18 17:00:22 0:25:53 0:10:49 0:15:04 smithi master ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

"2021-04-18T16:55:47.287383+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056052 2021-04-18 16:07:27 2021-04-18 16:35:57 2021-04-18 17:16:49 0:40:52 0:30:19 0:10:33 smithi master ubuntu 20.04 rados/objectstore/{backends/objectstore-filestore-memstore supported-random-distro$/{ubuntu_latest}} 1
pass 6056053 2021-04-18 16:07:28 2021-04-18 16:35:58 2021-04-18 16:58:56 0:22:58 0:17:51 0:05:07 smithi master rhel 8.3 rados/singleton-nomsgr/{all/pool-access mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 6056054 2021-04-18 16:07:29 2021-04-18 16:35:58 2021-04-18 17:16:49 0:40:51 0:29:00 0:11:51 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/connectivity} 2
pass 6056055 2021-04-18 16:07:30 2021-04-18 16:35:58 2021-04-18 17:06:21 0:30:23 0:21:58 0:08:25 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/redirect_promote_tests} 2
pass 6056056 2021-04-18 16:07:31 2021-04-18 16:35:59 2021-04-18 17:15:11 0:39:12 0:27:02 0:12:10 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
pass 6056057 2021-04-18 16:07:32 2021-04-18 16:36:09 2021-04-18 17:04:21 0:28:12 0:18:13 0:09:59 smithi master ubuntu 20.04 rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 2
fail 6056058 2021-04-18 16:07:33 2021-04-18 16:36:09 2021-04-18 17:06:22 0:30:13 0:22:33 0:07:40 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/redirect_set_object} 2
Failure Reason:

"2021-04-18T16:59:48.251492+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056059 2021-04-18 16:07:34 2021-04-18 16:36:20 2021-04-18 16:58:58 0:22:38 0:15:02 0:07:36 smithi master rhel 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/scrub_test} 2
Failure Reason:

"2021-04-18T16:54:00.507022+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056060 2021-04-18 16:07:35 2021-04-18 16:36:20 2021-04-18 16:52:20 0:16:00 0:07:14 0:08:46 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_cephadm_repos} 1
pass 6056061 2021-04-18 16:07:36 2021-04-18 16:36:20 2021-04-18 17:02:57 0:26:37 0:15:55 0:10:42 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/set-chunks-read} 2
pass 6056062 2021-04-18 16:07:36 2021-04-18 16:36:51 2021-04-18 17:23:09 0:46:18 0:34:49 0:11:29 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
pass 6056063 2021-04-18 16:07:37 2021-04-18 16:36:51 2021-04-18 18:36:11 1:59:20 1:52:51 0:06:29 smithi master rhel 8.3 rados/standalone/{mon_election/classic supported-random-distro$/{rhel_8} workloads/scrub} 1
pass 6056064 2021-04-18 16:07:38 2021-04-18 16:36:51 2021-04-18 16:54:57 0:18:06 0:09:28 0:08:38 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_18.04 workloads/fio_4M_rand_read} 1
fail 6056065 2021-04-18 16:07:39 2021-04-18 16:37:52 2021-04-18 17:18:49 0:40:57 0:32:58 0:07:59 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/small-objects-balanced} 2
Failure Reason:

"2021-04-18T17:00:48.277622+0000 mgr.y (mgr.4101) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056066 2021-04-18 16:07:40 2021-04-18 16:38:12 2021-04-18 17:10:58 0:32:46 0:25:09 0:07:37 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 6056067 2021-04-18 16:07:41 2021-04-18 16:38:13 2021-04-18 17:14:59 0:36:46 0:31:30 0:05:16 smithi master rhel 8.3 rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 2
pass 6056068 2021-04-18 16:07:42 2021-04-18 16:38:13 2021-04-18 17:04:50 0:26:37 0:14:43 0:11:54 smithi master centos 8.2 rados/cephadm/smoke-roleless/{0-distro/centos_8.2_kubic_stable 1-start 2-services/mirror 3-final} 2
fail 6056069 2021-04-18 16:07:43 2021-04-18 16:39:39 2021-04-18 17:14:51 0:35:12 0:28:53 0:06:19 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/small-objects-localized} 2
Failure Reason:

"2021-04-18T17:01:55.975344+0000 mgr.y (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056070 2021-04-18 16:07:44 2021-04-18 16:39:50 2021-04-18 17:14:49 0:34:59 0:23:02 0:11:57 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/small-objects} 2
Failure Reason:

"2021-04-18T16:56:22.810944+0000 mgr.y (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056071 2021-04-18 16:07:44 2021-04-18 16:40:30 2021-04-18 17:29:34 0:49:04 0:35:55 0:13:09 smithi master ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
fail 6056072 2021-04-18 16:07:45 2021-04-18 16:42:05 2021-04-18 17:19:11 0:37:06 0:26:44 0:10:22 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/snaps-few-objects-balanced} 2
Failure Reason:

"2021-04-18T16:58:01.564096+0000 mgr.x (mgr.4117) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056073 2021-04-18 16:07:46 2021-04-18 16:42:05 2021-04-18 16:59:10 0:17:05 0:08:23 0:08:42 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 6056074 2021-04-18 16:07:47 2021-04-18 16:42:05 2021-04-18 17:08:48 0:26:43 0:15:42 0:11:01 smithi master ubuntu 20.04 rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
pass 6056075 2021-04-18 16:07:48 2021-04-18 16:42:06 2021-04-18 17:00:20 0:18:14 0:07:45 0:10:29 smithi master centos 8.3 rados/singleton/{all/watch-notify-same-primary mon_election/connectivity msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} 1
fail 6056076 2021-04-18 16:08:43 2021-04-18 16:42:06 2021-04-18 17:16:50 0:34:44 0:23:48 0:10:56 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/snaps-few-objects-localized} 2
Failure Reason:

"2021-04-18T16:58:30.109190+0000 mgr.y (mgr.4119) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056077 2021-04-18 16:08:56 2021-04-18 16:43:05 2021-04-18 17:19:13 0:36:08 0:26:01 0:10:07 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 2
Failure Reason:

"2021-04-18T16:59:53.009207+0000 mgr.x (mgr.4099) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056078 2021-04-18 16:09:56 2021-04-18 16:44:05 2021-04-18 17:14:52 0:30:47 0:21:41 0:09:06 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_orch_cli} 1
fail 6056079 2021-04-18 16:09:57 2021-04-18 16:44:06 2021-04-18 17:14:55 0:30:49 0:23:26 0:07:23 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/write_fadvise_dontneed} 2
Failure Reason:

"2021-04-18T17:05:23.222660+0000 mgr.x (mgr.4105) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056080 2021-04-18 16:09:58 2021-04-18 16:44:06 2021-04-18 17:41:52 0:57:46 0:48:52 0:08:54 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
fail 6056081 2021-04-18 16:09:59 2021-04-18 16:44:06 2021-04-18 17:09:15 0:25:09 0:15:39 0:09:30 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/libcephsqlite} 2
Failure Reason:

"2021-04-18T16:59:28.369882+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056082 2021-04-18 16:10:00 2021-04-18 16:44:07 2021-04-18 18:13:27 1:29:20 1:18:38 0:10:42 smithi master rhel 8.3 rados/dashboard/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} supported-random-distro$/{rhel_8} tasks/dashboard} 2
Failure Reason:

"2021-04-18T17:07:43.109583+0000 mgr.y (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056083 2021-04-18 16:10:01 2021-04-18 16:46:07 2021-04-18 17:12:48 0:26:41 0:19:47 0:06:54 smithi master rhel 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-zlib supported-random-distro$/{rhel_8} tasks/crash} 2
Failure Reason:

"2021-04-18T17:07:38.218530+0000 mgr.z (mgr.4120) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056084 2021-04-18 16:10:02 2021-04-18 16:46:07 2021-04-18 17:27:34 0:41:27 0:28:57 0:12:30 smithi master ubuntu 20.04 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/snaps-few-objects} 2
Failure Reason:

"2021-04-18T17:03:29.967286+0000 mgr.x (mgr.4125) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056085 2021-04-18 16:10:03 2021-04-18 16:47:58 2021-04-18 17:07:17 0:19:19 0:07:37 0:11:42 smithi master centos 8.3 rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} 3
pass 6056086 2021-04-18 16:10:03 2021-04-18 16:47:59 2021-04-18 17:04:18 0:16:19 0:06:49 0:09:30 smithi master ubuntu 20.04 rados/objectstore/{backends/alloc-hint supported-random-distro$/{ubuntu_latest}} 1
pass 6056087 2021-04-18 16:10:04 2021-04-18 16:48:59 2021-04-18 17:10:48 0:21:49 0:11:53 0:09:56 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_18.04 workloads/fio_4M_rand_rw} 1
pass 6056088 2021-04-18 16:10:05 2021-04-18 16:48:59 2021-04-18 17:06:18 0:17:19 0:07:49 0:09:30 smithi master ubuntu 20.04 rados/rest/{mgr-restful supported-random-distro$/{ubuntu_latest}} 1
pass 6056089 2021-04-18 16:10:06 2021-04-18 16:50:29 2021-04-18 17:06:18 0:15:49 0:06:51 0:08:58 smithi master ubuntu 20.04 rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
fail 6056090 2021-04-18 16:10:07 2021-04-18 16:50:29 2021-04-18 17:35:52 0:45:23 0:38:40 0:06:43 smithi master rhel 8.3 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

"2021-04-18T17:15:21.100804+0000 mgr.x (mgr.6292) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056091 2021-04-18 16:10:08 2021-04-18 16:50:30 2021-04-18 17:21:08 0:30:38 0:25:32 0:05:06 smithi master rhel 8.3 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 6056092 2021-04-18 16:10:09 2021-04-18 16:50:30 2021-04-18 17:17:20 0:26:50 0:20:43 0:06:07 smithi master rhel 8.3 rados/standalone/{mon_election/classic supported-random-distro$/{rhel_8} workloads/crush} 1
fail 6056093 2021-04-18 16:10:10 2021-04-18 16:50:30 2021-04-18 17:16:48 0:26:18 0:16:50 0:09:28 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/admin_socket_objecter_requests} 2
Failure Reason:

"2021-04-18T17:06:29.393281+0000 mgr.x (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056094 2021-04-18 16:10:11 2021-04-18 16:50:31 2021-04-18 17:31:50 0:41:19 0:30:30 0:10:49 smithi master ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 2
Failure Reason:

"2021-04-18T17:05:44.565363+0000 mgr.y (mgr.4098) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056095 2021-04-18 16:10:12 2021-04-18 16:50:31 2021-04-18 17:21:08 0:30:37 0:20:45 0:09:52 smithi master rhel 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
fail 6056096 2021-04-18 16:10:12 2021-04-18 16:52:22 2021-04-18 17:33:51 0:41:29 0:33:32 0:07:57 smithi master rhel 8.3 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

"2021-04-18T17:15:03.976438+0000 mgr.y (mgr.4100) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056097 2021-04-18 16:10:13 2021-04-18 16:52:22 2021-04-18 17:29:34 0:37:12 0:26:54 0:10:18 smithi master centos 8.3 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
Failure Reason:

"2021-04-18T17:07:56.356184+0000 mgr.y (mgr.4106) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056098 2021-04-18 16:10:14 2021-04-18 16:52:23 2021-04-18 17:21:08 0:28:45 0:20:05 0:08:40 smithi master rhel 8.3 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

"2021-04-18T17:15:55.028876+0000 mgr.x (mgr.4101) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056099 2021-04-18 16:10:15 2021-04-18 16:52:23 2021-04-18 17:41:52 0:49:29 0:38:46 0:10:43 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
fail 6056100 2021-04-18 16:10:16 2021-04-18 16:52:24 2021-04-18 19:03:08 2:10:44 2:04:12 0:06:32 smithi master rhel 8.3 rados/upgrade/pacific-x/parallel/{0-start 1-tasks distro1$/{rhel_8.3_kubic_stable} mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

SELinux denials found on ubuntu@smithi063.front.sepia.ceph.com: ['type=AVC msg=audit(1618770232.055:7763): avc: denied { write } for pid=90893 comm="alertmanager" path="socket:[385438]" dev="sockfs" ino=385438 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=udp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618771025.150:8125): avc: denied { write } for pid=74194 comm="msgr-worker-1" laddr=172.21.15.63 lport=60436 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770261.595:7900): avc: denied { search } for pid=74171 comm="conmon" name="/" dev="tmpfs" ino=17419 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770245.720:7824): avc: denied { read } for pid=89172 comm="node_exporter" name="mdstat" dev="proc" ino=4026532018 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_mdstat_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770247.053:7834): avc: denied { search } for pid=90893 comm="alertmanager" name="etc" dev="sda1" ino=2491791 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c21,c809 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770254.542:7840): avc: denied { getattr } for pid=74194 comm="safe_timer" path="/var/lib/ceph/mon/ceph-a/store.db" dev="dm-4" ino=16797828 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.722:7945): avc: denied { read } for pid=89172 comm="node_exporter" name="entropy_avail" dev="proc" ino=324148 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_kernel_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.811:7707): avc: denied { getattr } for pid=71401 comm="mgr-fin" path="/usr/share/ceph/mgr" dev="sda1" ino=2102548 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770223.954:7711): avc: denied { getattr } for pid=76122 comm="safe_timer" path="/var/lib/ceph/mon/ceph-c/store.db/LOCK" dev="dm-4" ino=16797840 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770220.835:7663): avc: denied { search } for pid=79399 comm="safe_timer" name="/" dev="sysfs" ino=1 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770244.306:7805): avc: denied { search } for pid=111160 comm="pgrep" name="71401" dev="proc" ino=285264 scontext=system_u:system_r:ksmtuned_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770260.579:7884): avc: denied { search } for pid=84497 comm="safe_timer" name="diff" dev="sda1" ino=2491724 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_ro_file_t:s0 tclass=dir permissive=1 
srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770223.979:7714): avc: denied { search } for pid=71378 comm="conmon" name="systemd" dev="tmpfs" ino=17428 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:init_var_run_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770232.054:7760): avc: denied { connect } for pid=90893 comm="alertmanager" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=udp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.054:7915): avc: denied { read } for pid=90893 comm="alertmanager" path="socket:[386655]" dev="sockfs" ino=386655 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=udp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.722:7748): avc: denied { read } for pid=89172 comm="node_exporter" name="file-nr" dev="proc" ino=324146 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_fs_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.986:7680): avc: denied { read } for pid=71401 comm="safe_timer" name="os-release" dev="overlay" ino=660337 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=lnk_file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.722:7742): avc: denied { search } for pid=89172 comm="node_exporter" name="/" dev="devtmpfs" ino=1025 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:device_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770224.540:7723): avc: denied { remove_name } for pid=74194 comm="rocksdb:high0" name="000160.log" dev="dm-4" ino=16797845 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770264.543:7926): avc: denied { search } for pid=74194 comm="safe_timer" name="var" dev="sda1" ino=2491651 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c53,c547 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770235.109:7766): avc: denied { shutdown } for pid=71401 comm="msgr-worker-2" laddr=172.21.15.63 lport=59614 faddr=172.21.15.63 fport=6800 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.721:7732): avc: denied { read } for pid=89172 comm="node_exporter" name="mdstat" dev="proc" ino=4026532018 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_mdstat_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.720:7824): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/mdstat" dev="proc" ino=4026532018 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_mdstat_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.053:7907): avc: denied { getattr } for pid=90893 comm="alertmanager" path="/etc/resolv.conf" dev="tmpfs" ino=323524 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=file 
permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770263.323:7921): avc: denied { search } for pid=79399 comm="osd_srv_heartbt" name="/" dev="overlay" ino=2491686 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c136,c983 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770232.054:7759): avc: denied { setopt } for pid=90893 comm="alertmanager" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=udp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770232.054:7757): avc: denied { search } for pid=90893 comm="alertmanager" name="/" dev="overlay" ino=2491781 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c21,c809 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.181:7919): avc: denied { open } for pid=71401 comm="safe_timer" path="/usr/lib/os-release" dev="sda1" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.998:7753): avc: denied { read } for pid=71401 comm="safe_timer" name="memory.limit_in_bytes" dev="cgroup" ino=22058 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cgroup_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.725:7949): avc: denied { search } for pid=89172 comm="node_exporter" name="/" dev="devtmpfs" ino=1025 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:device_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618771320.168:8194): avc: denied { write } for pid=74194 comm="msgr-worker-1" laddr=172.21.15.63 lport=60436 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770262.181:7919): avc: denied { read } for pid=71401 comm="safe_timer" name="os-release" dev="overlay" ino=660337 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=lnk_file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.720:7820): avc: denied { search } for pid=89172 comm="node_exporter" name="fs" dev="proc" ino=324145 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_fs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.722:7750): avc: denied { read } for pid=89172 comm="node_exporter" name="entropy_avail" dev="proc" ino=324148 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_kernel_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.953:7709): avc: denied { open } for pid=76122 comm="safe_timer" path="/var/lib/ceph/mon/ceph-c/store.db" dev="dm-4" ino=16797836 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.811:7707): avc: denied { getattr } for pid=71401 comm="mgr-fin" path="/usr/share/ceph/mgr" dev="overlay" ino=2102548 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=dir permissive=1 
srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770260.450:7882): avc: denied { append } for pid=71401 comm="log" path="/var/log/ceph/ceph-mgr.y.log" dev="sda1" ino=396083 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770235.109:7777): avc: denied { getattr } for pid=71401 comm="msgr-worker-0" laddr=172.21.15.63 lport=6800 faddr=172.21.15.63 fport=60436 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770256.144:7866): avc: denied { getattr } for pid=71401 comm="safe_timer" path="/sys/fs/cgroup/memory/memory.limit_in_bytes" dev="cgroup" ino=22058 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cgroup_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770259.934:7879): avc: denied { open } for pid=87023 comm="safe_timer" path="/proc/loadavg" dev="proc" ino=4026532031 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770220.578:7660): avc: denied { read } for pid=74194 comm="msgr-worker-1" path="socket:[350636]" dev="sockfs" ino=350636 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.722:7747): avc: denied { search } for pid=89172 comm="node_exporter" name="rpc" dev="proc" ino=4026532408 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_rpc_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618771270.164:8189): avc: denied { write } for pid=71401 comm="msgr-worker-0" laddr=172.21.15.63 lport=60442 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770225.721:7738): avc: denied { open } for pid=89172 comm="node_exporter" path="/sys/fs/xfs/dm-4/stats/stats" dev="sysfs" ino=45517 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770238.060:7794): avc: denied { getattr } for pid=71401 comm="safe_timer" path="/usr/lib/os-release" dev="overlay" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.721:7736): avc: denied { read } for pid=89172 comm="node_exporter" name="xfs" dev="sysfs" ino=45445 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770235.109:7770): avc: denied { name_connect } for pid=71401 comm="msgr-worker-1" dest=6800 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770255.110:7843): avc: denied { write } for pid=74194 comm="msgr-worker-1" laddr=172.21.15.63 lport=60436 faddr=172.21.15.63 fport=6800 scontext=system_u:object_r:unlabeled_t:s0 
tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.720:7860): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/1/mounts" dev="proc" ino=322167 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770244.306:7803): avc: denied { read } for pid=111160 comm="pgrep" name="status" dev="proc" ino=269948 scontext=system_u:system_r:ksmtuned_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.721:7732): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/mdstat" dev="proc" ino=4026532018 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_mdstat_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.719:7850): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/proc" dev="proc" ino=1 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.725:7951): avc: denied { search } for pid=89172 comm="node_exporter" name="var" dev="sda1" ino=1281 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.722:7748): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/sys/fs/file-nr" dev="proc" ino=324146 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_fs_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.719:7851): avc: denied { read } for pid=89172 comm="node_exporter" name="stat" dev="proc" ino=326711 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.720:7822): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/sys/devices/system/edac/mc" dev="sysfs" ino=21409 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.720:7856): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/sys/devices/pci0000:00/0000:00:1c.4/0000:07:00.0/hwmon/hwmon0/uevent" dev="sysfs" ino=35331 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618771445.175:8210): avc: denied { write } for pid=71401 comm="msgr-worker-0" laddr=172.21.15.63 lport=60442 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770255.721:7864): avc: denied { search } for pid=89172 comm="node_exporter" name="containers" dev="sda1" ino=1452 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:object_r:container_var_lib_t:s0"', 'type=AVC msg=audit(1618770255.588:7846): avc: denied { write } for pid=74194 comm="fn_monstore" 
path="pipe:[276435]" dev="pipefs" ino=276435 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.720:7940): avc: denied { search } for pid=89172 comm="node_exporter" name="fs" dev="proc" ino=324145 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_fs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.026:7725): avc: denied { write } for pid=74171 comm="conmon" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=unix_dgram_socket permissive=1 srawcon="system_u:system_r:container_runtime_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.722:7748): avc: denied { search } for pid=89172 comm="node_exporter" name="fs" dev="proc" ino=324145 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_fs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618771200.160:8164): avc: denied { write } for pid=71401 comm="msgr-worker-0" laddr=172.21.15.63 lport=60442 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770259.223:7877): avc: denied { search } for pid=79399 comm="osd_srv_heartbt" name="/" dev="overlay" ino=2491686 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c136,c983 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.721:7830): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/proc/loadavg" dev="proc" ino=4026532031 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770260.592:7886): avc: denied { write } for pid=74194 comm="safe_timer" path="/var/lib/ceph/mon/ceph-a/store.db/000166.log" dev="dm-4" ino=16797847 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770261.595:7900): avc: denied { sendto } for pid=74171 comm="conmon" path="/run/systemd/journal/socket" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=unix_dgram_socket permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770224.537:7721): avc: denied { create } for pid=74194 comm="rstore_compact" name="000163.log" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770224.537:7721): avc: denied { open } for pid=74194 comm="rstore_compact" path="/var/lib/ceph/mon/ceph-a/store.db/000163.log" dev="dm-4" ino=16797839 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770261.595:7899): avc: denied { write } for pid=74171 comm="conmon" name="null" dev="devtmpfs" ino=12 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:null_device_t:s0 tclass=chr_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.720:7933): avc: 
denied { open } for pid=89172 comm="node_exporter" path="/proc/7/net/sockstat" dev="proc" ino=4026532060 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_net_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770259.934:7879): avc: denied { read } for pid=87023 comm="safe_timer" name="loadavg" dev="proc" ino=4026532031 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770261.606:7905): avc: denied { search } for pid=87023 comm="osd_srv_heartbt" name="/" dev="overlay" ino=2491743 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c784,c795 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.998:7755): avc: denied { read } for pid=71401 comm="safe_timer" name="os-release" dev="overlay" ino=660337 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=lnk_file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770650.127:8066): avc: denied { write } for pid=74194 comm="msgr-worker-1" laddr=172.21.15.63 lport=60436 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770255.588:7849): avc: denied { write } for pid=74171 comm="conmon" name="socket" dev="tmpfs" ino=16420 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:syslogd_var_run_t:s0 tclass=sock_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770258.955:7876): avc: denied { dac_read_search } for pid=76122 comm="safe_timer" capability=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=capability permissive=1 srawcon="system_u:system_r:container_runtime_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770223.012:7700): avc: denied { append } for pid=74194 comm="log" path="/var/log/ceph/ceph-mon.a.log" dev="sda1" ino=396081 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.720:7823): avc: denied { open } for pid=89172 comm="node_exporter" path="/sys/devices/system/edac/mc" dev="sysfs" ino=21409 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770264.543:7926): avc: denied { search } for pid=74194 comm="safe_timer" name="/" dev="overlay" ino=2491643 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c53,c547 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.721:7861): avc: denied { search } for pid=89172 comm="node_exporter" name="/" dev="sda1" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770224.540:7723): avc: denied { unlink } for pid=74194 comm="rocksdb:high0" name="000160.log" dev="dm-4" ino=16797845 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.721:7825): avc: denied 
{ getattr } for pid=89172 comm="node_exporter" path="/proc/mdstat" dev="proc" ino=4026532018 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_mdstat_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618772035.110:8268): avc: denied { shutdown } for pid=74194 comm="msgr-worker-1" laddr=172.21.15.63 lport=60436 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770260.157:7881): avc: denied { search } for pid=71401 comm="safe_timer" name="container" dev="cgroup" ino=22051 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cgroup_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770256.144:7865): avc: denied { read } for pid=71401 comm="safe_timer" name="memory.limit_in_bytes" dev="cgroup" ino=22058 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cgroup_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.181:7919): avc: denied { getattr } for pid=71401 comm="safe_timer" name="os-release" dev="sda1" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770261.402:7896): avc: denied { search } for pid=81926 comm="safe_timer" name="/" dev="overlay" ino=2491705 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c246,c551 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.721:7863): avc: denied { search } for pid=89172 comm="node_exporter" name="lib" dev="sda1" ino=1282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.809:7706): avc: denied { search } for pid=71401 comm="mgr-fin" name="usr" dev="sda1" ino=1837271 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770221.021:7671): avc: denied { sendto } for pid=74171 comm="conmon" path="/run/systemd/journal/socket" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=unix_dgram_socket permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770260.579:7884): avc: denied { search } for pid=84497 comm="safe_timer" name="/" dev="proc" ino=1 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770220.578:7659): avc: denied { write } for pid=76122 comm="msgr-worker-0" laddr=172.21.15.63 lport=56864 faddr=172.21.15.63 fport=3300 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770258.955:7876): avc: denied { getattr } for pid=76122 comm="safe_timer" name="/" dev="dm-4" ino=128 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:fs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770260.579:7884): avc: denied { read } for pid=84497 
comm="safe_timer" name="loadavg" dev="proc" ino=4026532031 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.053:7908): avc: denied { create } for pid=90893 comm="alertmanager" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=udp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.722:7945): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/sys/kernel/random/entropy_avail" dev="proc" ino=324148 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_kernel_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770908.682:8104): avc: denied { write } for pid=76122 comm="msgr-worker-2" laddr=172.21.15.63 lport=60462 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770244.306:7803): avc: denied { search } for pid=111160 comm="pgrep" name="71378" dev="proc" ino=269944 scontext=system_u:system_r:ksmtuned_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.026:7724): avc: denied { open } for pid=74171 comm="conmon" path="/dev/null" dev="devtmpfs" ino=12 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:null_device_t:s0 tclass=chr_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.725:7950): avc: denied { search } for pid=89172 comm="node_exporter" name="/" dev="tmpfs" ino=17419 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.021:7705): avc: denied { read } for pid=74171 comm="conmon" path="pipe:[276435]" dev="pipefs" ino=276435 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770245.721:7826): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/sys/class/hwmon/hwmon2" dev="sysfs" ino=43612 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=lnk_file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770232.055:7764): avc: denied { read } for pid=90893 comm="alertmanager" path="socket:[384534]" dev="sockfs" ino=384534 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=udp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.720:7944): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/proc/mdstat" dev="proc" ino=4026532018 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_mdstat_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.569:7811): avc: denied { use } for pid=74194 comm="fn_monstore" path="pipe:[276435]" dev="pipefs" ino=276435 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fd permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618771150.158:8157): avc: denied { write 
} for pid=74194 comm="msgr-worker-1" laddr=172.21.15.63 lport=60436 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770245.719:7815): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/proc/7" dev="proc" ino=324713 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770283.652:7959): avc: denied { write } for pid=76122 comm="msgr-worker-2" laddr=172.21.15.63 lport=60462 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770263.650:7922): avc: denied { write } for pid=76122 comm="msgr-worker-2" laddr=172.21.15.63 lport=60462 faddr=172.21.15.63 fport=6800 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770243.649:7799): avc: denied { accept } for pid=71401 comm="msgr-worker-0" lport=6800 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770305.113:8016): avc: denied { write } for pid=74194 comm="msgr-worker-1" laddr=172.21.15.63 lport=60436 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770265.719:7927): avc: denied { search } for pid=89172 comm="node_exporter" name="/" dev="overlay" ino=2491762 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c257,c697 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.612:7813): avc: denied { search } for pid=84497 comm="osd_srv_heartbt" name="/" dev="overlay" ino=2491724 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c401,c484 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.720:7935): avc: denied { search } for pid=89172 comm="node_exporter" name="net" dev="proc" ino=324719 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_net_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770260.593:7887): avc: denied { write } for pid=74194 comm="safe_timer" path="pipe:[273197]" dev="pipefs" ino=273197 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770232.054:7758): avc: denied { create } for pid=90893 comm="alertmanager" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=udp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.720:7730): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/7/fd" dev="proc" ino=326712 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.725:7951): avc: denied { search } for pid=89172 
comm="node_exporter" name="lib" dev="sda1" ino=1282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.021:7671): avc: denied { search } for pid=74171 comm="conmon" name="journal" dev="tmpfs" ino=16412 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:syslogd_var_run_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770223.953:7709): avc: denied { search } for pid=76122 comm="safe_timer" name="mon.c" dev="dm-4" ino=8388751 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770259.543:7878): avc: denied { search } for pid=74194 comm="safe_timer" name="/" dev="overlay" ino=2491643 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c53,c547 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770260.593:7891): avc: denied { read } for pid=76122 comm="msgr-worker-0" path="socket:[347008]" dev="sockfs" ino=347008 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.986:7680): avc: denied { read } for pid=71401 comm="safe_timer" name="os-release" dev="sda1" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770262.169:7918): avc: denied { getattr } for pid=71401 comm="safe_timer" path="/sys/fs/cgroup/memory/memory.limit_in_bytes" dev="cgroup" ino=22058 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cgroup_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.722:7746): avc: denied { search } for pid=89172 comm="node_exporter" name="host" dev="sda1" ino=2491771 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c257,c697 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770252.972:7837): avc: denied { read } for pid=71401 comm="msgr-worker-2" path="pipe:[278205]" dev="pipefs" ino=278205 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.974:7678): avc: denied { read } for pid=71401 comm="safe_timer" name="memory.limit_in_bytes" dev="cgroup" ino=22058 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cgroup_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770224.538:7722): avc: denied { search } for pid=76122 comm="rstore_compact" name="/" dev="overlay" ino=2491667 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c139,c805 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770228.011:7756): avc: denied { search } for pid=71401 comm="safe_timer" name="etc" dev="sda1" ino=2491624 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=dir permissive=1 
srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770232.054:7757): avc: denied { search } for pid=90893 comm="alertmanager" name="etc" dev="sda1" ino=2491791 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c21,c809 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.722:7946): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/proc/sys/kernel/random/entropy_avail" dev="proc" ino=324148 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_kernel_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.588:7847): avc: denied { read } for pid=74171 comm="conmon" path="pipe:[276435]" dev="pipefs" ino=276435 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.720:7729): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/7/stat" dev="proc" ino=326711 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.986:7680): avc: denied { open } for pid=71401 comm="safe_timer" path="/usr/lib/os-release" dev="overlay" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.809:7706): avc: denied { read } for pid=71401 comm="mgr-fin" name="lib" dev="sda1" ino=660550 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=lnk_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.724:7947): avc: denied { search } for pid=89172 comm="node_exporter" name="1" dev="proc" ino=320072 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.720:7936): avc: denied { open } for pid=89172 comm="node_exporter" path="/sys/devices/system/edac/mc" dev="sysfs" ino=21409 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770254.542:7839): avc: denied { search } for pid=74194 comm="safe_timer" name="mon.a" dev="dm-4" ino=8388740 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.998:7754): avc: denied { getattr } for pid=71401 comm="safe_timer" path="/sys/fs/cgroup/memory/memory.limit_in_bytes" dev="cgroup" ino=22058 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cgroup_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.720:7857): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/7/net/arp" dev="proc" ino=4026532043 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_net_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770235.109:7771): avc: denied { 
name_connect } for pid=74194 comm="msgr-worker-1" dest=6800 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770265.720:7939): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/sys/devices/system/edac/mc/mc0/ce_count" dev="sysfs" ino=43800 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.021:7669): avc: denied { read } for pid=74171 comm="conmon" path="pipe:[276435]" dev="pipefs" ino=276435 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.720:7729): avc: denied { search } for pid=89172 comm="node_exporter" name="7" dev="proc" ino=324713 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.021:7671): avc: denied { search } for pid=74171 comm="conmon" name="/" dev="tmpfs" ino=17419 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.722:7744): avc: denied { search } for pid=89172 comm="node_exporter" name="lib" dev="sda1" ino=1282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.720:7935): avc: denied { search } for pid=89172 comm="node_exporter" name="sys" dev="proc" ino=4026531854 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770258.078:7872): avc: denied { search } for pid=81926 comm="osd_srv_heartbt" name="/" dev="overlay" ino=2491705 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c246,c551 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770258.157:7874): avc: denied { search } for pid=71401 comm="safe_timer" name="etc" dev="sda1" ino=2491624 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770254.542:7839): avc: denied { open } for pid=74194 comm="safe_timer" path="/var/lib/ceph/mon/ceph-a/store.db" dev="dm-4" ino=16797828 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.719:7817): avc: denied { read } for pid=89172 comm="node_exporter" name="fd" dev="proc" ino=326712 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770254.144:7838): avc: denied { read } for pid=71401 comm="safe_timer" name="os-release" dev="sda1" ino=660337 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=lnk_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 
'type=AVC msg=audit(1618770223.809:7706): avc: denied { getattr } for pid=71401 comm="mgr-fin" name="process.py" dev="sda1" ino=1970598 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.720:7729): avc: denied { read } for pid=89172 comm="node_exporter" name="stat" dev="proc" ino=326711 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.724:7751): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/proc/sys/kernel/random/entropy_avail" dev="proc" ino=324148 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_kernel_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.953:7708): avc: denied { search } for pid=76122 comm="safe_timer" name="var" dev="sda1" ino=2491675 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c139,c805 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770221.986:7681): avc: denied { getattr } for pid=71401 comm="safe_timer" path="/usr/lib/os-release" dev="overlay" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770243.649:7800): avc: denied { setopt } for pid=71401 comm="msgr-worker-0" laddr=172.21.15.63 lport=6800 faddr=172.21.15.63 fport=60462 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770257.590:7868): avc: denied { search } for pid=74171 comm="conmon" name="/" dev="tmpfs" ino=17419 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.721:7734): avc: denied { search } for pid=89172 comm="node_exporter" name="1" dev="proc" ino=320072 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770262.169:7917): avc: denied { search } for pid=71401 comm="safe_timer" name="container" dev="cgroup" ino=22051 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cgroup_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.720:7821): avc: denied { search } for pid=89172 comm="node_exporter" name="net" dev="proc" ino=324719 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_net_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.953:7708): avc: denied { search } for pid=76122 comm="safe_timer" name="/" dev="overlay" ino=2491667 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c139,c805 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.998:7753): avc: denied { open } for pid=71401 comm="safe_timer" path="/sys/fs/cgroup/memory/memory.limit_in_bytes" dev="cgroup" 
ino=22058 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cgroup_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770224.528:7719): avc: denied { read } for pid=74194 comm="safe_timer" path="/var/lib/ceph/mon/ceph-a/store.db/000162.sst" dev="dm-4" ino=16797847 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770244.524:7810): avc: denied { open } for pid=81926 comm="safe_timer" path="/sys/devices/system/cpu/online" dev="sysfs" ino=35 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cpu_online_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.021:7670): avc: denied { search } for pid=74171 comm="conmon" name="/" dev="devtmpfs" ino=1025 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:device_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770237.288:7785): avc: denied { read } for pid=79399 comm="msgr-worker-1" path="socket:[315540]" dev="sockfs" ino=315540 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770258.157:7875): avc: denied { getattr } for pid=71401 comm="safe_timer" path="/usr/lib/os-release" dev="overlay" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.953:7710): avc: denied { getattr } for pid=76122 comm="safe_timer" path="/var/lib/ceph/mon/ceph-c/store.db" dev="dm-4" ino=16797836 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.013:7703): avc: denied { read } for pid=74194 comm="msgr-worker-1" path="pipe:[273197]" dev="pipefs" ino=273197 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.721:7828): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/sys/kernel/random/entropy_avail" dev="proc" ino=324148 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_kernel_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770423.658:8040): avc: denied { write } for pid=76122 comm="msgr-worker-2" laddr=172.21.15.63 lport=60462 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770262.053:7906): avc: denied { search } for pid=90893 comm="alertmanager" name="/" dev="overlay" ino=2491781 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c21,c809 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.021:7704): avc: denied { write } for pid=74194 comm="fn_monstore" path="pipe:[276435]" dev="pipefs" ino=276435 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:spc_t:s0" 
trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618771143.650:8136): avc: denied { shutdown } for pid=76122 comm="msgr-worker-2" laddr=172.21.15.63 lport=60462 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770223.979:7714): avc: denied { search } for pid=71378 comm="conmon" name="journal" dev="tmpfs" ino=16412 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:syslogd_var_run_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770263.955:7923): avc: denied { getattr } for pid=76122 comm="safe_timer" name="/" dev="dm-4" ino=128 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:fs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.720:7730): avc: denied { read } for pid=89172 comm="node_exporter" name="fd" dev="proc" ino=326712 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.722:7750): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/sys/kernel/random/entropy_avail" dev="proc" ino=324148 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_kernel_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770235.109:7773): avc: denied { write } for pid=74194 comm="msgr-worker-1" laddr=172.21.15.63 lport=60436 faddr=172.21.15.63 fport=6800 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770256.144:7867): avc: denied { read } for pid=71401 comm="safe_timer" name="os-release" dev="overlay" ino=660337 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=lnk_file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.721:7863): avc: denied { search } for pid=89172 comm="node_exporter" name="var" dev="sda1" ino=1281 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.986:7680): avc: denied { search } for pid=71401 comm="safe_timer" name="etc" dev="sda1" ino=2491624 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618771805.197:8220): avc: denied { write } for pid=71401 comm="msgr-worker-0" laddr=172.21.15.63 lport=60442 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770225.720:7727): avc: denied { search } for pid=89172 comm="node_exporter" name="/" dev="overlay" ino=2491762 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c257,c697 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770238.047:7791): avc: denied { read } for pid=71401 comm="safe_timer" name="meminfo" dev="proc" ino=4026532032 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=file permissive=1 
srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770258.157:7874): avc: denied { open } for pid=71401 comm="safe_timer" path="/usr/lib/os-release" dev="overlay" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.021:7670): avc: denied { write } for pid=74171 comm="conmon" name="null" dev="devtmpfs" ino=12 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:null_device_t:s0 tclass=chr_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770244.306:7804): avc: denied { getattr } for pid=111160 comm="pgrep" path="/proc/71401" dev="proc" ino=285264 scontext=system_u:system_r:ksmtuned_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.974:7677): avc: denied { search } for pid=71401 comm="safe_timer" name="/" dev="overlay" ino=1972390 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770224.537:7721): avc: denied { write } for pid=74194 comm="rstore_compact" name="store.db" dev="dm-4" ino=16797828 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.722:7743): avc: denied { search } for pid=89172 comm="node_exporter" name="/" dev="tmpfs" ino=17419 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.953:7709): avc: denied { read } for pid=76122 comm="safe_timer" name="store.db" dev="dm-4" ino=16797836 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.720:7934): avc: denied { read } for pid=89172 comm="node_exporter" name="net" dev="proc" ino=4026531844 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=lnk_file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.986:7680): avc: denied { open } for pid=71401 comm="safe_timer" path="/usr/lib/os-release" dev="sda1" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.724:7947): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/1/mounts" dev="proc" ino=322167 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770254.542:7839): avc: denied { read } for pid=74194 comm="safe_timer" name="store.db" dev="dm-4" ino=16797828 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770260.593:7890): avc: denied { write } for pid=74194 comm="msgr-worker-0" laddr=172.21.15.63 lport=3300 faddr=172.21.15.187 fport=37816 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 
tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.721:7829): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/proc/sys/kernel/random/entropy_avail" dev="proc" ino=324148 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_kernel_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770220.610:7661): avc: denied { open } for pid=84497 comm="osd_srv_heartbt" path="/proc/loadavg" dev="proc" ino=4026532031 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.721:7828): avc: denied { read } for pid=89172 comm="node_exporter" name="entropy_avail" dev="proc" ino=324148 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_kernel_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.719:7818): avc: denied { read } for pid=89172 comm="node_exporter" name="hwmon" dev="sysfs" ino=21394 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.588:7849): avc: denied { search } for pid=74171 comm="conmon" name="journal" dev="tmpfs" ino=16412 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:syslogd_var_run_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.725:7950): avc: denied { search } for pid=89172 comm="node_exporter" name="user" dev="tmpfs" ino=297 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:user_tmp_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770220.835:7662): avc: denied { search } for pid=79399 comm="safe_timer" name="/" dev="overlay" ino=2491686 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c136,c983 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770261.595:7897): avc: denied { use } for pid=74194 comm="fn_monstore" path="pipe:[276435]" dev="pipefs" ino=276435 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fd permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.720:7938): avc: denied { search } for pid=89172 comm="node_exporter" name="rpc" dev="proc" ino=4026532408 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_rpc_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.722:7741): avc: denied { getattr } for pid=89172 comm="node_exporter" name="/" dev="tmpfs" ino=17420 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.720:7943): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/mdstat" dev="proc" ino=4026532018 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_mdstat_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.021:7671): avc: denied { write } for pid=74171 comm="conmon" name="socket" dev="tmpfs" ino=16420 scontext=system_u:object_r:unlabeled_t:s0 
tcontext=system_u:object_r:syslogd_var_run_t:s0 tclass=sock_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.721:7733): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/proc/mdstat" dev="proc" ino=4026532018 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_mdstat_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.053:7906): avc: denied { search } for pid=90893 comm="alertmanager" name="etc" dev="sda1" ino=2491791 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c21,c809 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770260.579:7884): avc: denied { search } for pid=84497 comm="safe_timer" name="/" dev="overlay" ino=2491724 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c401,c484 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.026:7724): avc: denied { write } for pid=74171 comm="conmon" name="null" dev="devtmpfs" ino=12 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:null_device_t:s0 tclass=chr_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.722:7750): avc: denied { search } for pid=89172 comm="node_exporter" name="kernel" dev="proc" ino=326666 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_kernel_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.720:7728): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/proc/7" dev="proc" ino=324713 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.181:7919): avc: denied { read } for pid=71401 comm="safe_timer" name="os-release" dev="sda1" ino=660337 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=lnk_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.725:7952): avc: denied { search } for pid=89172 comm="node_exporter" name="containers" dev="sda1" ino=1452 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:object_r:container_var_lib_t:s0"', 'type=AVC msg=audit(1618770237.832:7790): avc: denied { search } for pid=79399 comm="safe_timer" name="/" dev="overlay" ino=2491686 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c136,c983 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.720:7854): avc: denied { read } for pid=89172 comm="node_exporter" name="file-nr" dev="proc" ino=324146 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_fs_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.721:7827): avc: denied { read } for pid=89172 comm="node_exporter" name="hwmon0" dev="sysfs" ino=35334 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=lnk_file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770256.144:7865): avc: denied { open } for pid=71401 comm="safe_timer" 
path="/sys/fs/cgroup/memory/memory.limit_in_bytes" dev="cgroup" ino=22058 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cgroup_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.721:7738): avc: denied { read } for pid=89172 comm="node_exporter" name="stats" dev="sysfs" ino=45517 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770224.536:7720): avc: denied { search } for pid=74194 comm="safe_timer" name="/" dev="overlay" ino=2491643 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c53,c547 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770242.072:7797): avc: denied { search } for pid=71401 comm="safe_timer" name="container" dev="cgroup" ino=22051 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cgroup_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770260.579:7884): avc: denied { open } for pid=84497 comm="safe_timer" path="/proc/loadavg" dev="proc" ino=4026532031 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.721:7731): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/7/net/sockstat" dev="proc" ino=4026532060 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_net_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770256.144:7867): avc: denied { open } for pid=71401 comm="safe_timer" path="/usr/lib/os-release" dev="sda1" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770468.659:8041): avc: denied { write } for pid=76122 comm="msgr-worker-2" laddr=172.21.15.63 lport=60462 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770265.720:7940): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/sys/fs/file-nr" dev="proc" ino=324146 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_fs_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770263.955:7924): avc: denied { read } for pid=76122 comm="safe_timer" name="store.db" dev="dm-4" ino=16797836 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.181:7919): avc: denied { read } for pid=71401 comm="safe_timer" name="os-release" dev="sda1" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770245.719:7814): avc: denied { search } for pid=89172 comm="node_exporter" name="/" dev="overlay" ino=2491762 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c257,c697 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770260.579:7885): avc: denied { read } for pid=84497 comm="safe_timer" name="online" dev="sysfs" 
ino=35 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cpu_online_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618771265.163:8165): avc: denied { write } for pid=71401 comm="msgr-worker-0" laddr=172.21.15.63 lport=60442 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770262.181:7919): avc: denied { open } for pid=71401 comm="safe_timer" path="/usr/lib/os-release" dev="overlay" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618771810.197:8224): avc: denied { write } for pid=71401 comm="msgr-worker-0" laddr=172.21.15.63 lport=60442 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770265.720:7940): avc: denied { read } for pid=89172 comm="node_exporter" name="file-nr" dev="proc" ino=324146 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_fs_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618771135.109:8131): avc: denied { shutdown } for pid=71401 comm="msgr-worker-1" laddr=172.21.15.63 lport=60438 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770220.578:7657): avc: denied { write } for pid=76122 comm="safe_timer" path="pipe:[283611]" dev="pipefs" ino=283611 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770224.538:7722): avc: denied { search } for pid=76122 comm="rstore_compact" name="var" dev="sda1" ino=2491675 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c139,c805 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770785.135:8091): avc: denied { write } for pid=74194 comm="msgr-worker-1" laddr=172.21.15.63 lport=60436 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770221.021:7671): avc: denied { write } for pid=74171 comm="conmon" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=unix_dgram_socket permissive=1 srawcon="system_u:system_r:container_runtime_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.721:7731): avc: denied { read } for pid=89172 comm="node_exporter" name="sockstat" dev="proc" ino=4026532060 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_net_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.569:7812): avc: denied { search } for pid=74171 comm="conmon" name="/" dev="devtmpfs" ino=1025 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:device_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770258.145:7873): avc: denied { search } for pid=71401 comm="safe_timer" name="/" dev="overlay" ino=1972390 scontext=system_u:object_r:unlabeled_t:s0 
tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770261.595:7897): avc: denied { write } for pid=74194 comm="fn_monstore" path="pipe:[276435]" dev="pipefs" ino=276435 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770262.181:7919): avc: denied { search } for pid=71401 comm="safe_timer" name="etc" dev="sda1" ino=2491624 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770221.021:7670): avc: denied { open } for pid=74171 comm="conmon" path="/dev/null" dev="devtmpfs" ino=12 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:null_device_t:s0 tclass=chr_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770288.653:7964): avc: denied { write } for pid=76122 comm="msgr-worker-2" laddr=172.21.15.63 lport=60462 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770245.722:7831): avc: denied { search } for pid=89172 comm="node_exporter" name="/" dev="devtmpfs" ino=1025 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:device_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.169:7917): avc: denied { open } for pid=71401 comm="safe_timer" path="/sys/fs/cgroup/memory/memory.limit_in_bytes" dev="cgroup" ino=22058 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cgroup_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770261.595:7899): avc: denied { search } for pid=74171 comm="conmon" name="/" dev="sda1" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770235.109:7770): avc: denied { connect } for pid=71401 comm="msgr-worker-1" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.722:7832): avc: denied { search } for pid=89172 comm="node_exporter" name="/" dev="tmpfs" ino=17419 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.986:7680): avc: denied { read } for pid=71401 comm="safe_timer" name="os-release" dev="sda1" ino=660337 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=lnk_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770255.720:7854): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/sys/fs/file-nr" dev="proc" ino=324146 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_fs_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.953:7708): avc: denied { dac_read_search } for pid=76122 comm="safe_timer" capability=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 
tclass=capability permissive=1 srawcon="system_u:system_r:container_runtime_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770221.021:7668): avc: denied { use } for pid=74194 comm="fn_monstore" path="pipe:[276435]" dev="pipefs" ino=276435 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fd permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770221.021:7671): avc: denied { search } for pid=74171 comm="conmon" name="systemd" dev="tmpfs" ino=17428 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:init_var_run_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770221.021:7668): avc: denied { write } for pid=74194 comm="fn_monstore" path="pipe:[276435]" dev="pipefs" ino=276435 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770255.721:7861): avc: denied { getattr } for pid=89172 comm="node_exporter" name="/" dev="tmpfs" ino=17420 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.722:7744): avc: denied { search } for pid=89172 comm="node_exporter" name="var" dev="sda1" ino=1281 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770258.157:7874): avc: denied { read } for pid=71401 comm="safe_timer" name="os-release" dev="overlay" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770235.109:7769): avc: denied { setopt } for pid=74194 comm="msgr-worker-1" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.719:7930): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/7/fd" dev="proc" ino=326712 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770235.109:7768): avc: denied { create } for pid=71401 comm="msgr-worker-1" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.720:7858): avc: denied { read } for pid=89172 comm="node_exporter" name="stats" dev="sysfs" ino=45517 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.175:7676): avc: denied { search } for pid=81926 comm="osd_srv_heartbt" name="/" dev="overlay" ino=2491705 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c246,c551 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.720:7855): avc: denied { search } for pid=89172 comm="node_exporter" name="kernel" dev="proc" ino=326666 scontext=system_u:object_r:unlabeled_t:s0 
tcontext=system_u:object_r:sysctl_kernel_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.169:7917): avc: denied { read } for pid=71401 comm="safe_timer" name="memory.limit_in_bytes" dev="cgroup" ino=22058 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cgroup_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.721:7862): avc: denied { search } for pid=89172 comm="node_exporter" name="user" dev="tmpfs" ino=297 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:user_tmp_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.722:7740): avc: denied { read } for pid=89172 comm="node_exporter" name="cpufreq" dev="sysfs" ino=30601 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=lnk_file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.720:7942): avc: denied { read } for pid=89172 comm="node_exporter" name="hwmon0" dev="sysfs" ino=35334 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=lnk_file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770238.048:7792): avc: denied { search } for pid=71401 comm="safe_timer" name="/" dev="sysfs" ino=1 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770290.112:7988): avc: denied { write } for pid=74194 comm="msgr-worker-1" laddr=172.21.15.63 lport=60436 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618771565.182:8214): avc: denied { write } for pid=71401 comm="msgr-worker-0" laddr=172.21.15.63 lport=60442 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770223.012:7701): avc: denied { write } for pid=74194 comm="safe_timer" path="/var/lib/ceph/mon/ceph-a/store.db/000160.log" dev="dm-4" ino=16797845 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770220.610:7661): avc: denied { search } for pid=84497 comm="osd_srv_heartbt" name="diff" dev="sda1" ino=2491724 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_ro_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618771028.689:8129): avc: denied { write } for pid=76122 comm="msgr-worker-2" laddr=172.21.15.63 lport=60462 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770225.720:7727): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/proc" dev="proc" ino=1 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770244.523:7809): avc: denied { search } for pid=71401 comm="mgr-fin" name="diff" dev="sda1" ino=1972390 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_ro_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC 
msg=audit(1618770265.720:7931): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/sys/devices/system/edac/mc" dev="sysfs" ino=21409 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770224.498:7718): avc: denied { search } for pid=81926 comm="safe_timer" name="/" dev="proc" ino=1 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.722:7745): avc: denied { search } for pid=89172 comm="node_exporter" name="containers" dev="sda1" ino=1452 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:object_r:container_var_lib_t:s0"', 'type=AVC msg=audit(1618770263.955:7925): avc: denied { getattr } for pid=76122 comm="safe_timer" path="/var/lib/ceph/mon/ceph-c/store.db" dev="dm-4" ino=16797836 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.986:7680): avc: denied { search } for pid=71401 comm="safe_timer" name="usr" dev="sda1" ino=1837271 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770262.054:7911): avc: denied { connect } for pid=90893 comm="alertmanager" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=udp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770224.537:7721): avc: denied { add_name } for pid=74194 comm="rstore_compact" name="000163.log" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770261.595:7900): avc: denied { write } for pid=74171 comm="conmon" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=unix_dgram_socket permissive=1 srawcon="system_u:system_r:container_runtime_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618771150.158:8158): avc: denied { write } for pid=71401 comm="msgr-worker-0" laddr=172.21.15.63 lport=60442 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770263.955:7923): avc: denied { search } for pid=76122 comm="safe_timer" name="var" dev="sda1" ino=2491675 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c139,c805 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770262.053:7909): avc: denied { create } for pid=90893 comm="alertmanager" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=udp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.721:7735): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/sys/fs/xfs" dev="sysfs" ino=45445 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.974:7679): avc: denied { getattr } for pid=71401 
comm="safe_timer" path="/sys/fs/cgroup/memory/memory.limit_in_bytes" dev="cgroup" ino=22058 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cgroup_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.986:7680): avc: denied { getattr } for pid=71401 comm="safe_timer" name="os-release" dev="sda1" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770255.476:7845): avc: denied { search } for pid=81926 comm="safe_timer" name="/" dev="proc" ino=1 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770710.131:8075): avc: denied { write } for pid=74194 comm="msgr-worker-1" laddr=172.21.15.63 lport=60436 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770238.060:7793): avc: denied { read } for pid=71401 comm="safe_timer" name="os-release" dev="overlay" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770256.144:7867): avc: denied { getattr } for pid=71401 comm="safe_timer" name="os-release" dev="sda1" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770244.306:7803): avc: denied { open } for pid=111160 comm="pgrep" path="/proc/71378/status" dev="proc" ino=269948 scontext=system_u:system_r:ksmtuned_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770221.986:7680): avc: denied { read } for pid=71401 comm="safe_timer" name="os-release" dev="overlay" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.053:7910): avc: denied { setopt } for pid=90893 comm="alertmanager" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=udp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.979:7713): avc: denied { ioctl } for pid=71401 comm="mgr-fin" path="/usr/share/ceph/mgr/cephadm/serve.py" dev="overlay" ino=2102566 ioctlcmd=0x5401 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770238.955:7795): avc: denied { getattr } for pid=76122 comm="safe_timer" name="/" dev="dm-4" ino=128 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:fs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.719:7816): avc: denied { search } for pid=89172 comm="node_exporter" name="7" dev="proc" ino=324713 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.181:7919): avc: denied { read } for 
pid=71401 comm="safe_timer" name="os-release" dev="overlay" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770224.536:7720): avc: denied { search } for pid=74194 comm="safe_timer" name="var" dev="sda1" ino=2491651 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c53,c547 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770244.306:7805): avc: denied { read } for pid=111160 comm="pgrep" name="status" dev="proc" ino=285292 scontext=system_u:system_r:ksmtuned_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618771320.168:8193): avc: denied { write } for pid=71401 comm="msgr-worker-0" laddr=172.21.15.63 lport=60442 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770255.720:7859): avc: denied { open } for pid=89172 comm="node_exporter" path="/sys/devices/pci0000:00/0000:00:1c.4/0000:07:00.0/hwmon/hwmon0/temp1_crit" dev="sysfs" ino=35339 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770235.109:7767): avc: denied { shutdown } for pid=71401 comm="msgr-worker-0" laddr=172.21.15.63 lport=6800 faddr=172.21.15.63 fport=59614 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770237.508:7786): avc: denied { search } for pid=81926 comm="safe_timer" name="/" dev="overlay" ino=2491705 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c246,c551 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770261.595:7900): avc: denied { write } for pid=74171 comm="conmon" name="socket" dev="tmpfs" ino=16420 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:syslogd_var_run_t:s0 tclass=sock_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770244.524:7810): avc: denied { read } for pid=81926 comm="safe_timer" name="online" dev="sysfs" ino=35 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cpu_online_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770220.610:7661): avc: denied { search } for pid=84497 comm="osd_srv_heartbt" name="/" dev="proc" ino=1 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.720:7943): avc: denied { read } for pid=89172 comm="node_exporter" name="mdstat" dev="proc" ino=4026532018 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_mdstat_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.021:7670): avc: denied { search } for pid=74171 comm="conmon" name="/" dev="sda1" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.719:7929): avc: denied { open } 
for pid=89172 comm="node_exporter" path="/proc/7/stat" dev="proc" ino=326711 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770261.595:7899): avc: denied { search } for pid=74171 comm="conmon" name="/" dev="devtmpfs" ino=1025 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:device_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.722:7748): avc: denied { search } for pid=89172 comm="node_exporter" name="sys" dev="proc" ino=4026531854 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770244.306:7802): avc: denied { getattr } for pid=111160 comm="pgrep" path="/proc/71378" dev="proc" ino=269944 scontext=system_u:system_r:ksmtuned_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.721:7736): avc: denied { open } for pid=89172 comm="node_exporter" path="/sys/fs/xfs" dev="sysfs" ino=45445 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770220.578:7658): avc: denied { read } for pid=76122 comm="msgr-worker-0" path="pipe:[283611]" dev="pipefs" ino=283611 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.719:7930): avc: denied { read } for pid=89172 comm="node_exporter" name="fd" dev="proc" ino=326712 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.953:7708): avc: denied { getattr } for pid=76122 comm="safe_timer" name="/" dev="dm-4" ino=128 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:fs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618771138.696:8135): avc: denied { write } for pid=76122 comm="msgr-worker-2" laddr=172.21.15.63 lport=60462 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770263.955:7924): avc: denied { open } for pid=76122 comm="safe_timer" path="/var/lib/ceph/mon/ceph-c/store.db" dev="dm-4" ino=16797836 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770263.955:7924): avc: denied { search } for pid=76122 comm="safe_timer" name="mon.c" dev="dm-4" ino=8388751 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770237.288:7784): avc: denied { write } for pid=84497 comm="msgr-worker-1" laddr=172.21.15.63 lport=37226 faddr=172.21.15.63 fport=6804 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC 
msg=audit(1618770259.543:7878): avc: denied { search } for pid=74194 comm="safe_timer" name="var" dev="sda1" ino=2491651 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c53,c547 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770259.935:7880): avc: denied { search } for pid=87023 comm="safe_timer" name="/" dev="sysfs" ino=1 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770234.035:7765): avc: denied { search } for pid=71401 comm="safe_timer" name="/" dev="overlay" ino=1972390 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770263.955:7923): avc: denied { dac_read_search } for pid=76122 comm="safe_timer" capability=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=capability permissive=1 srawcon="system_u:system_r:container_runtime_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770223.012:7702): avc: denied { write } for pid=74194 comm="safe_timer" path="pipe:[273197]" dev="pipefs" ino=273197 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.722:7945): avc: denied { search } for pid=89172 comm="node_exporter" name="kernel" dev="proc" ino=326666 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_kernel_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.719:7928): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/proc/7" dev="proc" ino=324713 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770260.593:7889): avc: denied { write } for pid=74194 comm="msgr-worker-1" laddr=172.21.15.63 lport=3300 faddr=172.21.15.63 fport=56864 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.719:7852): avc: denied { search } for pid=89172 comm="node_exporter" name="sys" dev="proc" ino=4026531854 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.722:7743): avc: denied { search } for pid=89172 comm="node_exporter" name="user" dev="tmpfs" ino=297 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:user_tmp_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.979:7712): avc: denied { read } for pid=71401 comm="mgr-fin" name="serve.py" dev="sda1" ino=2102566 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770221.974:7678): avc: denied { open } for pid=71401 comm="safe_timer" path="/sys/fs/cgroup/memory/memory.limit_in_bytes" dev="cgroup" ino=22058 
scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cgroup_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.720:7860): avc: denied { read } for pid=89172 comm="node_exporter" name="mounts" dev="proc" ino=322167 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.721:7731): avc: denied { read } for pid=89172 comm="node_exporter" name="net" dev="proc" ino=4026531844 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=lnk_file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770244.306:7805): avc: denied { open } for pid=111160 comm="pgrep" path="/proc/71401/status" dev="proc" ino=285292 scontext=system_u:system_r:ksmtuned_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770243.649:7801): avc: denied { getattr } for pid=71401 comm="msgr-worker-1" laddr=172.21.15.63 lport=6800 faddr=172.21.15.63 fport=60462 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.054:7914): avc: denied { write } for pid=90893 comm="alertmanager" path="socket:[387892]" dev="sockfs" ino=387892 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=udp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.719:7929): avc: denied { search } for pid=89172 comm="node_exporter" name="7" dev="proc" ino=324713 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.720:7934): avc: denied { read } for pid=89172 comm="node_exporter" name="arp" dev="proc" ino=4026532043 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_net_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770252.972:7836): avc: denied { write } for pid=71401 comm="safe_timer" path="pipe:[278205]" dev="pipefs" ino=278205 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.726:7752): avc: denied { search } for pid=89172 comm="node_exporter" name="net" dev="proc" ino=324719 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_net_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770256.144:7867): avc: denied { read } for pid=71401 comm="safe_timer" name="os-release" dev="sda1" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.719:7927): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/proc" dev="proc" ino=1 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.720:7857): avc: denied { read } 
for pid=89172 comm="node_exporter" name="arp" dev="proc" ino=4026532043 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_net_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.725:7953): avc: denied { search } for pid=89172 comm="node_exporter" name="host" dev="sda1" ino=2491771 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c257,c697 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.721:7737): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/sys/fs/xfs/stats/stats" dev="sysfs" ino=45447 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770247.574:7835): avc: denied { sendto } for pid=74171 comm="conmon" path="/run/systemd/journal/socket" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=unix_dgram_socket permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770903.682:8100): avc: denied { write } for pid=76122 comm="msgr-worker-2" laddr=172.21.15.63 lport=60462 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770238.047:7791): avc: denied { open } for pid=71401 comm="safe_timer" path="/proc/meminfo" dev="proc" ino=4026532032 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770261.595:7899): avc: denied { open } for pid=74171 comm="conmon" path="/dev/null" dev="devtmpfs" ino=12 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:null_device_t:s0 tclass=chr_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.721:7734): avc: denied { read } for pid=89172 comm="node_exporter" name="mounts" dev="proc" ino=322167 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770220.610:7661): avc: denied { read } for pid=84497 comm="osd_srv_heartbt" name="loadavg" dev="proc" ino=4026532031 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770223.979:7714): avc: denied { write } for pid=71378 comm="conmon" name="socket" dev="tmpfs" ino=16420 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:syslogd_var_run_t:s0 tclass=sock_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.725:7948): avc: denied { getattr } for pid=89172 comm="node_exporter" name="/" dev="tmpfs" ino=17420 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770260.579:7885): avc: denied { search } for pid=84497 comm="safe_timer" name="/" dev="sysfs" ino=1 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.054:7913): avc: denied { getattr } for pid=90893 comm="alertmanager" 
laddr=172.21.15.63 lport=57967 faddr=172.21.0.1 fport=53 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=udp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770220.835:7663): avc: denied { read } for pid=79399 comm="safe_timer" name="online" dev="sysfs" ino=35 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cpu_online_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.720:7937): avc: denied { read } for pid=89172 comm="node_exporter" name="ce_count" dev="sysfs" ino=43800 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.012:7667): avc: denied { write } for pid=74194 comm="safe_timer" path="/var/lib/ceph/mon/ceph-a/store.db/000160.log" dev="dm-4" ino=16797845 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.181:7919): avc: denied { search } for pid=71401 comm="safe_timer" name="usr" dev="sda1" ino=1837271 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770261.595:7900): avc: denied { search } for pid=74171 comm="conmon" name="systemd" dev="tmpfs" ino=17428 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:init_var_run_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770243.649:7798): avc: denied { shutdown } for pid=76122 comm="msgr-worker-2" laddr=172.21.15.63 lport=59622 faddr=172.21.15.63 fport=6800 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770578.663:8060): avc: denied { write } for pid=76122 comm="msgr-worker-2" laddr=172.21.15.63 lport=60462 faddr=172.21.15.63 fport=6800 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770255.588:7849): avc: denied { search } for pid=74171 comm="conmon" name="systemd" dev="tmpfs" ino=17428 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:init_var_run_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770255.720:7853): avc: denied { search } for pid=89172 comm="node_exporter" name="rpc" dev="proc" ino=4026532408 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysctl_rpc_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.719:7851): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/7/stat" dev="proc" ino=326711 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.719:7817): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/7/fd" dev="proc" ino=326712 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC 
msg=audit(1618770265.720:7941): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/sys/class/hwmon/hwmon2" dev="sysfs" ino=43612 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=lnk_file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.169:7916): avc: denied { search } for pid=71401 comm="safe_timer" name="/" dev="overlay" ino=1972390 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770247.053:7834): avc: denied { search } for pid=90893 comm="alertmanager" name="/" dev="overlay" ino=2491781 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c21,c809 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770260.593:7888): avc: denied { read } for pid=74194 comm="msgr-worker-1" path="pipe:[273197]" dev="pipefs" ino=273197 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770225.723:7749): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/sys/class/hwmon/hwmon2" dev="sysfs" ino=43612 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=lnk_file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770235.109:7772): avc: denied { accept } for pid=71401 comm="msgr-worker-0" lport=6800 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618771135.109:8130): avc: denied { read } for pid=71401 comm="msgr-worker-1" path="socket:[378809]" dev="sockfs" ino=378809 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1', 'type=AVC msg=audit(1618770235.109:7774): avc: denied { setopt } for pid=71401 comm="msgr-worker-0" laddr=172.21.15.63 lport=6800 faddr=172.21.15.63 fport=60436 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770261.595:7900): avc: denied { search } for pid=74171 comm="conmon" name="journal" dev="tmpfs" ino=16412 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:syslogd_var_run_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770223.979:7712): avc: denied { open } for pid=71401 comm="mgr-fin" path="/usr/share/ceph/mgr/cephadm/serve.py" dev="sda1" ino=2102566 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.722:7739): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/proc/loadavg" dev="proc" ino=4026532031 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770221.974:7678): avc: denied { search } for pid=71401 comm="safe_timer" name="container" dev="cgroup" ino=22051 scontext=system_u:object_r:unlabeled_t:s0 
tcontext=system_u:object_r:cgroup_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.720:7937): avc: denied { open } for pid=89172 comm="node_exporter" path="/sys/devices/system/edac/mc/mc0/ce_count" dev="sysfs" ino=43800 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770263.955:7923): avc: denied { search } for pid=76122 comm="safe_timer" name="/" dev="overlay" ino=2491667 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c139,c805 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770258.955:7876): avc: denied { search } for pid=76122 comm="safe_timer" name="/" dev="overlay" ino=2491667 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c139,c805 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.724:7947): avc: denied { read } for pid=89172 comm="node_exporter" name="mounts" dev="proc" ino=322167 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770225.721:7734): avc: denied { open } for pid=89172 comm="node_exporter" path="/proc/1/mounts" dev="proc" ino=322167 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770232.055:7762): avc: denied { write } for pid=90893 comm="alertmanager" path="socket:[384534]" dev="sockfs" ino=384534 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=udp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770220.524:7655): avc: denied { append } for pid=81926 comm="log" path="/var/log/ceph/ceph-osd.1.log" dev="sda1" ino=396091 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.719:7819): avc: denied { search } for pid=89172 comm="node_exporter" name="1" dev="proc" ino=320072 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.720:7932): avc: denied { getattr } for pid=89172 comm="node_exporter" path="/proc/loadavg" dev="proc" ino=4026532031 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.181:7920): avc: denied { getattr } for pid=71401 comm="safe_timer" path="/usr/lib/os-release" dev="overlay" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770245.722:7833): avc: denied { search } for pid=89172 comm="node_exporter" name="host" dev="sda1" ino=2491771 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c257,c697 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC 
msg=audit(1618770225.722:7741): avc: denied { search } for pid=89172 comm="node_exporter" name="/" dev="sda1" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770254.144:7838): avc: denied { search } for pid=71401 comm="safe_timer" name="usr" dev="sda1" ino=1837271 scontext=system_u:object_r:unlabeled_t:s0 tcontext=unconfined_u:object_r:container_ro_file_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770261.595:7898): avc: denied { read } for pid=74171 comm="conmon" path="pipe:[276435]" dev="pipefs" ino=276435 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=fifo_file permissive=1 srawcon="system_u:system_r:container_runtime_t:s0" trawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.725:7948): avc: denied { search } for pid=89172 comm="node_exporter" name="/" dev="sda1" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770222.842:7682): avc: denied { search } for pid=87023 comm="safe_timer" name="/" dev="overlay" ino=2491743 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c784,c795 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.720:7933): avc: denied { read } for pid=89172 comm="node_exporter" name="net" dev="proc" ino=4026531844 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_t:s0 tclass=lnk_file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770220.610:7661): avc: denied { search } for pid=84497 comm="osd_srv_heartbt" name="/" dev="overlay" ino=2491724 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c401,c484 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770255.588:7848): avc: denied { search } for pid=74171 comm="conmon" name="/" dev="sda1" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:container_runtime_t:s0"', 'type=AVC msg=audit(1618770265.720:7933): avc: denied { read } for pid=89172 comm="node_exporter" name="sockstat" dev="proc" ino=4026532060 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:proc_net_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770220.835:7663): avc: denied { open } for pid=79399 comm="safe_timer" path="/sys/devices/system/cpu/online" dev="sysfs" ino=35 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cpu_online_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.719:7929): avc: denied { read } for pid=89172 comm="node_exporter" name="stat" dev="proc" ino=326711 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0" trawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770235.109:7775): avc: denied { read } for pid=71401 comm="msgr-worker-1" path="socket:[378809]" dev="sockfs" ino=378809 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC 
msg=audit(1618770260.579:7885): avc: denied { open } for pid=84497 comm="safe_timer" path="/sys/devices/system/cpu/online" dev="sysfs" ino=35 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:cpu_online_t:s0 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770238.060:7793): avc: denied { open } for pid=71401 comm="safe_timer" path="/usr/lib/os-release" dev="overlay" ino=661282 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c549,c595 tclass=file permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770262.054:7912): avc: denied { connect } for pid=90893 comm="alertmanager" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=udp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770232.054:7761): avc: denied { getattr } for pid=90893 comm="alertmanager" laddr=172.21.15.63 lport=58946 faddr=172.21.0.1 fport=53 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=udp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770265.720:7936): avc: denied { read } for pid=89172 comm="node_exporter" name="mc" dev="sysfs" ino=21409 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:sysfs_t:s0 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770241.778:7796): avc: denied { search } for pid=87023 comm="safe_timer" name="/" dev="overlay" ino=2491743 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:container_file_t:s0:c784,c795 tclass=dir permissive=1 srawcon="system_u:system_r:spc_t:s0"', 'type=AVC msg=audit(1618770235.109:7776): avc: denied { getattr } for pid=74194 comm="msgr-worker-1" laddr=172.21.15.63 lport=60436 faddr=172.21.15.63 fport=6800 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=tcp_socket permissive=1 srawcon="system_u:system_r:spc_t:s0"']
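
Note: the AVC denials above all carry permissive=1 and involve containerized daemons on the test node (node_exporter, conmon, alertmanager, and the ceph daemons) with unlabeled_t / spc_t / container_runtime_t contexts. As a minimal sketch (not part of the test output), the denials could be grouped by process, permission, class, and target context to see which rules dominate; the file name avc_denials.txt and the grouping below are assumptions for illustration only.

    # Minimal sketch: summarize AVC denials like the ones above, one per line
    # in a plain text file. File name and grouping are illustrative assumptions.
    import re
    from collections import Counter

    AVC_RE = re.compile(
        r'avc:\s+denied\s+\{ (?P<perm>[^}]+)\}.*?'
        r'comm="(?P<comm>[^"]+)".*?'
        r'tcontext=(?P<tcontext>\S+).*?'
        r'tclass=(?P<tclass>\S+)'
    )

    def summarize(path="avc_denials.txt"):
        counts = Counter()
        with open(path) as fh:
            for line in fh:
                m = AVC_RE.search(line)
                if m:
                    key = (m.group("comm"),
                           m.group("perm").strip(),
                           m.group("tclass"),
                           m.group("tcontext"))
                    counts[key] += 1
        # Most frequent (process, permission, class, target context) tuples.
        for key, n in counts.most_common(20):
            print(n, *key)

    if __name__ == "__main__":
        summarize()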

fail 6056101 2021-04-18 16:10:17 2021-04-18 16:53:04 2021-04-18 17:21:08 0:28:04 0:19:25 0:08:39 smithi master centos 8.3 rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
Failure Reason:

"2021-04-18T17:18:07.695424+0000 mgr.y (mgr.4102) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056102 2021-04-18 16:10:18 2021-04-18 16:53:04 2021-04-18 17:29:34 0:36:30 0:24:31 0:11:59 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/lockdep} 2
Failure Reason:

"2021-04-18T17:09:05.316016+0000 mgr.y (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056103 2021-04-18 16:10:19 2021-04-18 16:53:04 2021-04-18 17:21:08 0:28:04 0:21:12 0:06:52 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/rgw-ingress 3-final} 2
pass 6056104 2021-04-18 16:10:20 2021-04-18 16:53:05 2021-04-18 17:49:54 0:56:49 0:45:31 0:11:18 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/cache-agent-big} 2
fail 6056105 2021-04-18 16:10:20 2021-04-18 16:53:05 2021-04-18 17:17:35 0:24:30 0:12:50 0:11:40 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache-agent-small} 2
Failure Reason:

"2021-04-18T17:10:44.845422+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056106 2021-04-18 16:10:21 2021-04-18 16:54:26 2021-04-18 17:23:36 0:29:10 0:19:39 0:09:31 smithi master centos 8.2 rados/cephadm/dashboard/{0-distro/centos_8.2_kubic_stable task/test_e2e} 2
fail 6056107 2021-04-18 16:10:22 2021-04-18 16:54:26 2021-04-18 17:39:52 0:45:26 0:36:18 0:09:08 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/cache-pool-snaps-readproxy} 2
Failure Reason:

"2021-04-18T17:18:36.199047+0000 mgr.x (mgr.4102) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056108 2021-04-18 16:10:23 2021-04-18 16:54:26 2021-04-18 17:11:35 0:17:09 0:08:32 0:08:37 smithi master centos 8.3 rados/singleton/{all/deduptool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
pass 6056109 2021-04-18 16:10:24 2021-04-18 16:55:00 2021-04-18 17:13:26 0:18:26 0:07:58 0:10:28 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
fail 6056110 2021-04-18 16:10:25 2021-04-18 16:56:22 2021-04-18 17:19:39 0:23:17 0:11:48 0:11:29 smithi master ubuntu 20.04 rados/cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_20.04} 2-node-mgr orchestrator_cli} 2
Failure Reason:

"2021-04-18T17:11:55.594578+0000 mgr.y (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056111 2021-04-18 16:10:26 2021-04-18 16:56:22 2021-04-18 17:26:16 0:29:54 0:18:41 0:11:13 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-pool-snaps} 2
Failure Reason:

"2021-04-18T17:12:05.187208+0000 mgr.y (mgr.4105) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056112 2021-04-18 16:10:27 2021-04-18 16:56:22 2021-04-18 17:36:16 0:39:54 0:30:24 0:09:30 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/cache-snaps-balanced} 2
Failure Reason:

"2021-04-18T17:19:49.126454+0000 mgr.x (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056113 2021-04-18 16:10:28 2021-04-18 16:58:27 2021-04-18 17:23:21 0:24:54 0:14:44 0:10:10 smithi master centos 8.2 rados/cephadm/smoke/{distro/centos_8.2_kubic_stable fixed-2 mon_election/classic start} 2
fail 6056114 2021-04-18 16:10:28 2021-04-18 16:58:28 2021-04-18 17:29:34 0:31:06 0:19:57 0:11:09 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache-snaps} 2
Failure Reason:

"2021-04-18T17:15:32.852703+0000 mgr.y (mgr.4101) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056115 2021-04-18 16:10:29 2021-04-18 16:58:58 2021-04-18 17:23:58 0:25:00 0:19:04 0:05:56 smithi master rhel 8.3 rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 1
pass 6056116 2021-04-18 16:10:30 2021-04-18 16:59:08 2021-04-18 17:18:49 0:19:41 0:11:04 0:08:37 smithi master centos 8.2 rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.2_kubic_stable} 1-start 2-services/basic 3-final} 1
pass 6056117 2021-04-18 16:10:31 2021-04-18 16:59:08 2021-04-18 17:16:48 0:17:40 0:08:48 0:08:52 smithi master centos 8.3 rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{centos_8}} 1
fail 6056118 2021-04-18 16:10:32 2021-04-18 16:59:19 2021-04-18 17:21:41 0:22:22 0:10:43 0:11:39 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/cache} 2
Failure Reason:

"2021-04-18T17:16:17.441515+0000 mgr.y (mgr.4098) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056119 2021-04-18 16:10:33 2021-04-18 17:00:29 2021-04-18 17:19:58 0:19:29 0:09:49 0:09:40 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_18.04 workloads/fio_4M_rand_write} 1
pass 6056120 2021-04-18 16:10:34 2021-04-18 17:00:29 2021-04-18 17:57:02 0:56:33 0:49:46 0:06:47 smithi master rhel 8.3 rados/standalone/{mon_election/connectivity supported-random-distro$/{rhel_8} workloads/erasure-code} 1
pass 6056121 2021-04-18 16:10:35 2021-04-18 17:00:30 2021-04-18 17:40:18 0:39:48 0:28:45 0:11:03 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/classic} 2
fail 6056122 2021-04-18 16:10:36 2021-04-18 17:00:30 2021-04-18 17:21:21 0:20:51 0:10:59 0:09:52 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/dedup-io-mixed} 2
Failure Reason:

"2021-04-18T17:16:16.237616+0000 mgr.x (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056123 2021-04-18 16:10:36 2021-04-18 17:00:30 2021-04-18 17:36:17 0:35:47 0:28:56 0:06:51 smithi master rhel 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} 2
Failure Reason:

"2021-04-18T17:16:07.599257+0000 mgr.x (mgr.4111) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056124 2021-04-18 16:10:37 2021-04-18 17:01:01 2021-04-18 17:27:34 0:26:33 0:13:49 0:12:44 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/dedup-io-snaps} 2
Failure Reason:

"2021-04-18T17:19:49.795505+0000 mgr.y (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056125 2021-04-18 16:10:38 2021-04-18 17:02:59 2021-04-18 17:23:09 0:20:10 0:10:05 0:10:05 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_adoption} 1
pass 6056126 2021-04-18 16:10:39 2021-04-18 17:02:59 2021-04-18 17:23:31 0:20:32 0:09:52 0:10:40 smithi master centos 8.3 rados/singleton/{all/divergent_priors2 mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 1
fail 6056127 2021-04-18 16:10:40 2021-04-18 17:04:22 2021-04-18 17:44:18 0:39:56 0:32:50 0:07:06 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/pool-snaps-few-objects} 2
Failure Reason:

"2021-04-18T17:26:41.268510+0000 mgr.y (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056128 2021-04-18 16:10:41 2021-04-18 17:04:23 2021-04-18 18:43:08 1:38:45 1:25:46 0:12:59 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/default thrashosds-health workloads/radosbench} 3
pass 6056129 2021-04-18 16:10:42 2021-04-18 17:04:23 2021-04-18 17:22:02 0:17:39 0:08:13 0:09:26 smithi master centos 8.3 rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 6056130 2021-04-18 16:10:43 2021-04-18 17:04:53 2021-04-18 17:26:16 0:21:23 0:11:13 0:10:10 smithi master ubuntu 20.04 rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{ubuntu_latest}} 1
pass 6056131 2021-04-18 16:10:44 2021-04-18 17:04:54 2021-04-18 17:40:18 0:35:24 0:23:43 0:11:41 smithi master ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
fail 6056132 2021-04-18 16:10:45 2021-04-18 17:06:24 2021-04-18 17:34:17 0:27:53 0:20:57 0:06:56 smithi master rhel 8.3 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

"2021-04-18T17:30:24.169560+0000 mgr.y (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056133 2021-04-18 16:10:46 2021-04-18 17:06:25 2021-04-18 17:45:52 0:39:27 0:26:36 0:12:51 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

"2021-04-18T17:23:10.333581+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056134 2021-04-18 16:10:46 2021-04-18 17:07:25 2021-04-18 17:36:20 0:28:55 0:22:06 0:06:49 smithi master rhel 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-zstd supported-random-distro$/{rhel_8} tasks/failover} 2
Failure Reason:

"2021-04-18T17:28:26.980410+0000 mgr.y (mgr.4114) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056135 2021-04-18 16:10:47 2021-04-18 17:07:25 2021-04-18 17:35:51 0:28:26 0:17:05 0:11:21 smithi master centos 8.3 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/many workloads/pool-create-delete} 2
fail 6056136 2021-04-18 16:10:48 2021-04-18 17:08:57 2021-04-18 17:44:19 0:35:22 0:26:11 0:09:11 smithi master centos 8.3 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
Failure Reason:

"2021-04-18T17:24:57.503450+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056137 2021-04-18 16:10:49 2021-04-18 17:09:18 2021-04-18 17:51:54 0:42:36 0:28:44 0:13:52 smithi master centos 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 6056138 2021-04-18 16:10:50 2021-04-18 17:11:41 2021-04-18 19:11:44 2:00:03 1:48:30 0:11:33 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

"2021-04-18T17:41:03.202994+0000 mgr.y (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056139 2021-04-18 16:10:51 2021-04-18 17:12:51 2021-04-18 17:50:32 0:37:41 0:26:18 0:11:23 smithi master centos 8.3 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

"2021-04-18T17:29:09.051085+0000 mgr.x (mgr.4114) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056140 2021-04-18 16:10:52 2021-04-18 17:12:52 2021-04-18 17:32:17 0:19:25 0:06:08 0:13:17 smithi master ubuntu 20.04 rados/multimon/{clusters/3 mon_election/classic msgr-failures/many msgr/async no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} 2
fail 6056141 2021-04-18 16:10:53 2021-04-18 17:14:52 2021-04-18 18:18:10 1:03:18 0:56:14 0:07:04 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/radosbench-high-concurrency} 2
Failure Reason:

"2021-04-18T17:36:36.010418+0000 mgr.y (mgr.4101) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056142 2021-04-18 16:10:54 2021-04-18 17:14:53 2021-04-18 17:40:17 0:25:24 0:16:12 0:09:12 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 1-start 2-services/rgw 3-final} 2
pass 6056143 2021-04-18 16:10:55 2021-04-18 17:14:53 2021-04-18 17:32:16 0:17:23 0:09:45 0:07:38 smithi master centos 8.3 rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
fail 6056144 2021-04-18 16:10:55 2021-04-18 17:14:53 2021-04-18 18:45:07 1:30:14 1:17:15 0:12:59 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/radosbench} 2
Failure Reason:

"2021-04-18T17:30:49.239567+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056145 2021-04-18 16:10:56 2021-04-18 17:15:04 2021-04-18 17:57:02 0:41:58 0:31:25 0:10:33 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
pass 6056146 2021-04-18 16:10:57 2021-04-18 17:15:04 2021-04-18 17:30:16 0:15:12 0:06:58 0:08:14 smithi master centos 8.3 rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_8}} 1
fail 6056147 2021-04-18 16:10:58 2021-04-18 17:15:04 2021-04-18 17:36:18 0:21:14 0:11:51 0:09:23 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/redirect} 2
Failure Reason:

"2021-04-18T17:30:49.765035+0000 mgr.x (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056148 2021-04-18 16:10:59 2021-04-18 17:15:14 2021-04-18 17:38:17 0:23:03 0:12:08 0:10:55 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_18.04 workloads/radosbench_4K_rand_read} 1
Failure Reason:

"2021-04-18T17:30:12.018277+0000 mgr.x (mgr.4099) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056149 2021-04-18 16:11:00 2021-04-18 17:15:15 2021-04-18 17:36:17 0:21:02 0:10:54 0:10:08 smithi master centos 8.3 rados/standalone/{mon_election/classic supported-random-distro$/{centos_8} workloads/mgr} 1
pass 6056150 2021-04-18 16:11:01 2021-04-18 17:16:55 2021-04-18 17:42:18 0:25:23 0:18:38 0:06:45 smithi master rhel 8.3 rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/connectivity start} 2
fail 6056151 2021-04-18 16:11:02 2021-04-18 17:16:55 2021-04-18 17:48:32 0:31:37 0:23:57 0:07:40 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/redirect_promote_tests} 2
Failure Reason:

"2021-04-18T17:39:07.972518+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056152 2021-04-18 16:11:02 2021-04-18 17:16:56 2021-04-18 17:38:18 0:21:22 0:10:53 0:10:29 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/redirect_set_object} 2
Failure Reason:

"2021-04-18T17:33:35.252304+0000 mgr.x (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056153 2021-04-18 16:11:03 2021-04-18 17:16:56 2021-04-18 18:16:09 0:59:13 0:49:55 0:09:18 smithi master ubuntu 20.04 rados/singleton/{all/ec-lost-unfound mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} 1
pass 6056154 2021-04-18 16:11:04 2021-04-18 17:16:58 2021-04-18 17:43:52 0:26:54 0:16:39 0:10:15 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_cephadm} 1
fail 6056155 2021-04-18 16:11:05 2021-04-18 17:17:28 2021-04-18 17:47:52 0:30:24 0:19:45 0:10:39 smithi master ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
Failure Reason:

"2021-04-18T17:33:10.202807+0000 mgr.y (mgr.4106) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056156 2021-04-18 16:11:06 2021-04-18 17:17:39 2021-04-18 17:50:32 0:32:53 0:23:54 0:08:59 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/set-chunks-read} 2
Failure Reason:

"2021-04-18T17:41:18.723549+0000 mgr.y (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056157 2021-04-18 16:11:07 2021-04-18 17:18:59 2021-04-18 17:36:17 0:17:18 0:07:56 0:09:22 smithi master centos 8.3 rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 6056158 2021-04-18 16:11:08 2021-04-18 17:18:59 2021-04-18 17:43:52 0:24:53 0:14:52 0:10:01 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
fail 6056159 2021-04-18 16:11:09 2021-04-18 17:19:20 2021-04-18 17:44:18 0:24:58 0:18:51 0:06:07 smithi master rhel 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} tasks/rados_cls_all} 2
Failure Reason:

"2021-04-18T17:35:07.708049+0000 mgr.x (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056160 2021-04-18 16:11:10 2021-04-18 17:19:20 2021-04-18 17:47:52 0:28:32 0:18:40 0:09:52 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/small-objects-balanced} 2
Failure Reason:

"2021-04-18T17:35:07.022298+0000 mgr.y (mgr.4122) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056161 2021-04-18 16:11:11 2021-04-18 17:19:40 2021-04-18 17:52:32 0:32:52 0:22:10 0:10:42 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/small-objects-localized} 2
Failure Reason:

"2021-04-18T17:37:00.180536+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056162 2021-04-18 16:11:11 2021-04-18 17:21:18 2021-04-18 17:47:53 0:26:35 0:15:21 0:11:14 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/basic 3-final} 2
pass 6056163 2021-04-18 16:11:12 2021-04-18 17:21:18 2021-04-18 17:36:18 0:15:00 0:07:10 0:07:50 smithi master centos 8.3 rados/singleton/{all/erasure-code-nonregression mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 1
fail 6056164 2021-04-18 16:11:13 2021-04-18 17:21:18 2021-04-18 17:55:55 0:34:37 0:22:02 0:12:35 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/small-objects} 2
Failure Reason:

"2021-04-18T17:38:11.596920+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056165 2021-04-18 16:11:14 2021-04-18 17:21:19 2021-04-18 17:57:03 0:35:44 0:25:33 0:10:11 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/rbd_cls} 3
pass 6056166 2021-04-18 16:11:15 2021-04-18 17:21:19 2021-04-18 17:57:54 0:36:35 0:25:48 0:10:47 smithi master centos 8.3 rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/connectivity} 2
pass 6056167 2021-04-18 16:11:16 2021-04-18 17:21:20 2021-04-18 17:44:18 0:22:58 0:17:07 0:05:51 smithi master rhel 8.3 rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{rhel_8}} 1
fail 6056168 2021-04-18 16:11:17 2021-04-18 17:21:30 2021-04-18 18:01:56 0:40:26 0:28:20 0:12:06 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-balanced} 2
Failure Reason:

"2021-04-18T17:39:36.212297+0000 mgr.x (mgr.4123) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056169 2021-04-18 16:11:18 2021-04-18 17:21:50 2021-04-18 17:37:52 0:16:02 0:06:37 0:09:25 smithi master ubuntu 20.04 rados/objectstore/{backends/filejournal supported-random-distro$/{ubuntu_latest}} 1
fail 6056170 2021-04-18 16:11:18 2021-04-18 17:21:50 2021-04-18 17:47:52 0:26:02 0:11:07 0:14:55 smithi master centos 8.3 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

"2021-04-18T17:42:46.765641+0000 mgr.x (mgr.4101) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056171 2021-04-18 16:11:19 2021-04-18 17:23:10 2021-04-18 17:43:52 0:20:42 0:10:36 0:10:06 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_18.04 workloads/radosbench_4K_seq_read} 1
pass 6056172 2021-04-18 16:11:20 2021-04-18 17:23:30 2021-04-18 18:09:03 0:45:33 0:39:23 0:06:10 smithi master rhel 8.3 rados/standalone/{mon_election/connectivity supported-random-distro$/{rhel_8} workloads/misc} 1
pass 6056173 2021-04-18 16:11:21 2021-04-18 17:23:31 2021-04-18 17:40:48 0:17:17 0:07:24 0:09:53 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_cephadm_repos} 1
fail 6056174 2021-04-18 16:11:22 2021-04-18 17:23:41 2021-04-18 17:57:55 0:34:14 0:23:30 0:10:44 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/snaps-few-objects-localized} 2
Failure Reason:

"2021-04-18T17:40:04.067531+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056175 2021-04-18 16:11:23 2021-04-18 17:23:41 2021-04-18 17:48:32 0:24:51 0:12:43 0:12:08 smithi master ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} tasks/insights} 2
Failure Reason:

"2021-04-18T17:40:16.654875+0000 mgr.z (mgr.4129) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056176 2021-04-18 16:11:24 2021-04-18 17:25:12 2021-04-18 17:51:55 0:26:43 0:17:48 0:08:55 smithi master rhel 8.3 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/one workloads/rados_5925} 2
pass 6056177 2021-04-18 16:11:25 2021-04-18 17:26:25 2021-04-18 18:30:11 1:03:46 0:54:15 0:09:31 smithi master ubuntu 20.04 rados/singleton/{all/lost-unfound-delete mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} 1
fail 6056178 2021-04-18 16:11:25 2021-04-18 17:26:25 2021-04-18 18:06:38 0:40:13 0:28:01 0:12:12 smithi master ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} 2
Failure Reason:

"2021-04-18T17:44:02.067778+0000 mgr.y (mgr.4116) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056179 2021-04-18 16:11:26 2021-04-18 17:27:43 2021-04-18 17:55:02 0:27:19 0:20:45 0:06:34 smithi master rhel 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
fail 6056180 2021-04-18 16:11:27 2021-04-18 17:27:44 2021-04-18 17:50:32 0:22:48 0:12:22 0:10:26 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/lockdep} 2
Failure Reason:

"2021-04-18T17:45:28.722335+0000 mgr.y (mgr.4112) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056181 2021-04-18 16:11:28 2021-04-18 17:29:44 2021-04-18 18:06:39 0:36:55 0:27:00 0:09:55 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/snaps-few-objects} 2
Failure Reason:

"2021-04-18T17:45:59.672496+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056182 2021-04-18 16:11:29 2021-04-18 17:29:45 2021-04-18 18:05:03 0:35:18 0:26:43 0:08:35 smithi master centos 8.3 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

"2021-04-18T17:45:13.063722+0000 mgr.x (mgr.4105) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056183 2021-04-18 16:11:30 2021-04-18 17:29:45 2021-04-18 17:51:54 0:22:09 0:10:24 0:11:45 smithi master ubuntu 20.04 rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} 2
Failure Reason:

"2021-04-18T17:45:39.319048+0000 mgr.y (mgr.4111) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056184 2021-04-18 16:11:31 2021-04-18 17:29:45 2021-04-18 18:05:04 0:35:19 0:23:18 0:12:01 smithi master centos 8.2 rados/cephadm/with-work/{0-distro/centos_8.2_kubic_stable fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} 2
fail 6056185 2021-04-18 16:11:32 2021-04-18 17:31:57 2021-04-18 17:57:03 0:25:06 0:15:12 0:09:54 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/write_fadvise_dontneed} 2
Failure Reason:

"2021-04-18T17:47:49.560612+0000 mgr.y (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056186 2021-04-18 16:11:33 2021-04-18 17:31:57 2021-04-18 17:49:53 0:17:56 0:08:15 0:09:41 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 6056187 2021-04-18 16:11:34 2021-04-18 17:31:58 2021-04-18 18:18:09 0:46:11 0:35:37 0:10:34 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
fail 6056188 2021-04-18 16:11:34 2021-04-18 17:32:18 2021-04-18 18:22:09 0:49:51 0:38:12 0:11:39 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} 2
Failure Reason:

"2021-04-18T17:50:10.944394+0000 mgr.y (mgr.4101) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056189 2021-04-18 16:11:35 2021-04-18 17:34:00 2021-04-18 18:13:28 0:39:28 0:29:16 0:10:12 smithi master centos 8.3 rados/singleton/{all/lost-unfound mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} 1
fail 6056190 2021-04-18 16:11:36 2021-04-18 17:34:01 2021-04-18 18:01:55 0:27:54 0:17:48 0:10:06 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-agent-big} 2
Failure Reason:

"2021-04-18T17:49:55.040152+0000 mgr.x (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056191 2021-04-18 16:11:37 2021-04-18 17:34:21 2021-04-18 18:03:03 0:28:42 0:16:32 0:12:10 smithi master ubuntu 20.04 rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
fail 6056192 2021-04-18 16:11:38 2021-04-18 17:34:22 2021-04-18 17:59:54 0:25:32 0:13:21 0:12:11 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} 2
Failure Reason:

"2021-04-18T17:51:58.547270+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056193 2021-04-18 16:11:39 2021-04-18 17:36:02 2021-04-18 17:59:54 0:23:52 0:13:18 0:10:34 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/cache-agent-small} 2
Failure Reason:

"2021-04-18T17:52:06.574187+0000 mgr.y (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056194 2021-04-18 16:11:40 2021-04-18 17:36:23 2021-04-18 17:52:58 0:16:35 0:08:19 0:08:16 smithi master centos 8.3 rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 6056195 2021-04-18 16:11:41 2021-04-18 17:36:23 2021-04-18 18:06:38 0:30:15 0:21:35 0:08:40 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_orch_cli} 1
fail 6056196 2021-04-18 16:11:41 2021-04-18 17:36:24 2021-04-18 18:06:39 0:30:15 0:18:53 0:11:22 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-pool-snaps-readproxy} 2
Failure Reason:

"2021-04-18T17:52:12.581609+0000 mgr.x (mgr.4126) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056197 2021-04-18 16:11:42 2021-04-18 17:36:24 2021-04-18 18:22:09 0:45:45 0:39:41 0:06:04 smithi master rhel 8.3 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

"2021-04-18T18:01:58.972062+0000 mgr.x (mgr.6011) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056198 2021-04-18 16:11:43 2021-04-18 17:36:25 2021-04-18 17:59:53 0:23:28 0:14:07 0:09:21 smithi master centos 8.2 rados/cephadm/smoke-roleless/{0-distro/centos_8.2_kubic_stable 1-start 2-services/iscsi 3-final} 2
pass 6056199 2021-04-18 16:11:44 2021-04-18 17:36:25 2021-04-18 17:53:54 0:17:29 0:07:56 0:09:33 smithi master centos 8.3 rados/singleton/{all/max-pg-per-osd.from-mon mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} 1
pass 6056200 2021-04-18 16:11:45 2021-04-18 17:36:25 2021-04-18 17:55:55 0:19:30 0:10:36 0:08:54 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_18.04 workloads/radosbench_4M_rand_read} 1
fail 6056201 2021-04-18 16:11:46 2021-04-18 17:36:26 2021-04-18 18:16:09 0:39:43 0:27:25 0:12:18 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-pool-snaps} 2
Failure Reason:

"2021-04-18T17:53:40.268610+0000 mgr.y (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056202 2021-04-18 16:11:47 2021-04-18 17:37:58 2021-04-18 18:32:10 0:54:12 0:45:34 0:08:38 smithi master ubuntu 20.04 rados/standalone/{mon_election/classic supported-random-distro$/{ubuntu_latest} workloads/mon} 1
fail 6056203 2021-04-18 16:11:48 2021-04-18 17:38:27 2021-04-18 17:56:02 0:17:35 0:07:29 0:10:06 smithi master centos 8.3 rados/upgrade/pacific-x/rgw-multisite/{clusters frontend overrides realm supported-random-distro$/{centos_8} tasks upgrade/primary} 2
Failure Reason:

An attempt to upgrade from a higher version to a lower one will always fail. Hint: check tags in the target git branch.
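
The hint in this failure reason points at a version-ordering check: the pacific-x upgrade job installed one release and then tried to "upgrade" to a build whose tags resolve to a lower version, which is rejected up front. A minimal sketch of such a check follows; parse_version, check_upgrade, and the version strings are illustrative assumptions rather than the upgrade task's real implementation.

# Illustrative sketch only -- not the teuthology upgrade task's actual check.
def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '16.2.4' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def check_upgrade(installed: str, target: str) -> None:
    """Refuse an 'upgrade' whose target version is lower than the installed one."""
    if parse_version(target) < parse_version(installed):
        raise RuntimeError(
            f"An attempt to upgrade from {installed} to {target} will always fail. "
            "Hint: check tags in the target git branch."
        )

check_upgrade("16.2.0", "17.0.0")    # a genuine upgrade passes the check
# check_upgrade("17.0.0", "16.2.0")  # a downgrade raises RuntimeError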

fail 6056204 2021-04-18 16:11:49 2021-04-18 17:38:28 2021-04-18 18:20:10 0:41:42 0:31:36 0:10:06 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/cache-snaps-balanced} 2
Failure Reason:

"2021-04-18T18:03:36.122457+0000 mgr.x (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056205 2021-04-18 16:11:50 2021-04-18 17:39:58 2021-04-18 18:38:12 0:58:14 0:39:54 0:18:20 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} 3
pass 6056206 2021-04-18 16:11:50 2021-04-18 17:40:19 2021-04-18 18:26:09 0:45:50 0:36:24 0:09:26 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
pass 6056207 2021-04-18 16:11:51 2021-04-18 17:40:19 2021-04-18 17:57:54 0:17:35 0:08:30 0:09:05 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 6056208 2021-04-18 16:11:52 2021-04-18 17:40:19 2021-04-18 18:22:11 0:41:52 0:34:43 0:07:09 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/cache-snaps} 2
pass 6056209 2021-04-18 16:11:53 2021-04-18 17:42:00 2021-04-18 18:34:12 0:52:12 0:44:58 0:07:14 smithi master rhel 8.3 rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
dead 6056210 2021-04-18 16:11:54 2021-04-18 17:42:00 2021-04-18 20:08:58 2:26:58 smithi master centos 8.3 rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{centos_8}} 1
fail 6056211 2021-04-18 16:11:55 2021-04-18 17:42:00 2021-04-18 18:06:38 0:24:38 0:13:47 0:10:51 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache} 2
Failure Reason:

"2021-04-18T17:57:57.434448+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056212 2021-04-18 16:11:56 2021-04-18 17:42:21 2021-04-18 18:01:54 0:19:33 0:11:52 0:07:41 smithi master centos 8.3 rados/singleton/{all/max-pg-per-osd.from-primary mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
fail 6056213 2021-04-18 16:11:57 2021-04-18 17:42:21 2021-04-18 18:13:28 0:31:07 0:20:41 0:10:26 smithi master rhel 8.3 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

"2021-04-18T18:08:05.927381+0000 mgr.x (mgr.4103) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056214 2021-04-18 16:11:58 2021-04-18 17:43:56 2021-04-18 18:11:04 0:27:08 0:19:33 0:07:35 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/dedup-io-mixed} 2
Failure Reason:

"2021-04-18T18:06:07.463733+0000 mgr.x (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056215 2021-04-18 16:11:59 2021-04-18 17:43:57 2021-04-18 18:08:40 0:24:43 0:14:26 0:10:17 smithi master centos 8.2 rados/cephadm/smoke/{distro/centos_8.2_kubic_stable fixed-2 mon_election/connectivity start} 2
fail 6056216 2021-04-18 16:12:00 2021-04-18 17:44:27 2021-04-18 18:18:10 0:33:43 0:22:45 0:10:58 smithi master centos 8.3 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-overwrites} 2
Failure Reason:

"2021-04-18T18:00:37.425631+0000 mgr.x (mgr.4103) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056217 2021-04-18 16:12:01 2021-04-18 17:44:27 2021-04-18 18:26:11 0:41:44 0:30:47 0:10:57 smithi master centos 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-low-osd-mem-target supported-random-distro$/{centos_8} tasks/module_selftest} 2
Failure Reason:

"2021-04-18T18:00:46.406825+0000 mgr.y (mgr.4111) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056218 2021-04-18 16:12:02 2021-04-18 17:44:28 2021-04-18 18:22:11 0:37:43 0:25:30 0:12:13 smithi master centos 8.3 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/sync-many workloads/rados_api_tests} 2
fail 6056219 2021-04-18 16:12:02 2021-04-18 17:45:58 2021-04-18 19:52:20 2:06:22 1:56:41 0:09:41 smithi master rhel 8.3 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-radosbench} 2
Failure Reason:

"2021-04-18T18:09:40.292601+0000 mgr.x (mgr.4114) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056220 2021-04-18 16:12:03 2021-04-18 17:47:59 2021-04-18 18:16:10 0:28:11 0:20:38 0:07:33 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/dedup-io-snaps} 2
Failure Reason:

"2021-04-18T18:10:24.858144+0000 mgr.y (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056221 2021-04-18 16:12:04 2021-04-18 17:47:59 2021-04-18 18:06:38 0:18:39 0:07:57 0:10:42 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 6056222 2021-04-18 16:12:05 2021-04-18 17:47:59 2021-04-18 18:24:11 0:36:12 0:26:50 0:09:22 smithi master centos 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 6056223 2021-04-18 16:12:06 2021-04-18 17:48:00 2021-04-18 18:45:08 0:57:08 0:47:00 0:10:08 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

"2021-04-18T18:16:24.608608+0000 mgr.y (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056224 2021-04-18 16:12:07 2021-04-18 17:48:00 2021-04-18 18:09:04 0:21:04 0:12:15 0:08:49 smithi master centos 8.2 rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.2_kubic_stable} 1-start 2-services/rgw 3-final} 1
fail 6056225 2021-04-18 16:12:08 2021-04-18 17:48:00 2021-04-18 18:26:12 0:38:12 0:28:06 0:10:06 smithi master centos 8.3 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

"2021-04-18T18:04:21.146959+0000 mgr.x (mgr.4111) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056226 2021-04-18 16:12:09 2021-04-18 17:48:42 2021-04-18 18:09:16 0:20:34 0:06:51 0:13:43 smithi master ubuntu 20.04 rados/multimon/{clusters/9 mon_election/classic msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} 3
Failure Reason:

"2021-04-18T18:06:31.499550+0000 mon.a (mon.0) 14 : cluster [WRN] Health check failed: 1/9 mons down, quorum a,b,c,d,e,f,g,i (MON_DOWN)" in cluster log

fail 6056227 2021-04-18 16:12:10 2021-04-18 17:50:02 2021-04-18 18:34:12 0:44:10 0:37:20 0:06:50 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/pool-snaps-few-objects} 2
Failure Reason:

"2021-04-18T18:11:54.846239+0000 mgr.y (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056228 2021-04-18 16:12:11 2021-04-18 17:50:03 2021-04-18 18:13:27 0:23:24 0:12:13 0:11:11 smithi master centos 8.3 rados/singleton/{all/max-pg-per-osd.from-replica mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 1
fail 6056229 2021-04-18 16:12:12 2021-04-18 17:50:43 2021-04-18 18:18:10 0:27:27 0:17:07 0:10:20 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/rados_stress_watch} 2
Failure Reason:

"2021-04-18T18:07:04.526428+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056230 2021-04-18 16:12:13 2021-04-18 17:50:43 2021-04-18 18:30:11 0:39:28 0:28:39 0:10:49 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/classic} 2
fail 6056231 2021-04-18 16:12:14 2021-04-18 17:50:44 2021-04-18 18:28:12 0:37:28 0:24:54 0:12:34 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

"2021-04-18T18:08:42.250781+0000 mgr.x (mgr.4105) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056232 2021-04-18 16:12:14 2021-04-18 17:52:03 2021-04-18 18:11:19 0:19:16 0:10:43 0:08:33 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_18.04 workloads/radosbench_4M_seq_read} 1
pass 6056233 2021-04-18 16:12:15 2021-04-18 17:52:03 2021-04-18 19:47:29 1:55:26 1:44:33 0:10:53 smithi master ubuntu 20.04 rados/standalone/{mon_election/connectivity supported-random-distro$/{ubuntu_latest} workloads/osd-backfill} 1
fail 6056234 2021-04-18 16:12:16 2021-04-18 17:52:04 2021-04-18 19:05:11 1:13:07 1:06:54 0:06:13 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/radosbench-high-concurrency} 2
Failure Reason:

"2021-04-18T18:14:24.569308+0000 mgr.y (mgr.4116) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056235 2021-04-18 16:12:17 2021-04-18 17:52:04 2021-04-18 18:10:43 0:18:39 0:11:17 0:07:22 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_adoption} 1
pass 6056236 2021-04-18 16:12:18 2021-04-18 17:52:04 2021-04-18 18:22:11 0:30:07 0:24:01 0:06:06 smithi master rhel 8.3 rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 6056237 2021-04-18 16:12:19 2021-04-18 17:52:04 2021-04-18 19:33:55 1:41:51 1:35:03 0:06:48 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/radosbench} 2
Failure Reason:

"2021-04-18T18:13:42.209773+0000 mgr.x (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056238 2021-04-18 16:12:20 2021-04-18 17:52:35 2021-04-18 18:18:10 0:25:35 0:18:31 0:07:04 smithi master rhel 8.3 rados/singleton/{all/mon-auth-caps mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} 1
pass 6056239 2021-04-18 16:12:21 2021-04-18 17:53:08 2021-04-18 18:20:10 0:27:02 0:18:28 0:08:34 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/mirror 3-final} 2
fail 6056240 2021-04-18 16:12:22 2021-04-18 17:55:12 2021-04-18 18:24:11 0:28:59 0:20:44 0:08:15 smithi master centos 8.3 rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

"2021-04-18T18:21:03.418909+0000 mgr.x (mgr.4102) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056241 2021-04-18 16:12:23 2021-04-18 17:55:12 2021-04-18 18:20:12 0:25:00 0:14:05 0:10:55 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/redirect} 2
Failure Reason:

"2021-04-18T18:13:18.445076+0000 mgr.x (mgr.4105) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056242 2021-04-18 16:12:24 2021-04-18 17:56:03 2021-04-18 18:24:11 0:28:08 0:20:55 0:07:13 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/redirect_promote_tests} 2
Failure Reason:

"2021-04-18T18:18:03.225978+0000 mgr.y (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056243 2021-04-18 16:12:25 2021-04-18 17:56:03 2021-04-18 18:26:09 0:30:06 0:19:06 0:11:00 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/none thrashosds-health workloads/test_rbd_api} 3
pass 6056244 2021-04-18 16:12:26 2021-04-18 17:57:04 2021-04-18 18:55:09 0:58:05 0:47:36 0:10:29 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
fail 6056245 2021-04-18 16:12:27 2021-04-18 17:57:04 2021-04-18 18:20:10 0:23:06 0:13:01 0:10:05 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/redirect_set_object} 2
Failure Reason:

"2021-04-18T18:13:43.173609+0000 mgr.x (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056246 2021-04-18 16:12:28 2021-04-18 17:57:05 2021-04-18 18:32:11 0:35:06 0:29:38 0:05:28 smithi master rhel 8.3 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{rhel_8}} 2
pass 6056247 2021-04-18 16:12:28 2021-04-18 17:57:05 2021-04-18 18:24:11 0:27:06 0:19:04 0:08:02 smithi master rhel 8.3 rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/classic start} 2
pass 6056248 2021-04-18 16:12:29 2021-04-18 17:57:56 2021-04-18 18:18:11 0:20:15 0:09:53 0:10:22 smithi master ubuntu 20.04 rados/singleton/{all/mon-config-key-caps mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} 1
pass 6056249 2021-04-18 16:12:30 2021-04-18 17:57:56 2021-04-18 18:28:11 0:30:15 0:24:31 0:05:44 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/set-chunks-read} 2
dead 6056250 2021-04-18 16:12:31 2021-04-18 17:57:57 2021-04-18 20:08:58 2:11:01 smithi master ubuntu 20.04 rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{ubuntu_latest}} 1
pass 6056251 2021-04-18 16:12:32 2021-04-18 17:57:57 2021-04-18 18:26:11 0:28:14 0:17:30 0:10:44 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_cephadm} 1
pass 6056252 2021-04-18 16:12:33 2021-04-18 17:59:58 2021-04-18 18:32:11 0:32:13 0:22:09 0:10:04 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/small-objects-balanced} 2
pass 6056253 2021-04-18 16:12:34 2021-04-18 17:59:59 2021-04-18 18:26:10 0:26:11 0:11:48 0:14:23 smithi master centos 8.3 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 6056254 2021-04-18 16:12:35 2021-04-18 18:01:59 2021-04-18 18:20:11 0:18:12 0:09:08 0:09:04 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_18.04 workloads/radosbench_4M_write} 1
pass 6056255 2021-04-18 16:12:36 2021-04-18 18:02:00 2021-04-18 19:54:20 1:52:20 1:43:37 0:08:43 smithi master centos 8.3 rados/standalone/{mon_election/classic supported-random-distro$/{centos_8} workloads/osd} 1
fail 6056256 2021-04-18 16:12:37 2021-04-18 18:02:00 2021-04-18 18:30:12 0:28:12 0:17:48 0:10:24 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/small-objects-localized} 2
Failure Reason:

"2021-04-18T18:17:44.198526+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056257 2021-04-18 16:12:38 2021-04-18 18:02:00 2021-04-18 18:34:13 0:32:13 0:23:08 0:09:05 smithi master ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} 2
fail 6056258 2021-04-18 16:12:39 2021-04-18 18:02:00 2021-04-18 18:38:11 0:36:11 0:28:30 0:07:41 smithi master rhel 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-stupid supported-random-distro$/{rhel_8} tasks/progress} 2
Failure Reason:

"2021-04-18T18:26:22.656292+0000 mgr.z (mgr.4111) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056259 2021-04-18 16:12:40 2021-04-18 18:03:11 2021-04-18 18:24:09 0:20:58 0:08:37 0:12:21 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} tasks/rados_striper} 2
Failure Reason:

"2021-04-18T18:21:30.542068+0000 mgr.x (mgr.4099) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056260 2021-04-18 16:12:40 2021-04-18 18:05:07 2021-04-18 18:38:10 0:33:03 0:27:01 0:06:02 smithi master rhel 8.3 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 6056261 2021-04-18 16:12:41 2021-04-18 18:05:08 2021-04-18 18:57:09 0:52:01 0:40:26 0:11:35 smithi master ubuntu 20.04 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_mon_osdmap_prune} 2
fail 6056262 2021-04-18 16:12:42 2021-04-18 18:05:08 2021-04-18 18:55:10 0:50:02 0:38:58 0:11:04 smithi master ubuntu 20.04 rados/singleton/{all/mon-config-keys mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6af76f5bbfd9f30f5d22ab88d9ff0fd40548fbfe TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_config_key.py'

fail 6056263 2021-04-18 16:12:43 2021-04-18 18:06:48 2021-04-18 18:45:08 0:38:20 0:31:38 0:06:42 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/small-objects} 2
Failure Reason:

"2021-04-18T18:28:57.566946+0000 mgr.y (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056264 2021-04-18 16:12:44 2021-04-18 18:06:49 2021-04-18 18:45:08 0:38:19 0:32:22 0:05:57 smithi master rhel 8.3 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-small-objects-balanced} 2
Failure Reason:

"2021-04-18T18:29:25.435745+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056265 2021-04-18 16:12:45 2021-04-18 18:06:49 2021-04-18 18:35:27 0:28:38 0:18:47 0:09:51 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 1-start 2-services/rgw-ingress 3-final} 2
pass 6056266 2021-04-18 16:12:46 2021-04-18 18:06:49 2021-04-18 18:34:13 0:27:24 0:20:37 0:06:47 smithi master rhel 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
fail 6056267 2021-04-18 16:12:47 2021-04-18 18:06:50 2021-04-18 18:36:13 0:29:23 0:16:54 0:12:29 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2021-04-18T18:24:49.717379+0000 mgr.x (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056268 2021-04-18 16:12:48 2021-04-18 18:08:51 2021-04-18 18:45:06 0:36:15 0:24:52 0:11:23 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/snaps-few-objects-balanced} 2
Failure Reason:

"2021-04-18T18:25:35.341794+0000 mgr.x (mgr.4102) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056269 2021-04-18 16:12:49 2021-04-18 18:09:11 2021-04-18 18:45:08 0:35:57 0:25:22 0:10:35 smithi master centos 8.3 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

"2021-04-18T18:25:41.484738+0000 mgr.x (mgr.4114) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056270 2021-04-18 16:12:50 2021-04-18 18:09:22 2021-04-18 18:30:13 0:20:51 0:07:37 0:13:14 smithi master centos 8.3 rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/few msgr/async no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 3
fail 6056271 2021-04-18 16:12:51 2021-04-18 18:10:47 2021-04-18 18:49:09 0:38:22 0:26:56 0:11:26 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/snaps-few-objects-localized} 2
Failure Reason:

"2021-04-18T18:26:49.722488+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056272 2021-04-18 16:12:52 2021-04-18 18:11:07 2021-04-18 18:40:08 0:29:01 0:15:00 0:14:01 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
fail 6056273 2021-04-18 16:12:52 2021-04-18 18:13:36 2021-04-18 18:53:07 0:39:31 0:29:06 0:10:25 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/snaps-few-objects} 2
Failure Reason:

"2021-04-18T18:30:35.432776+0000 mgr.y (mgr.4103) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056274 2021-04-18 16:12:53 2021-04-18 18:13:37 2021-04-18 18:37:54 0:24:17 0:17:05 0:07:12 smithi master rhel 8.3 rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 6056275 2021-04-18 16:12:54 2021-04-18 18:13:37 2021-04-18 18:30:28 0:16:51 0:08:47 0:08:04 smithi master centos 8.3 rados/singleton/{all/mon-config mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 1
pass 6056276 2021-04-18 16:12:55 2021-04-18 18:13:37 2021-04-18 18:53:07 0:39:30 0:27:57 0:11:33 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/connectivity} 2
fail 6056277 2021-04-18 16:12:56 2021-04-18 18:13:38 2021-04-18 18:40:09 0:26:31 0:14:01 0:12:30 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/write_fadvise_dontneed} 2
Failure Reason:

"2021-04-18T18:31:57.836557+0000 mgr.y (mgr.4101) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056278 2021-04-18 16:12:57 2021-04-18 18:16:17 2021-04-18 18:53:07 0:36:50 0:28:02 0:08:48 smithi master centos 8.3 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
Failure Reason:

"2021-04-18T18:31:56.328063+0000 mgr.y (mgr.4101) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056279 2021-04-18 16:12:58 2021-04-18 18:16:18 2021-04-18 18:33:27 0:17:09 0:07:14 0:09:55 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_cephadm_repos} 1
fail 6056280 2021-04-18 16:12:59 2021-04-18 18:16:18 2021-04-18 19:07:08 0:50:50 0:42:35 0:08:15 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/admin_socket_objecter_requests} 2
Failure Reason:

"2021-04-18T18:40:37.279337+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056281 2021-04-18 16:13:00 2021-04-18 18:18:19 2021-04-18 19:05:10 0:46:51 0:34:08 0:12:43 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/cache-snaps} 3
fail 6056282 2021-04-18 16:13:01 2021-04-18 18:18:20 2021-04-18 19:03:08 0:44:48 0:27:39 0:17:09 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_18.04 workloads/radosbench_omap_write} 1
Failure Reason:

Found coredumps on ubuntu@smithi114.front.sepia.ceph.com

fail 6056283 2021-04-18 16:13:03 2021-04-18 18:18:20 2021-04-18 18:51:07 0:32:47 0:22:34 0:10:13 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/cache-agent-big} 2
Failure Reason:

"2021-04-18T18:34:27.721277+0000 mgr.x (mgr.4102) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

dead 6056284 2021-04-18 16:13:04 2021-04-18 18:18:21 2021-04-18 20:08:58 1:50:37 smithi master ubuntu 20.04 rados/standalone/{mon_election/connectivity supported-random-distro$/{ubuntu_latest} workloads/scrub} 1
pass 6056285 2021-04-18 16:13:05 2021-04-18 18:18:21 2021-04-18 18:59:11 0:40:50 0:30:01 0:10:49 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
pass 6056286 2021-04-18 16:13:06 2021-04-18 18:18:22 2021-04-18 18:49:06 0:30:44 0:20:47 0:09:57 smithi master centos 8.3 rados/singleton/{all/osd-backfill mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
pass 6056287 2021-04-18 16:13:07 2021-04-18 18:20:12 2021-04-18 18:51:09 0:30:57 0:23:12 0:07:45 smithi master rhel 8.3 rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 6056288 2021-04-18 16:13:08 2021-04-18 18:20:13 2021-04-18 18:47:07 0:26:54 0:19:58 0:06:56 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/cache-agent-small} 2
Failure Reason:

"2021-04-18T18:42:32.102158+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056289 2021-04-18 16:13:09 2021-04-18 18:20:13 2021-04-18 18:47:06 0:26:53 0:17:30 0:09:23 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/rgw 3-final} 2
fail 6056290 2021-04-18 16:13:10 2021-04-18 18:20:13 2021-04-18 18:53:08 0:32:55 0:21:12 0:11:43 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-pool-snaps-readproxy} 2
Failure Reason:

"2021-04-18T18:36:10.924663+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056291 2021-04-18 16:13:11 2021-04-18 18:20:14 2021-04-18 18:35:05 0:14:51 0:06:22 0:08:29 smithi master ubuntu 20.04 rados/objectstore/{backends/fusestore supported-random-distro$/{ubuntu_latest}} 1
pass 6056292 2021-04-18 16:13:12 2021-04-18 18:20:14 2021-04-18 19:01:08 0:40:54 0:30:06 0:10:48 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_big} 2
pass 6056293 2021-04-18 16:13:13 2021-04-18 18:22:15 2021-04-18 18:57:07 0:34:52 0:25:09 0:09:43 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/cache-pool-snaps} 2
fail 6056294 2021-04-18 16:13:14 2021-04-18 18:22:15 2021-04-18 18:49:06 0:26:51 0:19:55 0:06:56 smithi master rhel 8.3 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

"2021-04-18T18:44:55.920847+0000 mgr.y (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056295 2021-04-18 16:13:15 2021-04-18 18:22:15 2021-04-18 19:11:43 0:49:28 0:37:19 0:12:09 smithi master ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
pass 6056296 2021-04-18 16:13:16 2021-04-18 18:24:16 2021-04-18 19:01:09 0:36:53 0:30:53 0:06:00 smithi master rhel 8.3 rados/singleton/{all/osd-recovery-incomplete mon_election/connectivity msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 1
pass 6056297 2021-04-18 16:13:17 2021-04-18 18:24:17 2021-04-18 18:43:06 0:18:49 0:07:55 0:10:54 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 6056298 2021-04-18 16:13:18 2021-04-18 18:24:17 2021-04-18 18:59:07 0:34:50 0:26:08 0:08:42 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/cache-snaps-balanced} 2
Failure Reason:

"2021-04-18T18:40:39.454851+0000 mgr.y (mgr.4115) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056299 2021-04-18 16:13:19 2021-04-18 18:24:17 2021-04-18 18:51:07 0:26:50 0:17:11 0:09:39 smithi master ubuntu 20.04 rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
fail 6056300 2021-04-18 16:13:20 2021-04-18 18:24:18 2021-04-18 18:51:06 0:26:48 0:20:42 0:06:06 smithi master rhel 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/filestore-xfs supported-random-distro$/{rhel_8} tasks/prometheus} 2
Failure Reason:

"2021-04-18T18:45:44.198463+0000 mgr.z (mgr.4102) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056301 2021-04-18 16:13:21 2021-04-18 18:24:18 2021-04-18 19:11:43 0:47:25 0:36:59 0:10:26 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/cache-snaps} 2
Failure Reason:

"2021-04-18T18:49:12.494207+0000 mgr.y (mgr.4112) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056302 2021-04-18 16:13:22 2021-04-18 18:26:19 2021-04-18 19:13:49 0:47:30 0:40:10 0:07:20 smithi master rhel 8.3 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

"2021-04-18T18:53:14.971743+0000 mgr.x (mgr.6011) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056303 2021-04-18 16:13:23 2021-04-18 18:26:19 2021-04-18 19:05:09 0:38:50 0:27:30 0:11:20 smithi master ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/rados_mon_workunits} 2
fail 6056304 2021-04-18 16:13:24 2021-04-18 18:26:19 2021-04-18 19:03:06 0:36:47 0:30:50 0:05:57 smithi master rhel 8.3 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read} 2
Failure Reason:

"2021-04-18T18:48:54.923981+0000 mgr.y (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056305 2021-04-18 16:13:25 2021-04-18 18:26:20 2021-04-18 18:57:07 0:30:47 0:22:30 0:08:17 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_orch_cli} 1
fail 6056306 2021-04-18 16:13:26 2021-04-18 18:26:20 2021-04-18 18:51:06 0:24:46 0:14:14 0:10:32 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/cache} 2
Failure Reason:

"2021-04-18T18:42:56.757368+0000 mgr.y (mgr.4115) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056307 2021-04-18 16:13:26 2021-04-18 18:26:21 2021-04-18 19:05:08 0:38:47 0:28:51 0:09:56 smithi master centos 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 6056308 2021-04-18 16:13:27 2021-04-18 18:26:21 2021-04-18 19:05:07 0:38:46 0:29:23 0:09:23 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

"2021-04-18T18:54:43.312276+0000 mgr.y (mgr.4103) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056309 2021-04-18 16:13:29 2021-04-18 18:26:21 2021-04-18 19:11:44 0:45:23 0:36:51 0:08:32 smithi master rhel 8.3 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

"2021-04-18T18:50:42.424050+0000 mgr.y (mgr.4119) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056310 2021-04-18 16:13:29 2021-04-18 18:28:22 2021-04-18 18:47:07 0:18:45 0:09:07 0:09:38 smithi master ubuntu 20.04 rados/multimon/{clusters/3 mon_election/classic msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} 2
pass 6056311 2021-04-18 16:13:30 2021-04-18 18:28:23 2021-04-18 18:59:07 0:30:44 0:23:23 0:07:21 smithi master rhel 8.3 rados/singleton/{all/osd-recovery mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
pass 6056312 2021-04-18 16:13:31 2021-04-18 18:30:13 2021-04-18 18:51:06 0:20:53 0:13:11 0:07:42 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_18.04 workloads/sample_fio} 1
fail 6056313 2021-04-18 16:13:32 2021-04-18 18:30:14 2021-04-18 18:57:08 0:26:54 0:20:10 0:06:44 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/dedup-io-mixed} 2
Failure Reason:

"2021-04-18T18:53:36.357145+0000 mgr.y (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056314 2021-04-18 16:13:33 2021-04-18 18:30:14 2021-04-18 19:17:49 0:47:35 0:37:27 0:10:08 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
fail 6056315 2021-04-18 16:13:34 2021-04-18 18:30:15 2021-04-18 19:52:20 1:22:05 1:10:05 0:12:00 smithi master centos 8.3 rados/dashboard/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} supported-random-distro$/{centos_8} tasks/dashboard} 2
Failure Reason:

"2021-04-18T18:46:29.430175+0000 mgr.y (mgr.4102) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056316 2021-04-18 16:13:35 2021-04-18 18:30:15 2021-04-18 18:55:08 0:24:53 0:15:34 0:09:19 smithi master centos 8.3 rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 6056317 2021-04-18 16:13:36 2021-04-18 18:30:35 2021-04-18 18:59:09 0:28:34 0:20:56 0:07:38 smithi master rhel 8.3 rados/standalone/{mon_election/connectivity supported-random-distro$/{rhel_8} workloads/crush} 1
dead 6056318 2021-04-18 16:13:37 2021-04-18 18:32:16 2021-04-18 20:08:58 1:36:42 smithi master rhel 8.3 rados/upgrade/pacific-x/parallel/{0-start 1-tasks distro1$/{rhel_8.3_kubic_stable} mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
fail 6056319 2021-04-18 16:13:38 2021-04-18 18:32:16 2021-04-18 19:03:09 0:30:53 0:24:08 0:06:45 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/dedup-io-snaps} 2
Failure Reason:

"2021-04-18T18:54:49.280807+0000 mgr.x (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056320 2021-04-18 16:13:39 2021-04-18 18:32:17 2021-04-18 18:42:10 0:09:53 smithi master centos 8.2 rados/cephadm/dashboard/{0-distro/centos_8.2_kubic_stable task/test_e2e} 2
Failure Reason:

Command failed on smithi013 with status 100: 'sudo apt-get clean'

pass 6056321 2021-04-18 16:13:40 2021-04-18 18:33:47 2021-04-18 19:13:50 0:40:03 0:30:24 0:09:39 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/pool-snaps-few-objects} 2
pass 6056322 2021-04-18 16:13:41 2021-04-18 18:33:58 2021-04-18 20:06:53 1:32:55 1:21:39 0:11:16 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/radosbench} 3
pass 6056323 2021-04-18 16:13:42 2021-04-18 18:33:58 2021-04-18 19:13:32 0:39:34 0:27:35 0:11:59 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/rados_api_tests} 2
dead 6056324 2021-04-18 16:13:43 2021-04-18 18:34:18 2021-04-18 18:42:10 0:07:52 smithi master ubuntu 18.04 rados/cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_18.04} 2-node-mgr orchestrator_cli} 2
Failure Reason:

SSH connection to smithi013 was lost: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y install linux-image-current-generic'

pass 6056325 2021-04-18 16:13:44 2021-04-18 18:34:29 2021-04-18 18:52:11 0:17:42 0:08:20 0:09:22 smithi master centos 8.3 rados/singleton/{all/peer mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
pass 6056326 2021-04-18 16:13:45 2021-04-18 18:34:29 2021-04-18 19:00:32 0:26:03 0:18:49 0:07:14 smithi master rhel 8.3 rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{rhel_8}} 1
fail 6056327 2021-04-18 16:13:46 2021-04-18 18:35:09 2021-04-18 19:13:50 0:38:41 0:26:58 0:11:43 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/radosbench-high-concurrency} 2
Failure Reason:

"2021-04-18T18:50:57.843948+0000 mgr.y (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056328 2021-04-18 16:13:47 2021-04-18 18:35:30 2021-04-18 19:07:12 0:31:42 0:19:43 0:11:59 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_mix} 2
Failure Reason:

"2021-04-18T18:53:33.809509+0000 mgr.y (mgr.4106) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056329 2021-04-18 16:13:48 2021-04-18 18:36:20 2021-04-18 19:02:32 0:26:12 0:14:31 0:11:41 smithi master centos 8.2 rados/cephadm/smoke/{distro/centos_8.2_kubic_stable fixed-2 mon_election/classic start} 2
dead 6056330 2021-04-18 16:13:49 2021-04-18 18:38:01 2021-04-18 20:08:58 1:30:57 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/radosbench} 2
pass 6056331 2021-04-18 16:13:50 2021-04-18 18:38:21 2021-04-18 19:02:32 0:24:11 0:14:05 0:10:06 smithi master ubuntu 20.04 rados/objectstore/{backends/keyvaluedb supported-random-distro$/{ubuntu_latest}} 1
pass 6056332 2021-04-18 16:13:51 2021-04-18 18:38:22 2021-04-18 19:02:33 0:24:11 0:13:58 0:10:13 smithi master centos 8.2 rados/cephadm/smoke-roleless/{0-distro/centos_8.2_kubic_stable 1-start 2-services/basic 3-final} 2
fail 6056333 2021-04-18 16:13:52 2021-04-18 18:38:22 2021-04-18 19:02:32 0:24:10 0:10:51 0:13:19 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/redirect} 2
Failure Reason:

"2021-04-18T18:56:01.633190+0000 mgr.y (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056334 2021-04-18 16:13:53 2021-04-18 18:40:18 2021-04-18 19:00:32 0:20:14 0:09:23 0:10:51 smithi master centos 8.3 rados/singleton/{all/pg-autoscaler-progress-off mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 2
fail 6056335 2021-04-18 16:13:54 2021-04-18 18:40:18 2021-04-18 19:09:43 0:29:25 0:11:52 0:17:33 smithi master ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

"2021-04-18T19:03:05.715298+0000 mgr.x (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056336 2021-04-18 16:13:55 2021-04-18 18:43:17 2021-04-18 19:06:51 0:23:34 0:13:10 0:10:24 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/redirect_promote_tests} 2
Failure Reason:

"2021-04-18T18:59:25.640825+0000 mgr.y (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056337 2021-04-18 16:13:56 2021-04-18 18:43:17 2021-04-18 19:03:01 0:19:44 0:09:30 0:10:14 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_18.04 workloads/sample_radosbench} 1
pass 6056338 2021-04-18 16:13:57 2021-04-18 18:43:18 2021-04-18 19:02:32 0:19:14 0:08:41 0:10:33 smithi master centos 8.3 rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 6056339 2021-04-18 16:13:58 2021-04-18 18:45:10 2021-04-18 19:04:33 0:19:23 0:10:55 0:08:28 smithi master ubuntu 18.04 rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_18.04} 1-start 2-services/basic 3-final} 1
pass 6056340 2021-04-18 16:13:59 2021-04-18 18:45:11 2021-04-18 19:33:54 0:48:43 0:40:46 0:07:57 smithi master ubuntu 20.04 rados/standalone/{mon_election/classic supported-random-distro$/{ubuntu_latest} workloads/erasure-code} 1
fail 6056341 2021-04-18 16:14:00 2021-04-18 18:45:11 2021-04-18 19:10:51 0:25:40 0:15:00 0:10:40 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/redirect_set_object} 2
Failure Reason:

"2021-04-18T19:01:36.492053+0000 mgr.y (mgr.4106) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056342 2021-04-18 16:14:01 2021-04-18 18:45:12 2021-04-18 19:23:37 0:38:25 0:28:10 0:10:15 smithi master centos 8.3 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
Failure Reason:

"2021-04-18T19:01:54.281220+0000 mgr.y (mgr.4100) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056343 2021-04-18 16:14:02 2021-04-18 18:45:12 2021-04-18 19:04:34 0:19:22 0:08:58 0:10:24 smithi master centos 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-bitmap supported-random-distro$/{centos_8} tasks/workunits} 2
Failure Reason:

"2021-04-18T19:01:32.259696+0000 mgr.z (mgr.4102) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056344 2021-04-18 16:14:03 2021-04-18 18:45:13 2021-04-18 19:19:36 0:34:23 0:25:06 0:09:17 smithi master centos 8.3 rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/classic} 2
fail 6056345 2021-04-18 16:14:04 2021-04-18 18:45:13 2021-04-18 19:13:32 0:28:19 0:16:14 0:12:05 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/set-chunks-read} 2
Failure Reason:

"2021-04-18T19:02:59.745282+0000 mgr.x (mgr.4105) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056346 2021-04-18 16:14:05 2021-04-18 18:47:08 2021-04-18 19:38:21 0:51:13 0:43:46 0:07:27 smithi master rhel 8.3 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/many workloads/snaps-few-objects} 2
pass 6056347 2021-04-18 16:14:06 2021-04-18 18:47:08 2021-04-18 19:07:14 0:20:06 0:08:42 0:11:24 smithi master ubuntu 20.04 rados/singleton/{all/pg-autoscaler mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} 2
fail 6056348 2021-04-18 16:14:07 2021-04-18 18:47:09 2021-04-18 19:29:53 0:42:44 0:33:34 0:09:10 smithi master rhel 8.3 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-many-deletes} 2
Failure Reason:

"2021-04-18T19:11:06.582369+0000 mgr.y (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056349 2021-04-18 16:14:08 2021-04-18 18:49:09 2021-04-18 19:19:36 0:30:27 0:19:05 0:11:22 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/small-objects-balanced} 2
Failure Reason:

"2021-04-18T19:05:08.521774+0000 mgr.x (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056350 2021-04-18 16:14:09 2021-04-18 18:49:10 2021-04-18 19:08:39 0:19:29 0:10:23 0:09:06 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_adoption} 1
pass 6056351 2021-04-18 16:14:10 2021-04-18 18:49:10 2021-04-18 19:13:32 0:24:22 0:17:36 0:06:46 smithi master rhel 8.3 rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 6056352 2021-04-18 16:14:11 2021-04-18 18:49:10 2021-04-18 19:15:32 0:26:22 0:10:17 0:16:05 smithi master ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
fail 6056353 2021-04-18 16:14:12 2021-04-18 18:51:12 2021-04-18 19:27:37 0:36:25 0:25:20 0:11:05 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/lockdep} 2
Failure Reason:

"2021-04-18T19:08:13.312598+0000 mgr.x (mgr.4105) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056354 2021-04-18 16:14:13 2021-04-18 18:51:12 2021-04-18 19:29:53 0:38:41 0:32:07 0:06:34 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/small-objects-localized} 2
Failure Reason:

"2021-04-18T19:13:23.606147+0000 mgr.y (mgr.4103) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056355 2021-04-18 16:14:14 2021-04-18 18:51:12 2021-04-18 19:23:37 0:32:25 0:22:21 0:10:04 smithi master centos 8.3 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

"2021-04-18T19:07:59.850514+0000 mgr.y (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056356 2021-04-18 16:14:15 2021-04-18 18:51:13 2021-04-18 19:08:39 0:17:26 0:06:27 0:10:59 smithi master ubuntu 20.04 rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} 2
Failure Reason:

"2021-04-18T19:05:54.925420+0000 mgr.x (mgr.4102) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056357 2021-04-18 16:14:16 2021-04-18 18:51:13 2021-04-18 19:23:37 0:32:24 0:22:51 0:09:33 smithi master centos 8.2 rados/cephadm/with-work/{0-distro/centos_8.2_kubic_stable fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
fail 6056358 2021-04-18 16:14:17 2021-04-18 18:51:13 2021-04-18 19:21:37 0:30:24 0:18:54 0:11:30 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/small-objects} 2
Failure Reason:

"2021-04-18T19:09:29.143219+0000 mgr.x (mgr.4102) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056359 2021-04-18 16:14:18 2021-04-18 18:53:14 2021-04-18 19:31:54 0:38:40 0:26:56 0:11:44 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/default thrashosds-health workloads/rbd_cls} 3
pass 6056360 2021-04-18 16:14:19 2021-04-18 18:53:14 2021-04-18 19:10:43 0:17:29 0:08:02 0:09:27 smithi master centos 8.3 rados/singleton/{all/pg-removal-interruption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
pass 6056361 2021-04-18 16:14:20 2021-04-18 18:53:15 2021-04-18 19:40:21 0:47:06 0:36:37 0:10:29 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
fail 6056362 2021-04-18 16:14:21 2021-04-18 18:53:15 2021-04-18 19:36:21 0:43:06 0:34:10 0:08:56 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/snaps-few-objects-balanced} 2
Failure Reason:

"2021-04-18T19:16:34.539465+0000 mgr.x (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056363 2021-04-18 16:14:22 2021-04-18 18:55:16 2021-04-18 19:38:21 0:43:05 0:32:36 0:10:29 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_mostlyread} 2
Failure Reason:

"2021-04-18T19:10:41.379985+0000 mgr.x (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056364 2021-04-18 16:14:23 2021-04-18 18:55:16 2021-04-18 19:31:54 0:36:38 0:24:29 0:12:09 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/snaps-few-objects-localized} 2
Failure Reason:

"2021-04-18T19:13:11.713060+0000 mgr.y (mgr.4098) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056365 2021-04-18 16:14:24 2021-04-18 18:57:17 2021-04-18 19:19:36 0:22:19 0:15:59 0:06:20 smithi master rhel 8.3 rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 6056366 2021-04-18 16:14:25 2021-04-18 18:57:17 2021-04-18 19:21:36 0:24:19 0:18:35 0:05:44 smithi master rhel 8.3 rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/connectivity start} 2
pass 6056367 2021-04-18 16:14:26 2021-04-18 18:57:18 2021-04-18 19:23:38 0:26:20 0:16:17 0:10:03 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_18.04 workloads/cosbench_64K_read_write} 1
pass 6056368 2021-04-18 16:14:27 2021-04-18 18:57:18 2021-04-18 19:17:32 0:20:14 0:10:53 0:09:21 smithi master centos 8.3 rados/standalone/{mon_election/connectivity supported-random-distro$/{centos_8} workloads/mgr} 1
fail 6056369 2021-04-18 16:14:28 2021-04-18 18:57:18 2021-04-18 19:33:54 0:36:36 0:23:30 0:13:06 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/snaps-few-objects} 2
Failure Reason:

"2021-04-18T19:14:36.728483+0000 mgr.x (mgr.4103) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056370 2021-04-18 16:14:29 2021-04-18 18:59:09 2021-04-18 19:25:37 0:26:28 0:16:34 0:09:54 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_cephadm} 1
pass 6056371 2021-04-18 16:14:30 2021-04-18 18:59:09 2021-04-18 19:21:36 0:22:27 0:13:40 0:08:47 smithi master centos 8.3 rados/singleton/{all/radostool mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} 1
fail 6056372 2021-04-18 16:14:31 2021-04-18 18:59:10 2021-04-18 19:25:50 0:26:40 0:14:56 0:11:44 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/write_fadvise_dontneed} 2
Failure Reason:

"2021-04-18T19:14:54.280303+0000 mgr.y (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056373 2021-04-18 16:14:32 2021-04-18 18:59:20 2021-04-18 19:36:21 0:37:01 0:27:43 0:09:18 smithi master ubuntu 20.04 rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{ubuntu_latest}} 1
fail 6056374 2021-04-18 16:14:33 2021-04-18 18:59:20 2021-04-18 19:56:20 0:57:00 0:45:59 0:11:01 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/admin_socket_objecter_requests} 2
Failure Reason:

"2021-04-18T19:17:04.427902+0000 mgr.y (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056375 2021-04-18 16:14:34 2021-04-18 19:00:40 2021-04-18 19:27:50 0:27:10 0:18:12 0:08:58 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/iscsi 3-final} 2
pass 6056376 2021-04-18 16:14:34 2021-04-18 19:01:11 2021-04-18 19:17:49 0:16:38 0:06:32 0:10:06 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 6056377 2021-04-18 16:14:35 2021-04-18 19:01:11 2021-04-18 19:31:54 0:30:43 0:14:39 0:16:04 smithi master centos 8.3 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

"2021-04-18T19:25:09.149902+0000 mgr.y (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056378 2021-04-18 16:14:36 2021-04-18 19:02:42 2021-04-18 19:33:55 0:31:13 0:18:46 0:12:27 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-agent-big} 2
Failure Reason:

"2021-04-18T19:21:31.569759+0000 mgr.y (mgr.4103) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056379 2021-04-18 16:14:37 2021-04-18 19:02:42 2021-04-18 19:29:54 0:27:12 0:15:31 0:11:41 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
fail 6056380 2021-04-18 16:14:38 2021-04-18 19:02:42 2021-04-18 19:25:37 0:22:55 0:12:52 0:10:03 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/cache-agent-small} 2
Failure Reason:

"2021-04-18T19:19:17.183725+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056381 2021-04-18 16:14:39 2021-04-18 19:03:03 2021-04-18 19:49:44 0:46:41 0:36:45 0:09:56 smithi master rhel 8.3 rados/singleton/{all/random-eio mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 2
fail 6056382 2021-04-18 16:14:40 2021-04-18 19:03:13 2021-04-18 19:23:37 0:20:24 0:11:25 0:08:59 smithi master centos 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-lz4 supported-random-distro$/{centos_8} tasks/crash} 2
Failure Reason:

"2021-04-18T19:19:02.483990+0000 mgr.z (mgr.4111) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056383 2021-04-18 16:14:41 2021-04-18 19:03:14 2021-04-18 19:31:55 0:28:41 0:20:36 0:08:05 smithi master centos 8.3 rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} 1
Failure Reason:

"2021-04-18T19:29:12.775517+0000 mgr.y (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056384 2021-04-18 16:14:42 2021-04-18 19:03:14 2021-04-18 19:44:52 0:41:38 0:29:49 0:11:49 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/connectivity} 2
fail 6056385 2021-04-18 16:14:43 2021-04-18 19:03:15 2021-04-18 19:38:21 0:35:06 0:22:48 0:12:18 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/cache-pool-snaps-readproxy} 2
Failure Reason:

"2021-04-18T19:22:03.202034+0000 mgr.y (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056386 2021-04-18 16:14:44 2021-04-18 19:04:35 2021-04-18 19:33:54 0:29:19 0:17:30 0:11:49 smithi master ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/pool-create-delete} 2
fail 6056387 2021-04-18 16:14:45 2021-04-18 19:05:16 2021-04-18 19:38:21 0:33:05 0:21:01 0:12:04 smithi master centos 8.3 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects} 2
Failure Reason:

"2021-04-18T19:23:05.158838+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056388 2021-04-18 16:14:46 2021-04-18 19:05:16 2021-04-18 19:31:55 0:26:39 0:19:11 0:07:28 smithi master rhel 8.3 rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 6056389 2021-04-18 16:14:47 2021-04-18 19:05:16 2021-04-18 19:40:21 0:35:05 0:25:07 0:09:58 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/cache-pool-snaps} 2
Failure Reason:

"2021-04-18T19:22:55.285547+0000 mgr.y (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056390 2021-04-18 16:14:48 2021-04-18 19:05:17 2021-04-18 19:23:51 0:18:34 0:08:37 0:09:57 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_cephadm_repos} 1
pass 6056391 2021-04-18 16:14:49 2021-04-18 19:05:17 2021-04-18 19:42:21 0:37:04 0:28:10 0:08:54 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_18.04 workloads/cosbench_64K_write} 1
fail 6056392 2021-04-18 16:14:50 2021-04-18 19:05:17 2021-04-18 19:42:22 0:37:05 0:30:49 0:06:16 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/cache-snaps-balanced} 2
Failure Reason:

"2021-04-18T19:28:26.683064+0000 mgr.y (mgr.4112) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056393 2021-04-18 16:14:51 2021-04-18 19:05:17 2021-04-18 20:04:22 0:59:05 0:46:31 0:12:34 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
fail 6056394 2021-04-18 16:14:52 2021-04-18 19:06:58 2021-04-18 19:31:54 0:24:56 0:15:59 0:08:57 smithi master rhel 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/readwrite} 2
Failure Reason:

"2021-04-18T19:25:06.625625+0000 mgr.x (mgr.4105) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056395 2021-04-18 16:14:53 2021-04-18 19:07:18 2021-04-18 19:52:20 0:45:02 0:38:41 0:06:21 smithi master rhel 8.3 rados/standalone/{mon_election/classic supported-random-distro$/{rhel_8} workloads/misc} 1
pass 6056396 2021-04-18 16:14:54 2021-04-18 19:07:19 2021-04-18 19:47:29 0:40:10 0:28:02 0:12:08 smithi master ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
dead 6056397 2021-04-18 16:14:55 2021-04-18 19:07:19 2021-04-18 20:08:58 1:01:39 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados tasks/rados_cls_all validater/valgrind} 2
pass 6056398 2021-04-18 16:14:56 2021-04-18 19:09:45 2021-04-18 19:34:20 0:24:35 0:16:06 0:08:29 smithi master centos 8.3 rados/singleton/{all/rebuild-mondb mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
pass 6056399 2021-04-18 16:14:57 2021-04-18 19:09:45 2021-04-18 19:54:20 0:44:35 0:36:27 0:08:08 smithi master rhel 8.3 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 6056400 2021-04-18 16:14:58 2021-04-18 19:09:56 2021-04-18 20:02:21 0:52:25 0:46:02 0:06:23 smithi master rhel 8.3 rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
pass 6056401 2021-04-18 16:14:59 2021-04-18 19:09:56 2021-04-18 19:30:34 0:20:38 0:07:40 0:12:58 smithi master centos 8.3 rados/multimon/{clusters/9 mon_election/classic msgr-failures/many msgr/async no_pools objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 3
fail 6056402 2021-04-18 16:15:00 2021-04-18 19:11:02 2021-04-18 19:52:20 0:41:18 0:34:04 0:07:14 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/cache-snaps} 2
Failure Reason:

"2021-04-18T19:34:22.428703+0000 mgr.y (mgr.4106) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056403 2021-04-18 16:15:01 2021-04-18 19:11:52 2021-04-18 19:38:21 0:26:29 0:20:26 0:06:03 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/cache} 2
Failure Reason:

"2021-04-18T19:34:47.272567+0000 mgr.x (mgr.4111) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

dead 6056404 2021-04-18 16:15:02 2021-04-18 19:11:52 2021-04-18 20:08:58 0:57:06 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
pass 6056405 2021-04-18 16:15:03 2021-04-18 19:11:53 2021-04-18 19:29:53 0:18:00 0:08:17 0:09:43 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 6056406 2021-04-18 16:15:04 2021-04-18 19:11:53 2021-04-18 19:49:44 0:37:51 0:25:45 0:12:06 smithi master ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
Failure Reason:

"2021-04-18T19:28:05.184460+0000 mgr.x (mgr.4106) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6056407 2021-04-18 16:15:05 2021-04-18 19:11:53 2021-04-18 19:33:54 0:22:01 0:09:41 0:12:20 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/dedup-io-mixed} 2
Failure Reason:

"2021-04-18T19:30:00.732391+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

pass 6056408 2021-04-18 16:15:06 2021-04-18 19:13:51 2021-04-18 19:38:22 0:24:31 0:14:48 0:09:43 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 1-start 2-services/mirror 3-final} 2
pass 6056409 2021-04-18 16:15:07 2021-04-18 19:13:51 2021-04-18 19:44:52 0:31:01 0:24:44 0:06:17 smithi master rhel 8.3 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 1