Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 4628363 2019-12-23 16:20:44 2019-12-23 16:27:25 2019-12-23 16:43:24 0:15:59 0:07:42 0:08:17 smithi master centos 8.0 rados/cephadm/{fixed-2.yaml mode/root.yaml msgr/async-v1only.yaml start.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_python.yaml} 2
Failure Reason:

Command failed on smithi057 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 1846c09a-25a3-11ea-8285-001a4aab830c --force'

fail 4628364 2019-12-23 16:20:45 2019-12-23 16:27:25 2019-12-23 17:09:26 0:42:01 0:35:15 0:06:46 smithi master rhel 8.0 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi184 with status 11: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067742409578fe705cdfd829b53be781fdbe3816 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 4628365 2019-12-23 16:20:46 2019-12-23 16:27:25 2019-12-23 16:43:24 0:15:59 0:07:56 0:08:03 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/cephadm_orchestrator.yaml} 2
Failure Reason:

Test failure: test_host_ls (tasks.mgr.test_cephadm_orchestrator.TestOrchestratorCli)

pass 4628366 2019-12-23 16:20:47 2019-12-23 16:27:55 2019-12-23 16:55:54 0:27:59 0:22:31 0:05:28 smithi master rhel 8.0 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} 2
pass 4628367 2019-12-23 16:20:48 2019-12-23 16:27:58 2019-12-23 17:31:59 1:04:01 0:11:40 0:52:21 smithi master ubuntu 18.04 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-avl.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
fail 4628368 2019-12-23 16:20:50 2019-12-23 16:28:05 2019-12-23 16:50:04 0:21:59 0:03:06 0:18:53 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} 4
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F18.04%2Fx86_64&ref=hammer
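The shaman lookup that failed above can be reproduced by hand to see whether any hammer builds are still published for that distro. A minimal sketch of building the same query URL (the helper name is ours, not part of teuthology; an empty JSON list from the endpoint is the condition behind this failure):

```python
from urllib.parse import urlencode

SHAMAN_BASE = "https://shaman.ceph.com/api/search/"

def shaman_search_url(project, ref, distro_arch, flavor="default", status="ready"):
    """Build the shaman search URL teuthology queries for packages."""
    params = {
        "status": status,
        "project": project,
        "flavor": flavor,
        "distros": distro_arch,  # encoded as e.g. ubuntu%2F18.04%2Fx86_64
        "ref": ref,
    }
    return SHAMAN_BASE + "?" + urlencode(params)

# The exact URL from the failure reason above:
print(shaman_search_url("ceph", "hammer", "ubuntu/18.04/x86_64"))
```

Fetching that URL (e.g. with curl) and getting `[]` back confirms the build is simply no longer on shaman, rather than a network or teuthology problem.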

pass 4628369 2019-12-23 16:20:51 2019-12-23 16:29:17 2019-12-23 16:51:17 0:22:00 0:14:11 0:07:49 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} 2
pass 4628370 2019-12-23 16:20:52 2019-12-23 16:30:02 2019-12-23 17:02:01 0:31:59 0:22:25 0:09:34 smithi master rhel 8.0 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/bluestore-avl.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} 2
pass 4628371 2019-12-23 16:20:53 2019-12-23 16:30:03 2019-12-23 17:04:02 0:33:59 0:25:40 0:08:19 smithi master rhel 8.0 rados/objectstore/{backends/objectcacher-stress.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628372 2019-12-23 16:20:54 2019-12-23 16:30:08 2019-12-23 16:58:07 0:27:59 0:18:27 0:09:32 smithi master centos 8.0 rados/singleton-nomsgr/{all/osd_stale_reads.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
pass 4628373 2019-12-23 16:20:55 2019-12-23 16:30:08 2019-12-23 17:00:07 0:29:59 0:11:00 0:18:59 smithi master centos 8.0 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
pass 4628374 2019-12-23 16:20:56 2019-12-23 16:36:06 2019-12-23 17:18:06 0:42:00 0:28:10 0:13:50 smithi master centos 8.0 rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} 2
pass 4628375 2019-12-23 16:20:57 2019-12-23 16:38:41 2019-12-23 17:18:40 0:39:59 0:27:36 0:12:23 smithi master rhel 8.0 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
pass 4628376 2019-12-23 16:20:58 2019-12-23 16:40:14 2019-12-23 16:58:13 0:17:59 0:10:30 0:07:29 smithi master centos 8.0 rados/singleton/{all/mon-config.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
fail 4628377 2019-12-23 16:20:59 2019-12-23 16:40:39 2019-12-23 17:14:39 0:34:00 0:26:26 0:07:34 smithi master rhel 8.0 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
Failure Reason:

Command failed on smithi075 with status 11: u'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg deep-scrub 1.bb'

pass 4628378 2019-12-23 16:21:00 2019-12-23 16:42:08 2019-12-23 17:04:07 0:21:59 0:12:35 0:09:24 smithi master centos 8.0 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_cls_all.yaml} 2
pass 4628379 2019-12-23 16:21:01 2019-12-23 16:42:08 2019-12-23 17:16:07 0:33:59 0:13:08 0:20:51 smithi master ubuntu 18.04 rados/multimon/{clusters/9.yaml msgr-failures/few.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_recovery.yaml} 3
fail 4628380 2019-12-23 16:21:02 2019-12-23 16:43:41 2019-12-23 17:29:41 0:46:00 0:36:37 0:09:23 smithi master centos 8.0 rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_all (tasks.mgr.dashboard.test_rgw.RgwBucketTest)

fail 4628381 2019-12-23 16:21:03 2019-12-23 16:43:41 2019-12-23 16:59:40 0:15:59 0:10:37 0:05:22 smithi master rhel 8.0 rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
Failure Reason:

SELinux denials found on ubuntu@smithi187.front.sepia.ceph.com: ['type=AVC msg=audit(1577119867.826:3992): avc: denied { open } for pid=16561 comm="rhsmcertd-worke" path="/etc/dnf/modules.d/satellite-5-client.module" dev="sda1" ino=57237 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119867.737:3991): avc: denied { unlink } for pid=16561 comm="rhsmcertd-worke" name="metadata_lock.pid" dev="sda1" ino=58126 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119868.342:3993): avc: denied { read } for pid=16635 comm="setroubleshootd" name="Packages" dev="sda1" ino=61046 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119867.489:3983): avc: denied { read write } for pid=16561 comm="rhsmcertd-worke" name=".dbenv.lock" dev="sda1" ino=61154 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119867.737:3991): avc: denied { remove_name } for pid=16561 comm="rhsmcertd-worke" name="metadata_lock.pid" dev="sda1" ino=58126 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=dir permissive=1', 'type=AVC msg=audit(1577119867.826:3992): avc: denied { read } for pid=16561 comm="rhsmcertd-worke" name="satellite-5-client.module" dev="sda1" ino=57237 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119867.557:3988): avc: denied { add_name } for pid=16561 comm="rhsmcertd-worke" name="metadata_lock.pid" scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=dir permissive=1', 'type=AVC msg=audit(1577119867.553:3986): avc: denied { map } for pid=16561 
comm="rhsmcertd-worke" path="/var/lib/rpm/Name" dev="sda1" ino=61070 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119867.659:3989): avc: denied { open } for pid=16561 comm="rhsmcertd-worke" path="/var/cache/dnf/epel-fafd94c310c51e1e/metalink.xml" dev="sda1" ino=262188 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119867.683:3990): avc: denied { setattr } for pid=16561 comm="rhsmcertd-worke" name="6e2fe611f78ac434c2918bac1eec468dbd24c9b4cdb65bf6a744d10f764f3284-primary.xml.gz" dev="sda1" ino=262175 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119871.332:4078): avc: denied { read } for pid=16865 comm="rpm" name="Packages" dev="sda1" ino=61046 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119868.342:3993): avc: denied { open } for pid=16635 comm="setroubleshootd" path="/var/lib/rpm/Packages" dev="sda1" ino=61046 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119867.557:3988): avc: denied { write } for pid=16561 comm="rhsmcertd-worke" name="dnf" dev="sda1" ino=60792 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=dir permissive=1', 'type=AVC msg=audit(1577119871.332:4079): avc: denied { lock } for pid=16865 comm="rpm" path="/var/lib/rpm/Packages" dev="sda1" ino=61046 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119868.342:3994): avc: denied { lock } for pid=16635 comm="setroubleshootd" path="/var/lib/rpm/Packages" 
dev="sda1" ino=61046 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119868.342:3995): avc: denied { map } for pid=16635 comm="setroubleshootd" path="/var/lib/rpm/Name" dev="sda1" ino=61070 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119871.332:4080): avc: denied { map } for pid=16865 comm="rpm" path="/var/lib/rpm/Name" dev="sda1" ino=61070 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119867.557:3987): avc: denied { open } for pid=16561 comm="rhsmcertd-worke" path="/var/log/hawkey.log" dev="sda1" ino=60817 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119867.557:3988): avc: denied { create } for pid=16561 comm="rhsmcertd-worke" name="metadata_lock.pid" scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119867.557:3988): avc: denied { open } for pid=16561 comm="rhsmcertd-worke" path="/var/cache/dnf/metadata_lock.pid" dev="sda1" ino=58126 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119871.332:4078): avc: denied { open } for pid=16865 comm="rpm" path="/var/lib/rpm/Packages" dev="sda1" ino=61046 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119867.489:3983): avc: denied { open } for pid=16561 comm="rhsmcertd-worke" path="/var/lib/rpm/.dbenv.lock" dev="sda1" ino=61154 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file 
permissive=1', 'type=AVC msg=audit(1577119867.490:3985): avc: denied { getattr } for pid=16561 comm="rhsmcertd-worke" path="/var/lib/rpm/Packages" dev="sda1" ino=61046 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577119867.489:3984): avc: denied { lock } for pid=16561 comm="rhsmcertd-worke" path="/var/lib/rpm/.dbenv.lock" dev="sda1" ino=61154 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1']
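Denial walls like the one above repeat a small number of (command, permission, class) combinations, here mostly rhsmcertd-worke and setroubleshootd touching the rpm database. A short triage sketch that tallies those combinations from the raw AVC records (the regex and helper are ours, an assumption about the record layout, not a teuthology tool):

```python
import re
from collections import Counter

# Matches the fields of interest in a raw audit AVC record.
AVC_RE = re.compile(
    r"denied\s+\{ (?P<perms>[^}]+)\}.*?"
    r'comm="(?P<comm>[^"]+)".*?'
    r"tclass=(?P<tclass>\S+)"
)

def summarize_denials(denials):
    """Count AVC denials by (command, permission, target class)."""
    counts = Counter()
    for record in denials:
        m = AVC_RE.search(record)
        if m:
            counts[(m.group("comm"), m.group("perms").strip(), m.group("tclass"))] += 1
    return counts

# One record taken from the failure reason above:
sample = [
    'type=AVC msg=audit(1577119867.826:3992): avc: denied { open } for '
    'pid=16561 comm="rhsmcertd-worke" '
    'path="/etc/dnf/modules.d/satellite-5-client.module" dev="sda1" ino=57237 '
    'scontext=system_u:system_r:rhsmcertd_t:s0 '
    'tcontext=system_u:object_r:root_t:s0 tclass=file permissive=1',
]
print(summarize_denials(sample))
```

Feeding all the quoted records through this collapses the wall into a handful of distinct denials, which makes it easier to spot that none involve Ceph processes.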

pass 4628382 2019-12-23 16:21:04 2019-12-23 16:44:16 2019-12-23 17:08:15 0:23:59 0:10:22 0:13:37 smithi master centos 8.0 rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_8.yaml} tasks/crash.yaml} 2
fail 4628383 2019-12-23 16:21:05 2019-12-23 16:45:54 2019-12-23 18:23:55 1:38:01 0:03:22 1:34:39 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 4
Failure Reason:

Command failed on smithi103 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=10.2.11-18-g115560f-1bionic ceph-mds=10.2.11-18-g115560f-1bionic ceph-common=10.2.11-18-g115560f-1bionic ceph-fuse=10.2.11-18-g115560f-1bionic ceph-test=10.2.11-18-g115560f-1bionic radosgw=10.2.11-18-g115560f-1bionic python3-rados=10.2.11-18-g115560f-1bionic python3-rgw=10.2.11-18-g115560f-1bionic python3-cephfs=10.2.11-18-g115560f-1bionic python3-rbd=10.2.11-18-g115560f-1bionic librados2=10.2.11-18-g115560f-1bionic librbd1=10.2.11-18-g115560f-1bionic rbd-fuse=10.2.11-18-g115560f-1bionic librados2=10.2.11-18-g115560f-1bionic'
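Every package in the failed apt-get line above is pinned to the same jewel build. A quick check worth automating is extracting the pins from such a command and confirming they all agree, so the failure can be attributed to the repo no longer carrying that build for bionic rather than to mixed pins (the helper is a sketch of ours, not part of teuthology):

```python
import re

def extract_pins(cmd):
    """Pull package=version pairs out of a failed apt-get install command."""
    return re.findall(r"(\S+?)=(\d[^\s']*)", cmd)

# A shortened stand-in for the command quoted above:
cmd = (
    "apt-get -y install ceph=10.2.11-18-g115560f-1bionic "
    "ceph-common=10.2.11-18-g115560f-1bionic "
    "librados2=10.2.11-18-g115560f-1bionic"
)
pins = extract_pins(cmd)
versions = {v for _, v in pins}
# One distinct version means the pins are consistent; the install then
# failed because apt could not find that version in the repo.
print(len(pins), versions)
```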

pass 4628384 2019-12-23 16:21:06 2019-12-23 16:45:54 2019-12-23 17:59:55 1:14:01 1:08:24 0:05:37 smithi master ubuntu 18.04 rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/scrub.yaml} 1
fail 4628385 2019-12-23 16:21:07 2019-12-23 16:45:55 2019-12-23 17:17:54 0:31:59 0:24:05 0:07:54 smithi master rhel 8.0 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

SELinux denials found on ubuntu@smithi174.front.sepia.ceph.com: ['type=AVC msg=audit(1577120415.545:6593): avc: denied { getattr } for pid=30186 comm="rhsmcertd-worke" path="/var/lib/rpm/__db.001" dev="sda1" ino=262271 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577120415.757:6598): avc: denied { setattr } for pid=30186 comm="rhsmcertd-worke" name="6e2fe611f78ac434c2918bac1eec468dbd24c9b4cdb65bf6a744d10f764f3284-primary.xml.gz" dev="sda1" ino=264733 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577120415.958:6600): avc: denied { open } for pid=30186 comm="rhsmcertd-worke" path="/etc/dnf/modules.d/satellite-5-client.module" dev="sda1" ino=57237 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577120415.620:6596): avc: denied { open } for pid=30186 comm="rhsmcertd-worke" path="/var/cache/dnf/metadata_lock.pid" dev="sda1" ino=59894 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577120416.081:6602): avc: denied { lock } for pid=30240 comm="setroubleshootd" path="/var/lib/rpm/Packages" dev="sda1" ino=262250 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577120415.545:6592): avc: denied { lock } for pid=30186 comm="rhsmcertd-worke" path="/var/lib/rpm/.dbenv.lock" dev="sda1" ino=262270 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577120415.545:6591): avc: denied { read write } for pid=30186 comm="rhsmcertd-worke" name=".dbenv.lock" dev="sda1" ino=262270 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 
tclass=file permissive=1', 'type=AVC msg=audit(1577120415.620:6596): avc: denied { add_name } for pid=30186 comm="rhsmcertd-worke" name="metadata_lock.pid" scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=dir permissive=1', 'type=AVC msg=audit(1577120415.620:6595): avc: denied { open } for pid=30186 comm="rhsmcertd-worke" path="/var/log/hawkey.log" dev="sda1" ino=60817 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577120415.620:6596): avc: denied { create } for pid=30186 comm="rhsmcertd-worke" name="metadata_lock.pid" scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577120416.081:6601): avc: denied { open } for pid=30240 comm="setroubleshootd" path="/var/lib/rpm/Packages" dev="sda1" ino=262250 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577120415.545:6591): avc: denied { open } for pid=30186 comm="rhsmcertd-worke" path="/var/lib/rpm/.dbenv.lock" dev="sda1" ino=262270 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577120416.081:6601): avc: denied { read } for pid=30240 comm="setroubleshootd" name="Packages" dev="sda1" ino=262250 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577120415.958:6600): avc: denied { read } for pid=30186 comm="rhsmcertd-worke" name="satellite-5-client.module" dev="sda1" ino=57237 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577120415.876:6599): avc: denied { remove_name } for pid=30186 comm="rhsmcertd-worke" name="metadata_lock.pid" dev="sda1" ino=59894 
scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=dir permissive=1', 'type=AVC msg=audit(1577120415.729:6597): avc: denied { open } for pid=30186 comm="rhsmcertd-worke" path="/var/cache/dnf/ceph-5406f893dcfa5b2c/repodata/repomd.xml" dev="sda1" ino=262154 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577120415.620:6596): avc: denied { write } for pid=30186 comm="rhsmcertd-worke" name="dnf" dev="sda1" ino=60792 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=dir permissive=1', 'type=AVC msg=audit(1577120415.545:6594): avc: denied { map } for pid=30186 comm="rhsmcertd-worke" path="/var/lib/rpm/__db.001" dev="sda1" ino=262271 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577120416.081:6603): avc: denied { map } for pid=30240 comm="setroubleshootd" path="/var/lib/rpm/Name" dev="sda1" ino=262251 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577120415.876:6599): avc: denied { unlink } for pid=30186 comm="rhsmcertd-worke" name="metadata_lock.pid" dev="sda1" ino=59894 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1']

pass 4628386 2019-12-23 16:21:08 2019-12-23 16:46:47 2019-12-23 17:06:46 0:19:59 0:15:00 0:04:59 smithi master rhel 8.0 rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-avl.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628387 2019-12-23 16:21:09 2019-12-23 16:48:28 2019-12-23 21:12:32 4:24:04 4:12:19 0:11:45 smithi master ubuntu 18.04 rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4628388 2019-12-23 16:21:10 2019-12-23 16:48:28 2019-12-23 17:10:28 0:22:00 0:14:32 0:07:28 smithi master rhel 8.0 rados/singleton/{all/osd-recovery.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628389 2019-12-23 16:21:11 2019-12-23 16:50:21 2019-12-23 17:22:21 0:32:00 0:24:08 0:07:52 smithi master ubuntu 18.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} 2
pass 4628390 2019-12-23 16:21:13 2019-12-23 16:50:21 2019-12-23 17:34:21 0:44:00 0:11:39 0:32:21 smithi master centos 8.0 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
pass 4628391 2019-12-23 16:21:14 2019-12-23 16:50:52 2019-12-23 17:20:52 0:30:00 0:21:04 0:08:56 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-small-objects-many-deletes.yaml} 2
pass 4628392 2019-12-23 16:21:15 2019-12-23 16:56:20 2019-12-23 17:20:20 0:24:00 0:14:14 0:09:46 smithi master centos 8.0 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_python.yaml} 2
fail 4628393 2019-12-23 16:21:16 2019-12-23 16:56:21 2019-12-23 17:16:20 0:19:59 0:12:24 0:07:35 smithi master rhel 8.0 rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/sync-many.yaml workloads/rados_5925.yaml} 2
Failure Reason:

SELinux denials found on ubuntu@smithi114.front.sepia.ceph.com: ['type=AVC msg=audit(1577121312.008:7622): avc: denied { add_name } for pid=37569 comm="rhsmcertd-worke" name="metadata_lock.pid" scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=dir permissive=1', 'type=AVC msg=audit(1577121311.929:7619): avc: denied { getattr } for pid=37569 comm="rhsmcertd-worke" path="/var/lib/rpm/__db.001" dev="sda1" ino=262271 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121312.513:7628): avc: denied { lock } for pid=37678 comm="setroubleshootd" path="/var/lib/rpm/Packages" dev="sda1" ino=262250 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121312.008:7622): avc: denied { write } for pid=37569 comm="rhsmcertd-worke" name="dnf" dev="sda1" ino=60792 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=dir permissive=1', 'type=AVC msg=audit(1577121311.929:7620): avc: denied { map } for pid=37569 comm="rhsmcertd-worke" path="/var/lib/rpm/__db.001" dev="sda1" ino=262271 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121312.008:7622): avc: denied { open } for pid=37569 comm="rhsmcertd-worke" path="/var/cache/dnf/metadata_lock.pid" dev="sda1" ino=55617 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121312.008:7622): avc: denied { create } for pid=37569 comm="rhsmcertd-worke" name="metadata_lock.pid" scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121312.514:7629): avc: denied { map } for pid=37678 comm="setroubleshootd" 
path="/var/lib/rpm/Name" dev="sda1" ino=262251 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121312.430:7626): avc: denied { read } for pid=37569 comm="rhsmcertd-worke" name="satellite-5-client.module" dev="sda1" ino=57237 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121311.929:7617): avc: denied { open } for pid=37569 comm="rhsmcertd-worke" path="/var/lib/rpm/.dbenv.lock" dev="sda1" ino=262270 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121311.929:7617): avc: denied { read write } for pid=37569 comm="rhsmcertd-worke" name=".dbenv.lock" dev="sda1" ino=262270 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121312.139:7624): avc: denied { setattr } for pid=37569 comm="rhsmcertd-worke" name="6e2fe611f78ac434c2918bac1eec468dbd24c9b4cdb65bf6a744d10f764f3284-primary.xml.gz" dev="sda1" ino=264733 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121312.349:7625): avc: denied { remove_name } for pid=37569 comm="rhsmcertd-worke" name="metadata_lock.pid" dev="sda1" ino=55617 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=dir permissive=1', 'type=AVC msg=audit(1577121312.008:7621): avc: denied { open } for pid=37569 comm="rhsmcertd-worke" path="/var/log/hawkey.log" dev="sda1" ino=60817 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121312.513:7627): avc: denied { open } for pid=37678 comm="setroubleshootd" path="/var/lib/rpm/Packages" dev="sda1" ino=262250 
scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121312.349:7625): avc: denied { unlink } for pid=37569 comm="rhsmcertd-worke" name="metadata_lock.pid" dev="sda1" ino=55617 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121312.430:7626): avc: denied { open } for pid=37569 comm="rhsmcertd-worke" path="/etc/dnf/modules.d/satellite-5-client.module" dev="sda1" ino=57237 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121312.513:7627): avc: denied { read } for pid=37678 comm="setroubleshootd" name="Packages" dev="sda1" ino=262250 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121311.929:7618): avc: denied { lock } for pid=37569 comm="rhsmcertd-worke" path="/var/lib/rpm/.dbenv.lock" dev="sda1" ino=262270 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577121312.112:7623): avc: denied { open } for pid=37569 comm="rhsmcertd-worke" path="/var/cache/dnf/ceph-5406f893dcfa5b2c/repodata/repomd.xml" dev="sda1" ino=262174 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1']

pass 4628394 2019-12-23 16:21:17 2019-12-23 16:56:31 2019-12-23 17:22:30 0:25:59 0:16:00 0:09:59 smithi master rhel 8.0 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 2
pass 4628395 2019-12-23 16:21:18 2019-12-23 16:58:17 2019-12-23 17:14:16 0:15:59 0:10:05 0:05:54 smithi master rhel 8.0 rados/singleton/{all/peer.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628396 2019-12-23 16:21:19 2019-12-23 16:58:17 2019-12-23 17:32:17 0:34:00 0:25:02 0:08:58 smithi master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} 3
pass 4628397 2019-12-23 16:21:20 2019-12-23 16:58:29 2019-12-23 17:20:28 0:21:59 0:13:01 0:08:58 smithi master centos 8.0 rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-avl.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} 2
fail 4628398 2019-12-23 16:21:21 2019-12-23 16:59:04 2019-12-23 17:15:03 0:15:59 0:03:15 0:12:44 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} 4
Failure Reason:

Command failed on smithi133 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=10.2.11-18-g115560f-1bionic ceph-mds=10.2.11-18-g115560f-1bionic ceph-common=10.2.11-18-g115560f-1bionic ceph-fuse=10.2.11-18-g115560f-1bionic ceph-test=10.2.11-18-g115560f-1bionic radosgw=10.2.11-18-g115560f-1bionic python3-rados=10.2.11-18-g115560f-1bionic python3-rgw=10.2.11-18-g115560f-1bionic python3-cephfs=10.2.11-18-g115560f-1bionic python3-rbd=10.2.11-18-g115560f-1bionic librados2=10.2.11-18-g115560f-1bionic librbd1=10.2.11-18-g115560f-1bionic rbd-fuse=10.2.11-18-g115560f-1bionic librados2=10.2.11-18-g115560f-1bionic'

pass 4628399 2019-12-23 16:21:22 2019-12-23 16:59:58 2019-12-23 17:29:58 0:30:00 0:24:28 0:05:32 smithi master rhel 8.0 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
pass 4628400 2019-12-23 16:21:23 2019-12-23 16:59:58 2019-12-23 17:23:58 0:24:00 0:11:58 0:12:02 smithi master centos 8.0 rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_8.yaml} tasks/failover.yaml} 2
pass 4628401 2019-12-23 16:21:24 2019-12-23 17:00:09 2019-12-23 17:18:08 0:17:59 0:11:36 0:06:23 smithi master rhel 8.0 rados/singleton-nomsgr/{all/version-number-sanity.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628402 2019-12-23 16:21:25 2019-12-23 17:00:31 2019-12-23 17:32:30 0:31:59 0:14:54 0:17:05 smithi master centos 8.0 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
pass 4628403 2019-12-23 16:21:26 2019-12-23 17:01:40 2019-12-23 17:25:39 0:23:59 0:10:20 0:13:39 smithi master rhel 8.0 rados/multimon/{clusters/21.yaml msgr-failures/many.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/mon_clock_no_skews.yaml} 3
pass 4628404 2019-12-23 16:21:27 2019-12-23 17:01:43 2019-12-23 17:17:43 0:16:00 0:08:30 0:07:30 smithi master centos 8.0 rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
pass 4628405 2019-12-23 16:21:28 2019-12-23 17:02:03 2019-12-23 17:32:02 0:29:59 0:16:50 0:13:09 smithi master rhel 8.0 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-avl.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_stress_watch.yaml} 2
pass 4628406 2019-12-23 16:21:29 2019-12-23 17:04:20 2019-12-23 17:32:19 0:27:59 0:19:03 0:08:56 smithi master centos 8.0 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
fail 4628407 2019-12-23 16:21:30 2019-12-23 17:04:20 2019-12-23 17:52:19 0:47:59 0:36:33 0:11:26 smithi master centos 8.0 rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-avl.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_all (tasks.mgr.dashboard.test_rgw.RgwBucketTest)

fail 4628408 2019-12-23 16:21:31 2019-12-23 17:04:26 2019-12-23 17:26:25 0:21:59 0:13:04 0:08:55 smithi master centos 8.0 rados/rest/{mgr-restful.yaml supported-random-distro$/{centos_8.yaml}} 1
Failure Reason:

Command failed (workunit test rest/test-restful.sh) on smithi139 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.a/client.a/tmp && cd -- /home/ubuntu/cephtest/mnt.a/client.a/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067742409578fe705cdfd829b53be781fdbe3816 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="a" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.a CEPH_ROOT=/home/ubuntu/cephtest/clone.client.a adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.a/qa/workunits/rest/test-restful.sh'

pass 4628409 2019-12-23 16:21:32 2019-12-23 17:04:56 2019-12-23 17:46:56 0:42:00 0:30:22 0:11:38 smithi master centos 8.0 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
pass 4628410 2019-12-23 16:21:33 2019-12-23 17:07:04 2019-12-23 17:33:03 0:25:59 0:16:35 0:09:24 smithi master centos 8.0 rados/singleton-flat/valgrind-leaks/{centos_latest.yaml valgrind-leaks.yaml} 1
fail 4628411 2019-12-23 16:21:34 2019-12-23 17:07:11 2019-12-23 17:29:10 0:21:59 0:14:04 0:07:55 smithi master centos 8.0 rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
Failure Reason:

"2019-12-23T17:21:20.518301+0000 mon.a (mon.0) 180 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 4628412 2019-12-23 16:21:35 2019-12-23 17:07:23 2019-12-23 17:27:22 0:19:59 0:11:23 0:08:36 smithi master centos 8.0 rados/standalone/{supported-random-distro$/{centos_8.yaml} workloads/crush.yaml} 1
pass 4628413 2019-12-23 16:21:36 2019-12-23 17:08:29 2019-12-23 17:28:29 0:20:00 0:12:58 0:07:02 smithi master centos 8.0 rados/singleton/{all/radostool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
fail 4628414 2019-12-23 16:21:37 2019-12-23 17:08:30 2019-12-23 17:52:29 0:43:59 0:36:20 0:07:39 smithi master rhel 8.0 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} 2
Failure Reason:

"2019-12-23T17:40:37.657386+0000 mon.b (mon.0) 2219 : cluster [WRN] Health check failed: 4 daemons have recently crashed (RECENT_CRASH)" in cluster log

pass 4628415 2019-12-23 16:21:38 2019-12-23 17:09:19 2019-12-23 17:37:18 0:27:59 0:20:59 0:07:00 smithi master rhel 8.0 rados/singleton/{all/random-eio.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-avl.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 2
pass 4628416 2019-12-23 16:21:39 2019-12-23 17:09:27 2019-12-23 17:29:26 0:19:59 0:13:12 0:06:47 smithi master rhel 8.0 rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-avl.yaml supported-random-distro$/{rhel_8.yaml} tasks/insights.yaml} 2
dead 4628417 2019-12-23 16:21:40 2019-12-23 17:14:39 2019-12-24 05:11:00 11:56:21 smithi master rhel 8.0 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} 2
pass 4628418 2019-12-23 16:21:41 2019-12-23 17:14:40 2019-12-23 19:18:42 2:04:02 0:11:39 1:52:23 smithi master centos 8.0 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
pass 4628419 2019-12-23 16:21:42 2019-12-23 17:15:04 2019-12-23 17:33:03 0:17:59 0:09:12 0:08:47 smithi master centos 8.0 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_striper.yaml} 2
fail 4628420 2019-12-23 16:21:43 2019-12-23 17:15:30 2019-12-24 00:07:39 6:52:09 6:42:33 0:09:36 smithi master centos 8.0 rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi112 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067742409578fe705cdfd829b53be781fdbe3816 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

dead 4628421 2019-12-23 16:21:45 2019-12-23 17:16:15 2019-12-24 05:10:39 11:54:24 smithi master rhel 8.0 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
pass 4628422 2019-12-23 16:21:46 2019-12-23 17:16:15 2019-12-23 17:40:14 0:23:59 0:18:02 0:05:57 smithi master rhel 8.0 rados/singleton/{all/recovery-preemption.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628423 2019-12-23 16:21:47 2019-12-23 17:16:21 2019-12-23 17:48:21 0:32:00 0:22:49 0:09:11 smithi master centos 8.0 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
fail 4628424 2019-12-23 16:21:48 2019-12-23 17:18:02 2019-12-23 17:38:01 0:19:59 0:12:12 0:07:47 smithi master rhel 8.0 rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_8.yaml} tasks/module_selftest.yaml} 2
Failure Reason:

Test failure: test_diskprediction_cloud (tasks.mgr.test_module_selftest.TestModuleSelftest)

pass 4628425 2019-12-23 16:21:49 2019-12-23 17:18:02 2019-12-23 17:50:02 0:32:00 0:22:00 0:10:00 smithi master rhel 8.0 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} 2
fail 4628426 2019-12-23 16:21:50 2019-12-23 17:18:07 2019-12-23 20:40:10 3:22:03 3:13:57 0:08:06 smithi master centos 8.0 rados/standalone/{supported-random-distro$/{centos_8.yaml} workloads/erasure-code.yaml} 1
Failure Reason:

Command failed (workunit test erasure-code/test-erasure-eio.sh) on smithi155 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067742409578fe705cdfd829b53be781fdbe3816 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-eio.sh'

pass 4628427 2019-12-23 16:21:51 2019-12-23 17:18:09 2019-12-23 17:36:08 0:17:59 0:12:03 0:05:56 smithi master rhel 8.0 rados/singleton/{all/test-crash.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628428 2019-12-23 16:21:52 2019-12-23 17:18:14 2019-12-23 17:36:13 0:17:59 0:11:27 0:06:32 smithi master rhel 8.0 rados/singleton-nomsgr/{all/ceph-kvstore-tool.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628429 2019-12-23 16:21:53 2019-12-23 17:18:41 2019-12-23 17:48:41 0:30:00 0:22:11 0:07:49 smithi master rhel 8.0 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} 2
pass 4628430 2019-12-23 16:21:54 2019-12-23 17:20:38 2019-12-23 17:50:37 0:29:59 0:22:18 0:07:41 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
dead 4628431 2019-12-23 16:21:55 2019-12-23 17:20:38 2019-12-24 05:11:00 11:50:22 smithi master rhel 8.0 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} 3
pass 4628432 2019-12-23 16:21:56 2019-12-23 17:20:53 2019-12-23 17:40:53 0:20:00 0:13:39 0:06:21 smithi master centos 8.0 rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} 2
pass 4628433 2019-12-23 16:21:57 2019-12-23 17:22:24 2019-12-23 17:54:24 0:32:00 0:22:31 0:09:29 smithi master rhel 8.0 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_workunit_loadgen_mix.yaml} 2
fail 4628434 2019-12-23 16:21:58 2019-12-23 17:22:25 2019-12-23 17:42:24 0:19:59 0:10:44 0:09:15 smithi master rhel 8.0 rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_8.yaml} tasks/orchestrator_cli.yaml} 2
Failure Reason:

Test failure: test_device_ls (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli)

pass 4628435 2019-12-23 16:21:59 2019-12-23 17:22:25 2019-12-23 17:40:24 0:17:59 0:08:18 0:09:41 smithi master centos 8.0 rados/objectstore/{backends/filejournal.yaml supported-random-distro$/{centos_8.yaml}} 1
pass 4628436 2019-12-23 16:22:00 2019-12-23 17:22:31 2019-12-23 17:40:31 0:18:00 0:10:01 0:07:59 smithi master rhel 8.0 rados/singleton-nomsgr/{all/ceph-post-file.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628437 2019-12-23 16:22:01 2019-12-23 17:24:16 2019-12-23 17:50:15 0:25:59 0:11:38 0:14:21 smithi master centos 8.0 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache.yaml} 2
pass 4628438 2019-12-23 16:22:02 2019-12-23 17:25:49 2019-12-23 18:07:49 0:42:00 0:31:16 0:10:44 smithi master centos 8.0 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
pass 4628439 2019-12-23 16:22:03 2019-12-23 17:25:49 2019-12-23 17:53:49 0:28:00 0:12:37 0:15:23 smithi master centos 8.0 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/dedup_tier.yaml} 2
pass 4628440 2019-12-23 16:22:04 2019-12-23 17:26:27 2019-12-23 17:56:26 0:29:59 0:22:23 0:07:36 smithi master rhel 8.0 rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 2
pass 4628441 2019-12-23 16:22:05 2019-12-23 17:27:40 2019-12-23 18:03:40 0:36:00 0:25:54 0:10:06 smithi master rhel 8.0 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_workunit_loadgen_mostlyread.yaml} 2
fail 4628442 2019-12-23 16:22:06 2019-12-23 17:28:30 2019-12-23 17:46:29 0:17:59 0:09:13 0:08:46 smithi master centos 8.0 rados/standalone/{supported-random-distro$/{centos_8.yaml} workloads/mgr.yaml} 1
Failure Reason:

Command failed (workunit test mgr/balancer.sh) on smithi134 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067742409578fe705cdfd829b53be781fdbe3816 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mgr/balancer.sh'

pass 4628443 2019-12-23 16:22:07 2019-12-23 17:29:27 2019-12-23 17:47:26 0:17:59 0:08:50 0:09:09 smithi master centos 8.0 rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
pass 4628444 2019-12-23 16:22:08 2019-12-23 17:29:28 2019-12-23 17:55:27 0:25:59 0:16:00 0:09:59 smithi master centos 8.0 rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_8.yaml} tasks/progress.yaml} 2
pass 4628445 2019-12-23 16:22:09 2019-12-23 17:29:42 2019-12-23 17:53:41 0:23:59 0:11:59 0:12:00 smithi master centos 8.0 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
fail 4628446 2019-12-23 16:22:10 2019-12-23 17:29:59 2019-12-23 18:15:59 0:46:00 0:36:16 0:09:44 smithi master centos 8.0 rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_all (tasks.mgr.dashboard.test_rgw.RgwBucketTest)

pass 4628447 2019-12-23 16:22:11 2019-12-23 17:31:45 2019-12-23 17:47:44 0:15:59 0:09:54 0:06:05 smithi master rhel 8.0 rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628448 2019-12-23 16:22:12 2019-12-23 17:32:00 2019-12-23 17:47:59 0:15:59 0:08:52 0:07:07 smithi master centos 8.0 rados/singleton/{all/admin-socket.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-avl.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
fail 4628449 2019-12-23 16:22:13 2019-12-23 17:32:03 2019-12-23 18:14:03 0:42:00 0:14:23 0:27:37 smithi master rhel 8.0 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-avl.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
Failure Reason:

SELinux denials found on ubuntu@smithi033.front.sepia.ceph.com: ['type=AVC msg=audit(1577124713.349:8238): avc: denied { unlink } for pid=35856 comm="rhsmcertd-worke" name="metadata_lock.pid" dev="sda1" ino=20 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124712.897:8233): avc: denied { map } for pid=35856 comm="rhsmcertd-worke" path="/var/lib/rpm/__db.001" dev="sda1" ino=262271 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124712.897:8231): avc: denied { lock } for pid=35856 comm="rhsmcertd-worke" path="/var/lib/rpm/.dbenv.lock" dev="sda1" ino=262270 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124713.126:8237): avc: denied { setattr } for pid=35856 comm="rhsmcertd-worke" name="6e2fe611f78ac434c2918bac1eec468dbd24c9b4cdb65bf6a744d10f764f3284-primary.xml.gz" dev="sda1" ino=264734 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124713.098:8236): avc: denied { open } for pid=35856 comm="rhsmcertd-worke" path="/var/cache/dnf/ceph-5406f893dcfa5b2c/repodata/repomd.xml" dev="sda1" ino=262154 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124712.982:8235): avc: denied { write } for pid=35856 comm="rhsmcertd-worke" name="dnf" dev="sda1" ino=60792 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=dir permissive=1', 'type=AVC msg=audit(1577124714.185:8240): avc: denied { open } for pid=35986 comm="setroubleshootd" path="/var/lib/rpm/Packages" dev="sda1" ino=262250 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124712.982:8234): avc: denied { open } for pid=35856 comm="rhsmcertd-worke" path="/var/log/hawkey.log" dev="sda1" ino=60817 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124712.982:8235): avc: denied { open } for pid=35856 comm="rhsmcertd-worke" path="/var/cache/dnf/metadata_lock.pid" dev="sda1" ino=20 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124712.897:8230): avc: denied { read write } for pid=35856 comm="rhsmcertd-worke" name=".dbenv.lock" dev="sda1" ino=262270 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124712.897:8232): avc: denied { getattr } for pid=35856 comm="rhsmcertd-worke" path="/var/lib/rpm/__db.001" dev="sda1" ino=262271 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124714.185:8241): avc: denied { lock } for pid=35986 comm="setroubleshootd" path="/var/lib/rpm/Packages" dev="sda1" ino=262250 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124713.433:8239): avc: denied { open } for pid=35856 comm="rhsmcertd-worke" path="/etc/dnf/modules.d/satellite-5-client.module" dev="sda1" ino=57237 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124714.185:8242): avc: denied { map } for pid=35986 comm="setroubleshootd" path="/var/lib/rpm/Name" dev="sda1" ino=262251 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124712.897:8230): avc: denied { open } for pid=35856 comm="rhsmcertd-worke" path="/var/lib/rpm/.dbenv.lock" dev="sda1" ino=262270 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124712.982:8235): avc: denied { add_name } for pid=35856 comm="rhsmcertd-worke" name="metadata_lock.pid" scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=dir permissive=1', 'type=AVC msg=audit(1577124713.349:8238): avc: denied { remove_name } for pid=35856 comm="rhsmcertd-worke" name="metadata_lock.pid" dev="sda1" ino=20 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=dir permissive=1', 'type=AVC msg=audit(1577124714.185:8240): avc: denied { read } for pid=35986 comm="setroubleshootd" name="Packages" dev="sda1" ino=262250 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124712.982:8235): avc: denied { create } for pid=35856 comm="rhsmcertd-worke" name="metadata_lock.pid" scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1577124713.433:8239): avc: denied { read } for pid=35856 comm="rhsmcertd-worke" name="satellite-5-client.module" dev="sda1" ino=57237 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=file permissive=1']

pass 4628450 2019-12-23 16:22:14 2019-12-23 17:32:10 2019-12-23 18:12:09 0:39:59 0:23:52 0:16:07 smithi master centos 8.0 rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} 2
dead 4628451 2019-12-23 16:22:15 2019-12-23 17:32:11 2019-12-24 05:10:27 11:38:16 11:21:12 0:17:04 smithi master rhel 8.0 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:

psutil.NoSuchProcess process no longer exists (pid=7871)

dead 4628452 2019-12-23 16:22:16 2019-12-23 17:32:19 2019-12-24 05:10:39 11:38:20 smithi master rhel 8.0 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
pass 4628453 2019-12-23 16:22:17 2019-12-23 17:32:20 2019-12-23 18:02:20 0:30:00 0:13:44 0:16:16 smithi master rhel 8.0 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/readwrite.yaml} 2
pass 4628454 2019-12-23 16:22:18 2019-12-23 17:32:23 2019-12-23 17:54:22 0:21:59 0:10:32 0:11:27 smithi master centos 8.0 rados/singleton/{all/deduptool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
pass 4628455 2019-12-23 16:22:19 2019-12-23 17:32:32 2019-12-23 17:50:31 0:17:59 0:08:54 0:09:05 smithi master centos 8.0 rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
pass 4628456 2019-12-23 16:22:20 2019-12-23 17:33:21 2019-12-23 17:53:20 0:19:59 0:09:54 0:10:05 smithi master centos 8.0 rados/singleton/{all/divergent_priors.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
pass 4628457 2019-12-23 16:22:21 2019-12-23 17:33:21 2019-12-23 18:07:21 0:34:00 0:08:44 0:25:16 smithi master centos 8.0 rados/multimon/{clusters/9.yaml msgr-failures/few.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/mon_clock_no_skews.yaml} 3
dead 4628458 2019-12-23 16:22:22 2019-12-23 17:34:39 2019-12-24 05:11:00 11:36:21 smithi master rhel 8.0 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} 2
pass 4628459 2019-12-23 16:22:23 2019-12-23 17:36:26 2019-12-23 17:56:25 0:19:59 0:12:52 0:07:07 smithi master rhel 8.0 rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_8.yaml} tasks/prometheus.yaml} 2
pass 4628460 2019-12-23 16:22:24 2019-12-23 17:36:26 2019-12-23 17:54:25 0:17:59 0:11:14 0:06:45 smithi master rhel 8.0 rados/singleton/{all/divergent_priors2.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628461 2019-12-23 16:22:25 2019-12-23 17:37:18 2019-12-23 18:11:18 0:34:00 0:25:26 0:08:34 smithi master centos 8.0 rados/standalone/{supported-random-distro$/{centos_8.yaml} workloads/misc.yaml} 1
pass 4628462 2019-12-23 16:22:26 2019-12-23 17:37:20 2019-12-23 18:03:19 0:25:59 0:16:39 0:09:20 smithi master centos 8.0 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-avl.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/repair_test.yaml} 2
pass 4628463 2019-12-23 16:22:27 2019-12-23 17:38:11 2019-12-23 19:16:14 1:38:03 0:26:03 1:12:00 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 4
pass 4628464 2019-12-23 16:22:28 2019-12-23 17:38:11 2019-12-23 17:56:10 0:17:59 0:11:40 0:06:19 smithi master rhel 8.0 rados/singleton/{all/dump-stuck.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628465 2019-12-23 16:22:29 2019-12-23 17:40:21 2019-12-23 18:02:20 0:21:59 0:14:26 0:07:33 smithi master rhel 8.0 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} 2
pass 4628466 2019-12-23 16:22:30 2019-12-23 17:40:21 2019-12-23 18:16:21 0:36:00 0:26:51 0:09:09 smithi master centos 8.0 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} 2
pass 4628467 2019-12-23 16:22:31 2019-12-23 17:40:25 2019-12-23 18:14:25 0:34:00 0:11:10 0:22:50 smithi master centos 8.0 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
dead 4628468 2019-12-23 16:22:33 2019-12-23 17:40:32 2019-12-24 05:10:55 11:30:23 smithi master rhel 8.0 rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/one.yaml workloads/snaps-few-objects.yaml} 2
dead 4628469 2019-12-23 16:22:34 2019-12-23 17:40:54 2019-12-24 05:09:15 11:28:21 smithi master centos 8.0 rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} 2
dead 4628470 2019-12-23 16:22:35 2019-12-23 17:42:43 2019-12-24 05:11:00 11:28:17 smithi master centos 8.0 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-avl.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
fail 4628471 2019-12-23 16:22:36 2019-12-23 17:43:04 2019-12-23 18:07:04 0:24:00 0:14:29 0:09:31 smithi master rhel 8.0 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} 2
Failure Reason:

SELinux denials found on ubuntu@smithi073.front.sepia.ceph.com (one AVC record per line):

type=AVC msg=audit(1577123877.582:5316): avc: denied { open } for pid=24391 comm="rhsmcertd-worke" path="/var/log/hawkey.log" dev="sda1" ino=60817 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123877.686:5318): avc: denied { open } for pid=24391 comm="rhsmcertd-worke" path="/var/cache/dnf/ceph-5406f893dcfa5b2c/repodata/repomd.xml" dev="sda1" ino=262154 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123877.845:5320): avc: denied { remove_name } for pid=24391 comm="rhsmcertd-worke" name="metadata_lock.pid" dev="sda1" ino=59978 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1577123877.582:5317): avc: denied { create } for pid=24391 comm="rhsmcertd-worke" name="metadata_lock.pid" scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123878.781:5327): avc: denied { lock } for pid=24605 comm="setroubleshootd" path="/var/lib/rpm/Packages" dev="sda1" ino=262250 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123877.928:5321): avc: denied { read } for pid=24391 comm="rhsmcertd-worke" name="satellite-5-client.module" dev="sda1" ino=57237 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123877.582:5317): avc: denied { write } for pid=24391 comm="rhsmcertd-worke" name="dnf" dev="sda1" ino=60792 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1577123877.928:5321): avc: denied { open } for pid=24391 comm="rhsmcertd-worke" path="/etc/dnf/modules.d/satellite-5-client.module" dev="sda1" ino=57237 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123878.781:5326): avc: denied { open } for pid=24605 comm="setroubleshootd" path="/var/lib/rpm/Packages" dev="sda1" ino=262250 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123877.510:5312): avc: denied { open } for pid=24391 comm="rhsmcertd-worke" path="/var/lib/rpm/.dbenv.lock" dev="sda1" ino=262270 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123877.510:5313): avc: denied { lock } for pid=24391 comm="rhsmcertd-worke" path="/var/lib/rpm/.dbenv.lock" dev="sda1" ino=262270 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123877.510:5314): avc: denied { getattr } for pid=24391 comm="rhsmcertd-worke" path="/var/lib/rpm/__db.001" dev="sda1" ino=262271 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123878.781:5326): avc: denied { read } for pid=24605 comm="setroubleshootd" name="Packages" dev="sda1" ino=262250 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123877.714:5319): avc: denied { setattr } for pid=24391 comm="rhsmcertd-worke" name="6e2fe611f78ac434c2918bac1eec468dbd24c9b4cdb65bf6a744d10f764f3284-primary.xml.gz" dev="sda1" ino=264734 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123877.510:5312): avc: denied { read write } for pid=24391 comm="rhsmcertd-worke" name=".dbenv.lock" dev="sda1" ino=262270 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123877.845:5320): avc: denied { unlink } for pid=24391 comm="rhsmcertd-worke" name="metadata_lock.pid" dev="sda1" ino=59978 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123877.582:5317): avc: denied { add_name } for pid=24391 comm="rhsmcertd-worke" name="metadata_lock.pid" scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1577123877.582:5317): avc: denied { open } for pid=24391 comm="rhsmcertd-worke" path="/var/cache/dnf/metadata_lock.pid" dev="sda1" ino=59978 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:rpm_var_cache_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123878.781:5328): avc: denied { map } for pid=24605 comm="setroubleshootd" path="/var/lib/rpm/Name" dev="sda1" ino=262251 scontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1
type=AVC msg=audit(1577123877.511:5315): avc: denied { map } for pid=24391 comm="rhsmcertd-worke" path="/var/lib/rpm/__db.001" dev="sda1" ino=262271 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1
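A batch of AVC records like the ones above is easier to triage once each line is reduced to (source domain, denied permission, target class). The following is a minimal standard-library sketch, not part of the test run; the field layout matches the audit lines quoted above.

```python
import re

# Extract the SELinux source domain, the denied permission(s), and the target
# class from a single AVC denial line of the form quoted in the failure above.
AVC_RE = re.compile(
    r"avc:\s+denied\s+\{ (?P<perms>[^}]+) \}.*?"   # e.g. "{ open }" or "{ read write }"
    r"scontext=\w+:\w+:(?P<sdomain>\w+):.*?"       # e.g. system_u:system_r:rhsmcertd_t:s0
    r"tclass=(?P<tclass>\w+)"                      # e.g. file, dir
)

def summarize_avc(line: str):
    """Return (source domain, permissions, target class), or None if no match."""
    m = AVC_RE.search(line)
    if m is None:
        return None
    return (m.group("sdomain"), m.group("perms").strip(), m.group("tclass"))
```

Run over all the lines of one failure, this collapses the denials down to a handful of distinct (domain, permission, class) tuples, which is usually what a policy fix needs.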

pass 4628472 2019-12-23 16:22:37 2019-12-23 17:44:21 2019-12-23 18:02:20 0:17:59 0:09:34 0:08:25 smithi master centos 8.0 rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_8.yaml} tasks/workunits.yaml} 2
pass 4628473 2019-12-23 16:22:38 2019-12-23 17:44:51 2019-12-23 18:22:51 0:38:00 0:29:53 0:08:07 smithi master ubuntu 18.04 rados/cephadm/{fixed-2.yaml mode/root.yaml msgr/async-v1only.yaml start.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_api_tests.yaml} 2
fail 4628474 2019-12-23 16:22:39 2019-12-23 17:46:15 2019-12-23 18:32:17 0:46:02 0:36:19 0:09:43 smithi master centos 8.0 rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_all (tasks.mgr.dashboard.test_rgw.RgwBucketTest)

fail 4628475 2019-12-23 16:22:40 2019-12-23 17:46:31 2019-12-23 18:30:30 0:43:59 0:35:25 0:08:34 smithi master centos 8.0 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi168 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067742409578fe705cdfd829b53be781fdbe3816 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 4628476 2019-12-23 16:22:41 2019-12-23 17:46:43 2019-12-23 18:02:42 0:15:59 0:08:16 0:07:43 smithi master ubuntu 18.04 rados/singleton-nomsgr/{all/large-omap-object-warnings.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed (workunit test rados/test_large_omap_detection.py) on smithi174 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067742409578fe705cdfd829b53be781fdbe3816 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_large_omap_detection.py'

fail 4628477 2019-12-23 16:22:42 2019-12-23 17:46:58 2019-12-23 19:08:59 1:22:01 1:08:53 0:13:08 smithi master ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml thrashosds-health.yaml ubuntu_latest.yaml} 4
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi070 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067742409578fe705cdfd829b53be781fdbe3816 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

pass 4628478 2019-12-23 16:22:43 2019-12-23 17:47:45 2019-12-23 18:03:44 0:15:59 0:08:37 0:07:22 smithi master centos 8.0 rados/singleton/{all/erasure-code-nonregression.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-avl.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
pass 4628479 2019-12-23 16:22:44 2019-12-23 17:47:46 2019-12-23 18:13:45 0:25:59 0:17:05 0:08:54 smithi master ubuntu 18.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rgw_snaps.yaml} 2
pass 4628480 2019-12-23 16:22:45 2019-12-23 17:48:00 2019-12-23 18:50:01 1:02:01 0:55:55 0:06:06 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/bluestore-avl.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} 2
pass 4628481 2019-12-23 16:22:46 2019-12-23 17:48:22 2019-12-23 19:06:23 1:18:01 0:14:08 1:03:53 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/radosbench.yaml} 4
pass 4628482 2019-12-23 16:22:47 2019-12-23 17:48:42 2019-12-23 18:06:41 0:17:59 0:08:43 0:09:16 smithi master centos 8.0 rados/multimon/{clusters/21.yaml msgr-failures/many.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/mon_clock_with_skews.yaml} 3
pass 4628483 2019-12-23 16:22:48 2019-12-23 17:49:14 2019-12-23 18:11:13 0:21:59 0:13:26 0:08:33 smithi master centos 8.0 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} 2
pass 4628484 2019-12-23 16:22:50 2019-12-23 17:50:03 2019-12-23 18:08:02 0:17:59 0:11:59 0:06:00 smithi master rhel 8.0 rados/singleton-nomsgr/{all/lazy_omap_stats_output.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628485 2019-12-23 16:22:51 2019-12-23 17:50:34 2019-12-23 18:26:33 0:35:59 0:25:42 0:10:17 smithi master centos 8.0 rados/singleton/{all/lost-unfound.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
fail 4628486 2019-12-23 16:22:52 2019-12-23 17:50:34 2019-12-23 21:58:40 4:08:06 4:00:06 0:08:00 smithi master centos 8.0 rados/standalone/{supported-random-distro$/{centos_8.yaml} workloads/mon.yaml} 1
Failure Reason:

Command failed (workunit test mon/misc.sh) on smithi169 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067742409578fe705cdfd829b53be781fdbe3816 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/misc.sh'
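Status 124 here is the signature of the `timeout 3h` wrapper in the command line, not of the test script itself: GNU coreutils `timeout` exits with 124 when it kills a command that exceeded its limit, so this job is a hang/overrun rather than an assertion failure. A small reproduction of that convention (nothing Ceph-specific):

```shell
# GNU timeout exits 124 when the wrapped command runs past the limit; this is
# how a workunit that exceeds 3h surfaces as "status 124" in the log above.
status=0
timeout 1 sleep 5 || status=$?
echo "exit status: ${status}"   # -> exit status: 124
```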

pass 4628487 2019-12-23 16:22:53 2019-12-23 17:50:39 2019-12-23 18:10:38 0:19:59 0:12:00 0:07:59 smithi master centos 8.0 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-avl.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/set-chunk-promote-flush.yaml} 2
pass 4628488 2019-12-23 16:22:54 2019-12-23 17:50:51 2019-12-23 18:06:50 0:15:59 0:10:01 0:05:58 smithi master rhel 8.0 rados/singleton/{all/max-pg-per-osd.from-mon.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628489 2019-12-23 16:22:55 2019-12-23 17:52:17 2019-12-23 18:14:16 0:21:59 0:13:12 0:08:47 smithi master rhel 8.0 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/scrub_test.yaml} 2
fail 4628490 2019-12-23 16:22:56 2019-12-23 17:52:21 2019-12-23 18:08:20 0:15:59 0:07:36 0:08:23 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/cephadm_orchestrator.yaml} 2
Failure Reason:

Test failure: test_host_ls (tasks.mgr.test_cephadm_orchestrator.TestOrchestratorCli)

pass 4628491 2019-12-23 16:22:57 2019-12-23 17:52:31 2019-12-23 18:26:30 0:33:59 0:26:49 0:07:10 smithi master rhel 8.0 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} 2
pass 4628492 2019-12-23 16:22:58 2019-12-23 17:53:39 2019-12-23 18:23:38 0:29:59 0:11:39 0:18:20 smithi master centos 8.0 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-avl.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
pass 4628493 2019-12-23 16:22:59 2019-12-23 17:53:43 2019-12-23 18:11:43 0:18:00 0:11:22 0:06:38 smithi master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_4M_rand_read.yaml} 1
pass 4628494 2019-12-23 16:23:00 2019-12-23 17:53:50 2019-12-23 18:19:49 0:25:59 0:18:50 0:07:09 smithi master rhel 8.0 rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/sync-many.yaml workloads/pool-create-delete.yaml} 2
fail 4628495 2019-12-23 16:23:01 2019-12-23 17:54:24 2019-12-23 18:26:23 0:31:59 0:03:27 0:28:32 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} 4
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F18.04%2Fx86_64&ref=hammer
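Decoding the query string of the shaman URL quoted above shows exactly what the job requested; the percent-encoded `distros` value is `ubuntu/18.04/x86_64` with `ref=hammer`, i.e. hammer-branch packages built for Ubuntu 18.04, which shaman has no ready build for. A standard-library sketch (no network access; the URL is copied verbatim from the failure):

```python
from urllib.parse import urlsplit, parse_qs

# Decode the shaman package-search URL from the failure above to see what the
# job asked for. parse_qs un-escapes percent-encoding, so distros becomes a
# plain "distro/version/arch" triple.
url = ("https://shaman.ceph.com/api/search/?status=ready&project=ceph"
       "&flavor=default&distros=ubuntu%2F18.04%2Fx86_64&ref=hammer")
query = parse_qs(urlsplit(url).query)
print(query["distros"][0])  # -> ubuntu/18.04/x86_64
print(query["ref"][0])      # -> hammer
```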

pass 4628496 2019-12-23 16:23:02 2019-12-23 17:54:24 2019-12-23 18:12:23 0:17:59 0:12:05 0:05:54 smithi master rhel 8.0 rados/singleton-nomsgr/{all/librados_hello_world.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628497 2019-12-23 16:23:03 2019-12-23 17:54:25 2019-12-23 18:16:25 0:22:00 0:15:07 0:06:53 smithi master rhel 8.0 rados/singleton/{all/max-pg-per-osd.from-primary.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628498 2019-12-23 16:23:04 2019-12-23 17:54:27 2019-12-23 18:14:26 0:19:59 0:10:20 0:09:39 smithi master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
pass 4628499 2019-12-23 16:23:06 2019-12-23 17:55:05 2019-12-23 18:31:04 0:35:59 0:26:30 0:09:29 smithi master centos 8.0 rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} 2
pass 4628500 2019-12-23 16:23:07 2019-12-23 17:55:28 2019-12-23 18:19:28 0:24:00 0:16:40 0:07:20 smithi master centos 8.0 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} 2
dead 4628501 2019-12-23 16:23:08 2019-12-23 17:56:30 2019-12-24 05:10:55 11:14:25 smithi master centos 8.0 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
fail 4628502 2019-12-23 16:23:09 2019-12-23 17:56:30 2019-12-23 18:40:29 0:43:59 0:34:02 0:09:57 smithi master centos 8.0 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} 2
Failure Reason:

"2019-12-23T18:29:40.082977+0000 mon.a (mon.0) 2301 : cluster [WRN] Health check failed: 2 daemons have recently crashed (RECENT_CRASH)" in cluster log

fail 4628503 2019-12-23 16:23:10 2019-12-23 17:56:30 2019-12-23 18:44:29 0:47:59 0:38:30 0:09:29 smithi master rhel 8.0 rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_8.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_all (tasks.mgr.dashboard.test_rgw.RgwBucketTest)

pass 4628504 2019-12-23 16:23:11 2019-12-23 17:58:23 2019-12-23 18:18:23 0:20:00 0:08:31 0:11:29 smithi master centos 8.0 rados/objectstore/{backends/keyvaluedb.yaml supported-random-distro$/{centos_8.yaml}} 1
pass 4628505 2019-12-23 16:23:12 2019-12-23 17:58:23 2019-12-23 18:24:23 0:26:00 0:16:28 0:09:32 smithi master centos 8.0 rados/singleton-nomsgr/{all/msgr.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} 1
pass 4628506 2019-12-23 16:23:13 2019-12-23 18:00:14 2019-12-23 18:20:14 0:20:00 0:10:00 0:10:00 smithi master centos 8.0 rados/multimon/{clusters/3.yaml msgr-failures/few.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/mon_recovery.yaml} 2
pass 4628507 2019-12-23 16:23:14 2019-12-23 18:00:14 2019-12-23 18:20:14 0:20:00 0:10:14 0:09:46 smithi master centos 8.0 rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-avl.yaml supported-random-distro$/{centos_8.yaml} tasks/crash.yaml} 2
fail 4628508 2019-12-23 16:23:15 2019-12-23 18:02:39 2019-12-23 18:42:39 0:40:00 0:03:11 0:36:49 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 4
Failure Reason:

Command failed on smithi140 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=10.2.11-18-g115560f-1bionic ceph-mds=10.2.11-18-g115560f-1bionic ceph-common=10.2.11-18-g115560f-1bionic ceph-fuse=10.2.11-18-g115560f-1bionic ceph-test=10.2.11-18-g115560f-1bionic radosgw=10.2.11-18-g115560f-1bionic python3-rados=10.2.11-18-g115560f-1bionic python3-rgw=10.2.11-18-g115560f-1bionic python3-cephfs=10.2.11-18-g115560f-1bionic python3-rbd=10.2.11-18-g115560f-1bionic librados2=10.2.11-18-g115560f-1bionic librbd1=10.2.11-18-g115560f-1bionic rbd-fuse=10.2.11-18-g115560f-1bionic librados2=10.2.11-18-g115560f-1bionic'
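The version apt could not install, `10.2.11-18-g115560f-1bionic`, encodes which build was requested: upstream release, commits past the tag, short git sha, and the distro rebuild suffix. A hypothetical helper (`split_build_version` is illustrative, not a teuthology function) that pulls those parts out:

```python
import re

# Split a Ceph test-build Debian version string such as
# "10.2.11-18-g115560f-1bionic" (quoted in the failure above) into:
# upstream release, commits since the tag, short git sha, and rebuild tag.
VERSION_RE = re.compile(
    r"^(?P<upstream>\d+(?:\.\d+)*)"   # 10.2.11
    r"-(?P<commits>\d+)"              # 18 commits past the tag
    r"-g(?P<sha>[0-9a-f]+)"           # short git sha 115560f
    r"-(?P<rebuild>\w+)$"             # distro rebuild tag 1bionic
)

def split_build_version(version: str) -> dict:
    m = VERSION_RE.match(version)
    if m is None:
        raise ValueError(f"unrecognized version string: {version}")
    return m.groupdict()
```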

fail 4628509 2019-12-23 16:23:16 2019-12-23 18:02:39 2019-12-23 18:24:38 0:21:59 0:15:38 0:06:21 smithi master ubuntu 18.04 rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/osd.yaml} 1
Failure Reason:

Command failed (workunit test osd/divergent-priors.sh) on smithi134 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067742409578fe705cdfd829b53be781fdbe3816 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/divergent-priors.sh'

dead 4628510 2019-12-23 16:23:17 2019-12-23 18:02:39 2019-12-24 05:10:59 11:08:20 smithi master centos 8.0 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
pass 4628511 2019-12-23 16:23:18 2019-12-23 18:02:43 2019-12-23 18:24:42 0:21:59 0:16:30 0:05:29 smithi master rhel 8.0 rados/singleton/{all/mon-config-keys.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} 1
pass 4628512 2019-12-23 16:23:19 2019-12-23 18:02:52 2019-12-23 18:24:51 0:21:59 0:11:56 0:10:03 smithi master centos 8.0 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4