Name:         smithi158.front.sepia.ceph.com
Machine Type: smithi
Up:           True
Locked:       False
Locked Since:
Locked By:
OS Type:      rhel
OS Version:   8.1
Arch:         x86_64
Description:  /home/teuthworker/archive/teuthology-2020-07-03_07:01:02-rados-master-distro-basic-smithi/5197264
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 5197398 2020-07-03 08:13:14 2020-07-03 08:41:42 2020-07-03 09:23:42 0:42:00 0:34:14 0:07:46 smithi master rhel 8.1 rados:cephadm/with-work/{distro/rhel_latest fixed-2 mode/root msgr/async-v1only start tasks/rados_api_tests} 2
pass 5197357 2020-07-03 08:12:41 2020-07-03 08:17:42 2020-07-03 08:43:42 0:26:00 0:17:19 0:08:41 smithi master rhel 8.1 rados:cephadm/upgrade/{1-start 2-start-upgrade 3-wait distro$/{rhel_latest} fixed-2} 2
pass 5197264 2020-07-03 07:07:26 2020-07-03 10:36:09 2020-07-03 11:02:08 0:25:59 0:16:06 0:09:53 smithi master rhel 8.1 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/filestore-xfs supported-random-distro$/{rhel_8} tasks/failover} 2
pass 5197176 2020-07-03 07:06:11 2020-07-03 10:03:59 2020-07-03 10:39:59 0:36:00 0:27:15 0:08:45 smithi master centos 8.1 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 5197069 2020-07-03 07:04:42 2020-07-03 09:21:48 2020-07-03 10:07:48 0:46:00 0:38:54 0:07:06 smithi master centos 8.1 rados/singleton/{all/thrash-eio msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 2
fail 5196855 2020-07-03 05:16:53 2020-07-03 05:51:39 2020-07-03 08:21:41 2:30:02 2:16:31 0:13:31 smithi master centos 8.1 ceph-ansible/smoke/basic/{0-clusters/3-node 1-distros/centos_latest 2-ceph/ceph_ansible 3-config/dmcrypt_on 4-tasks/rest} 3
Failure Reason:

SELinux denials found on ubuntu@smithi158.front.sepia.ceph.com (one audit entry per line):
- type=AVC msg=audit(1593763869.769:6348): avc: denied { read open } for pid=29206 comm="sudo" path="/usr/sbin/unix_chkpwd" dev="sda1" ino=5903 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:chkpwd_exec_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1593763869.856:6364): avc: denied { open } for pid=29205 comm="sudo" path="/run/utmp" dev="tmpfs" ino=15407 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:initrc_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1593763869.769:6348): avc: denied { execute_no_trans } for pid=29206 comm="sudo" path="/usr/sbin/unix_chkpwd" dev="sda1" ino=5903 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:chkpwd_exec_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1593763869.723:6346): avc: denied { execute_no_trans } for pid=29205 comm="admin_socket" path="/usr/bin/sudo" dev="sda1" ino=7667 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:sudo_exec_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1593763869.732:6347): avc: denied { sys_resource } for pid=29205 comm="sudo" capability=24 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:system_r:ceph_t:s0 tclass=capability permissive=1
- type=USER_AVC msg=audit(1593763869.772:6356): pid=2437 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc: denied { send_msg } for msgtype=method_call interface=org.freedesktop.login1.Manager member=CreateSession dest=org.freedesktop.login1 spid=29205 tpid=2568 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:system_r:systemd_logind_t:s0 tclass=dbus permissive=1 exe="/usr/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?' UID="dbus" AUID="unset" SAUID="dbus"
- type=AVC msg=audit(1593763869.856:6369): avc: denied { setrlimit } for pid=29219 comm="sudo" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:system_r:ceph_t:s0 tclass=process permissive=1
- type=AVC msg=audit(1593763869.723:6346): avc: denied { execute } for pid=29205 comm="admin_socket" name="sudo" dev="sda1" ino=7667 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:sudo_exec_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1593763869.723:6346): avc: denied { read open } for pid=29205 comm="admin_socket" path="/usr/bin/sudo" dev="sda1" ino=7667 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:sudo_exec_t:s0 tclass=file permissive=1
- type=USER_AVC msg=audit(1593763869.855:6363): pid=2437 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc: denied { send_msg } for msgtype=method_return dest=:1.1368 spid=2568 tpid=29205 scontext=system_u:system_r:systemd_logind_t:s0 tcontext=system_u:system_r:ceph_t:s0 tclass=dbus permissive=1 exe="/usr/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?' UID="dbus" AUID="unset" SAUID="dbus"
- type=AVC msg=audit(1593763869.770:6349): avc: denied { open } for pid=29206 comm="unix_chkpwd" path="/etc/shadow" dev="sda1" ino=1545 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:shadow_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1593763869.771:6352): avc: denied { nlmsg_relay } for pid=29205 comm="sudo" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:system_r:ceph_t:s0 tclass=netlink_audit_socket permissive=1
- type=AVC msg=audit(1593763869.856:6364): avc: denied { read } for pid=29205 comm="sudo" name="utmp" dev="tmpfs" ino=15407 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:initrc_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1593763869.856:6365): avc: denied { lock } for pid=29205 comm="sudo" path="/run/utmp" dev="tmpfs" ino=15407 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:initrc_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1593763869.732:6347): avc: denied { setrlimit } for pid=29205 comm="sudo" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:system_r:ceph_t:s0 tclass=process permissive=1
- type=AVC msg=audit(1593763869.770:6350): avc: denied { getattr } for pid=29206 comm="unix_chkpwd" path="/etc/shadow" dev="sda1" ino=1545 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:shadow_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1593763869.771:6352): avc: denied { audit_write } for pid=29205 comm="sudo" capability=29 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:system_r:ceph_t:s0 tclass=capability permissive=1
- type=AVC msg=audit(1593763869.856:6367): avc: denied { audit_write } for pid=29205 comm="sudo" capability=29 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:system_r:ceph_t:s0 tclass=capability permissive=1
- type=AVC msg=audit(1593763869.856:6367): avc: denied { nlmsg_relay } for pid=29205 comm="sudo" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:system_r:ceph_t:s0 tclass=netlink_audit_socket permissive=1
- type=AVC msg=audit(1593763869.769:6348): avc: denied { map } for pid=29206 comm="unix_chkpwd" path="/usr/sbin/unix_chkpwd" dev="sda1" ino=5903 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:chkpwd_exec_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1593763869.856:6366): avc: denied { create } for pid=29205 comm="sudo" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:system_r:ceph_t:s0 tclass=netlink_audit_socket permissive=1
- type=AVC msg=audit(1593763869.856:6369): avc: denied { sys_resource } for pid=29219 comm="sudo" capability=24 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:system_r:ceph_t:s0 tclass=capability permissive=1
- type=AVC msg=audit(1593763869.723:6346): avc: denied { map } for pid=29205 comm="sudo" path="/usr/bin/sudo" dev="sda1" ino=7667 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:sudo_exec_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1593763869.770:6349): avc: denied { read } for pid=29206 comm="unix_chkpwd" name="shadow" dev="sda1" ino=1545 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:shadow_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1593763869.771:6351): avc: denied { create } for pid=29205 comm="sudo" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:system_r:ceph_t:s0 tclass=netlink_audit_socket permissive=1
- type=AVC msg=audit(1593763869.769:6348): avc: denied { execute } for pid=29206 comm="sudo" name="unix_chkpwd" dev="sda1" ino=5903 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:chkpwd_exec_t:s0 tclass=file permissive=1
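An AVC flood like this one is easier to triage once each denial is reduced to a (source type, target type, class, permissions) tuple; here all denials originate from the ceph_t domain hitting sudo/unix_chkpwd/shadow/utmp/dbus targets. A minimal summarizer sketch, assuming plain `type=AVC` audit lines as input (illustrative only, not part of teuthology):

```python
import re
from collections import Counter

# Parses 'type=AVC' audit lines like the denials listed above.
# Illustrative sketch; field names follow the standard audit record format.
AVC_RE = re.compile(
    r"avc:\s+denied\s+\{ (?P<perms>[^}]+)\} for .*?"
    r"comm=\"(?P<comm>[^\"]+)\".*?"
    r"scontext=(?P<scontext>\S+)\s+tcontext=(?P<tcontext>\S+)\s+tclass=(?P<tclass>\S+)"
)

def summarize(lines):
    """Count denials by (source type, target type, class, permissions)."""
    counts = Counter()
    for line in lines:
        m = AVC_RE.search(line)
        if m:
            key = (
                m.group("scontext").split(":")[2],  # e.g. ceph_t
                m.group("tcontext").split(":")[2],  # e.g. chkpwd_exec_t
                m.group("tclass"),                  # e.g. file
                m.group("perms").strip(),           # e.g. "read open"
            )
            counts[key] += 1
    return counts
```

Sorting the resulting counter makes the dominant denial pattern (here, ceph_t exec'ing sudo and reading /etc/shadow) stand out immediately.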

pass 5196799 2020-07-03 05:07:41 2020-07-03 05:31:22 2020-07-03 05:57:21 0:25:59 0:13:57 0:12:02 smithi master ubuntu rgw/multifs/{clusters/fixed-2 frontend/civetweb objectstore/filestore-xfs overrides rgw_pool_type/ec-profile tasks/rgw_bucket_quota} 2
fail 5196730 2020-07-03 05:02:22 2020-07-03 05:02:27 2020-07-03 05:36:27 0:34:00 0:18:54 0:15:06 smithi master ubuntu 18.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/rados_cls_all} 3
Failure Reason:

Command failed on smithi158 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone git://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout baa1ea6a9656c3db06c66032fa80b476721947ba'
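The same status-128 failure appears in jobs 5196613 and 5196562 below, always for the same SHA1 on git.ceph.com; exit status 128 is git's generic fatal-error status, so either the `git clone` or the `git checkout` step of the chain aborted. A small helper that reconstructs the clone-and-checkout chain (illustrative sketch; repo URL, destination, and SHA1 are taken verbatim from the failure above, and teuthology assembles this command internally):

```python
def clone_cmd(repo_url, dest, sha1):
    """Rebuild the workunit clone-and-checkout shell chain seen in the
    failure reason. Each step runs only if the previous one succeeded,
    so a single non-zero status aborts the whole chain."""
    return (
        f"rm -rf {dest}"
        f" && git clone {repo_url} {dest}"
        f" && cd {dest}"
        f" && git checkout {sha1}"
    )
```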

pass 5196703 2020-07-03 03:59:44 2020-07-03 04:35:11 2020-07-03 04:59:10 0:23:59 0:15:08 0:08:51 smithi master ubuntu 18.04 perf-basic/{ceph distros/ubuntu_latest objectstore/bluestore settings/optimized workloads/cosbench_64K_write} 1
pass 5196649 2020-07-03 03:18:53 2020-07-03 04:09:03 2020-07-03 04:37:02 0:27:59 0:09:52 0:18:07 smithi master ubuntu 18.04 fs/multiclient/{begin clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/ior-shared-file} 4
fail 5196613 2020-07-03 03:18:19 2020-07-03 03:52:48 2020-07-03 04:14:47 0:21:59 0:13:42 0:08:17 smithi master centos 8.1 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/filestore-xfs overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_misc} 2
Failure Reason:

Command failed on smithi149 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone git://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout baa1ea6a9656c3db06c66032fa80b476721947ba'

fail 5196562 2020-07-03 03:17:32 2020-07-03 03:34:52 2020-07-03 03:56:52 0:22:00 0:14:03 0:07:57 smithi master centos 8.1 fs/thrash/{begin ceph-thrash/mon clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_trivial_sync} 2
Failure Reason:

Command failed on smithi158 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone git://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout baa1ea6a9656c3db06c66032fa80b476721947ba'

pass 5196485 2020-07-03 01:51:08 2020-07-03 03:02:57 2020-07-03 03:36:57 0:34:00 0:27:11 0:06:49 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/cache-snaps-balanced} 2
pass 5196413 2020-07-03 01:50:18 2020-07-03 02:42:56 2020-07-03 03:02:56 0:20:00 0:12:43 0:07:17 smithi master rhel 8.1 rados/singleton/{all/mon-auth-caps msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 1
pass 5196372 2020-07-03 01:49:47 2020-07-03 02:28:12 2020-07-03 02:44:11 0:15:59 0:09:25 0:06:34 smithi master centos 8.1 rados/singleton-nomsgr/{all/version-number-sanity rados supported-random-distro$/{centos_8}} 1
fail 5196291 2020-07-03 01:48:43 2020-07-03 01:48:53 2020-07-03 02:28:53 0:40:00 0:25:52 0:14:08 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_7.6} msgr-failures/few rados thrashers/none thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds
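This message is teuthology's bounded polling loop giving up: 180 attempts over 180 seconds implies roughly one try per second. The general shape of such a loop can be sketched as follows (the names and implementation here are illustrative, not teuthology's actual code; only the error message format matches the failure above):

```python
import time

class MaxWhileTries(Exception):
    """Raised when a polled condition never becomes true. The name is
    borrowed from teuthology's exception; this class is a sketch."""

def wait_until(check, tries=180, interval=1, sleep=time.sleep):
    """Poll `check` up to `tries` times, `interval` seconds apart,
    raising if the condition never holds."""
    for _ in range(tries):
        if check():
            return
        sleep(interval)
    raise MaxWhileTries(
        "reached maximum tries (%d) after waiting for %d seconds"
        % (tries, tries * interval)
    )
```

Injecting `sleep` as a parameter keeps the loop testable without real delays.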

fail 5195837 2020-07-02 15:42:03 2020-07-02 15:59:32 2020-07-02 16:23:32 0:24:00 0:13:34 0:10:26 smithi master centos 8.1 rados/singleton/{all/thrash_cache_writeback_proxy_none msgr-failures/many msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op copy_from 50 --op write_excl 50 --pool base'

fail 5195799 2020-07-02 15:40:45 2020-07-02 18:07:43 2020-07-02 19:01:44 0:54:01 0:27:24 0:26:37 smithi master rhel 8.1 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-comp-zstd pool/ec-data-pool supported-random-distro$/{rhel_8} workloads/c_api_tests} 3
Failure Reason:

"2020-07-02T18:45:42.380931+0000 mon.b (mon.0) 343 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive, 2 pgs peering (PG_AVAILABILITY)" in cluster log
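This failure (like the near-identical one in job 5195757) appears to be a cluster-log failure: the job is marked failed because a transient PG_AVAILABILITY warning showed up in the log, not because the rbd workload itself errored. The health-check code is the parenthesized token at the end of the line; a sketch of extracting it (illustrative, not teuthology's actual log scanner):

```python
import re

# Cluster-log health warnings end with the check code in parentheses,
# e.g. "... 2 pgs peering (PG_AVAILABILITY)". Illustrative sketch.
HEALTH_RE = re.compile(r"Health check failed: (?P<detail>.+) \((?P<code>[A-Z_]+)\)")

def health_check(line):
    """Return (code, detail) from a '[WRN] Health check failed' log
    line, or None if the line is not a health-check warning."""
    m = HEALTH_RE.search(line)
    return (m.group("code"), m.group("detail")) if m else None
```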

fail 5195757 2020-07-02 15:39:37 2020-07-02 17:59:34 2020-07-02 18:29:34 0:30:00 0:22:22 0:07:38 smithi master centos 8.1 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/filestore-xfs pool/ec-data-pool supported-random-distro$/{centos_8} workloads/c_api_tests} 3
Failure Reason:

"2020-07-02T18:16:34.342579+0000 mon.b (mon.0) 390 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 5195656 2020-07-02 15:32:01 2020-07-02 16:45:36 2020-07-02 17:59:37 1:14:01 0:55:53 0:18:08 smithi master ubuntu 18.04 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-bitmap pool/none supported-random-distro$/{ubuntu_latest} workloads/python_api_tests_with_journaling} 3
Failure Reason:

"2020-07-02T17:09:49.650836+0000 osd.4 (osd.4) 3 : cluster [WRN] slow request osd_op(client.4486.0:65 3.9 3.5871a049 (undecoded) ondisk+write+known_if_redirected e23) initiated 2020-07-02T17:09:19.108381+0000 currently delayed" in cluster log
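The slow-request warning carries both the time it was logged and the time the osd_op was initiated, so the outstanding time can be computed directly; here the op had been pending for about 30.5 seconds. A small sketch of that arithmetic, using the timestamp format as printed in the cluster log:

```python
from datetime import datetime

# Cluster-log timestamp format, e.g. "2020-07-02T17:09:49.650836+0000".
FMT = "%Y-%m-%dT%H:%M:%S.%f%z"

def op_delay_seconds(logged_at, initiated_at):
    """Seconds an osd_op had been outstanding when the slow-request
    warning was logged (illustrative helper, not Ceph code)."""
    return (datetime.strptime(logged_at, FMT)
            - datetime.strptime(initiated_at, FMT)).total_seconds()
```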