Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 4666984 2020-01-14 08:27:52 2020-01-14 08:29:34 2020-01-14 09:13:34 0:44:00 0:32:06 0:11:54 smithi master centos 8.0 rados/cephadm/{fixed-2.yaml mode/packaged.yaml msgr/async-v1only.yaml start.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_api_tests.yaml} 2
pass 4666985 2020-01-14 08:27:53 2020-01-14 08:29:40 2020-01-14 08:53:40 0:24:00 0:13:47 0:10:13 smithi master centos 8.0 rados/singleton-flat/cephadm/{cephadm.yaml distro/centos_latest.yaml} 1
pass 4666986 2020-01-14 08:27:53 2020-01-14 08:33:19 2020-01-14 09:07:19 0:34:00 0:25:01 0:08:59 smithi master rhel 8.0 rados/cephadm/{fixed-2.yaml mode/root.yaml msgr/async-v2only.yaml start.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_python.yaml} 2
fail 4666987 2020-01-14 08:27:54 2020-01-14 08:33:22 2020-01-14 09:17:22 0:44:00 0:31:20 0:12:40 smithi master centos 8.0 rados/cephadm/{fixed-2.yaml mode/packaged.yaml msgr/async.yaml start.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_api_tests.yaml} 2
Failure Reason:

SELinux denials found on ubuntu@smithi073.front.sepia.ceph.com: ['type=AVC msg=audit(1578993121.388:6628): avc: denied { read } for pid=114583 comm="logrotate" name="f757962a-36aa-11ea-99db-001a4aab830c" dev="sda1" ino=396036 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1', 'type=AVC msg=audit(1578993121.389:6629): avc: denied { getattr } for pid=114583 comm="logrotate" path="/var/log/ceph/f757962a-36aa-11ea-99db-001a4aab830c/ceph.audit.log" dev="sda1" ino=396042 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:container_file_t:s0 tclass=file permissive=1']

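The AVC records above show logrotate (domain logrotate_t) being denied read/getattr on Ceph's log directory, which cephadm's containers leave labeled container_file_t; in permissive mode the operations still succeed, but teuthology reports any SELinux denial as a job failure. A minimal sketch of pulling the interesting fields out of one such record (the record is copied verbatim from the failure reason above; the parse_avc helper is illustrative, not part of teuthology):

    import re

    # First AVC record quoted in the failure reason above.
    record = (
        'type=AVC msg=audit(1578993121.388:6628): avc: denied { read } for '
        'pid=114583 comm="logrotate" name="f757962a-36aa-11ea-99db-001a4aab830c" '
        'dev="sda1" ino=396036 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 '
        'tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1'
    )

    def parse_avc(line):
        """Return the denied permission and the key=value fields of an AVC record."""
        denied = re.search(r'denied\s+\{\s*([^}]+?)\s*\}', line).group(1)
        fields = {k: v.strip('"') for k, v in re.findall(r'(\w+)=("[^"]*"|\S+)', line)}
        return denied, fields

    denied, f = parse_avc(record)
    # prints: read logrotate ...logrotate_t... ...container_file_t... dir
    print(denied, f['comm'], f['scontext'], f['tcontext'], f['tclass'])
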
pass 4666988 2020-01-14 08:27:55 2020-01-14 08:33:43 2020-01-14 09:09:43 0:36:00 0:24:33 0:11:27 smithi master rhel 8.0 rados/cephadm/{fixed-2.yaml mode/root.yaml msgr/async-v1only.yaml start.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_python.yaml} 2
pass 4666989 2020-01-14 08:27:56 2020-01-14 08:37:38 2020-01-14 08:55:37 0:17:59 0:09:04 0:08:55 smithi master centos 8.0 rados/singleton-flat/cephadm_orchestrator/{2-node-mgr.yaml centos_latest.yaml cephadm_orchestrator.yaml mgr.yaml} 2
pass 4666990 2020-01-14 08:27:56 2020-01-14 08:37:38 2020-01-14 09:19:38 0:42:00 0:30:51 0:11:09 smithi master centos 8.0 rados/cephadm/{fixed-2.yaml mode/packaged.yaml msgr/async-v2only.yaml start.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_api_tests.yaml} 2
pass 4666991 2020-01-14 08:27:57 2020-01-14 08:37:38 2020-01-14 09:11:38 0:34:00 0:22:18 0:11:42 smithi master centos 8.0 rados/cephadm/{fixed-2.yaml mode/root.yaml msgr/async.yaml start.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_python.yaml} 2
pass 4666992 2020-01-14 08:27:58 2020-01-14 08:39:33 2020-01-14 09:23:33 0:44:00 0:32:08 0:11:52 smithi master ubuntu 18.04 rados/cephadm/{fixed-2.yaml mode/root.yaml msgr/async-v1only.yaml start.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_api_tests.yaml} 2
fail 4666993 2020-01-14 08:27:59 2020-01-14 08:41:40 2020-01-14 09:15:40 0:34:00 0:22:32 0:11:28 smithi master ubuntu 18.04 rados/cephadm/{fixed-2.yaml mode/packaged.yaml msgr/async-v2only.yaml start.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_python.yaml} 2
Failure Reason:

"2020-01-14T08:58:30.313491+0000 mon.a (mon.0) 124 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log

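This MON_DOWN failure is not a crash: a monitor briefly dropped out of quorum during the test, and teuthology fails any job whose cluster log contains [WRN]/[ERR] entries the job has not whitelisted. A rough sketch of that scan in Python (the function name and whitelist contents are illustrative; real jobs declare whitelisted patterns in their YAML under the ceph task):

    import re

    def unwhitelisted_warnings(log_lines, whitelist):
        """Return [WRN]/[ERR] cluster-log lines not matched by any whitelist regex."""
        return [line for line in log_lines
                if ('[WRN]' in line or '[ERR]' in line)
                and not any(re.search(pat, line) for pat in whitelist)]

    log = ['2020-01-14T08:58:30.313491+0000 mon.a (mon.0) 124 : cluster [WRN] '
           'Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)']

    # With an empty whitelist the line is reported and the job fails, as here;
    # with r'\(MON_DOWN\)' whitelisted it would be ignored.
    print(unwhitelisted_warnings(log, []))
    print(unwhitelisted_warnings(log, [r'\(MON_DOWN\)']))
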
fail 4666994 2020-01-14 08:28:00 2020-01-14 08:41:43 2020-01-14 09:21:43 0:40:00 0:32:59 0:07:01 smithi master rhel 8.0 rados/cephadm/{fixed-2.yaml mode/root.yaml msgr/async.yaml start.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_api_tests.yaml} 2
Failure Reason:

SELinux denials found on ubuntu@smithi093.front.sepia.ceph.com: ['type=AVC msg=audit(1578992821.488:6641): avc: denied { getattr } for pid=34847 comm="logrotate" path="/var/log/ceph/a8e35910-36ab-11ea-99db-001a4aab830c/ceph.audit.log" dev="sda1" ino=395949 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:container_file_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1578992821.488:6640): avc: denied { read } for pid=34847 comm="logrotate" name="a8e35910-36ab-11ea-99db-001a4aab830c" dev="sda1" ino=394401 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1']

pass 4666995 2020-01-14 08:28:01 2020-01-14 08:41:43 2020-01-14 09:13:43 0:32:00 0:21:38 0:10:22 smithi master centos 8.0 rados/cephadm/{fixed-2.yaml mode/packaged.yaml msgr/async-v1only.yaml start.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_python.yaml} 2
fail 4666996 2020-01-14 08:28:01 2020-01-14 08:43:42 2020-01-14 09:07:41 0:23:59 0:12:09 0:11:50 smithi master ubuntu 18.04 rados/singleton-flat/cephadm/{cephadm.yaml distro/ubuntu_latest.yaml} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi065 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=87a58205bfbe5b8b0d6d2e8e8b0d63dea8ae02cc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 4666997 2020-01-14 08:28:02 2020-01-14 08:45:18 2020-01-14 09:23:18 0:38:00 0:32:32 0:05:28 smithi master rhel 8.0 rados/cephadm/{fixed-2.yaml mode/root.yaml msgr/async-v2only.yaml start.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_api_tests.yaml} 2
fail 4666998 2020-01-14 08:28:03 2020-01-14 08:45:19 2020-01-14 09:19:18 0:33:59 0:20:49 0:13:10 smithi master centos 8.0 rados/cephadm/{fixed-2.yaml mode/packaged.yaml msgr/async.yaml start.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_python.yaml} 2
Failure Reason:

"2020-01-14T09:05:29.428719+0000 mon.a (mon.0) 122 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log