Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 1255324 2017-06-02 07:09:03 2017-06-02 07:09:31 2017-06-02 07:33:30 0:23:59 0:20:47 0:03:12 smithi master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

SELinux denials found on ubuntu@smithi136.front.sepia.ceph.com: ['type=AVC msg=audit(1496387682.766:3747): avc: denied { write } for pid=20633 comm="ceph-mon" name="ceph" dev="sda1" ino=18352198 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=dir', 'type=AVC msg=audit(1496387682.766:3747): avc: denied { open } for pid=20633 comm="ceph-mon" path="/var/log/ceph/ceph-mon.smithi136.log" dev="sda1" ino=18352199 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file', 'type=AVC msg=audit(1496387682.766:3747): avc: denied { create } for pid=20633 comm="ceph-mon" name="ceph-mon.smithi136.log" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file', 'type=AVC msg=audit(1496387682.766:3747): avc: denied { add_name } for pid=20633 comm="ceph-mon" name="ceph-mon.smithi136.log" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=dir', 'type=AVC msg=audit(1496387723.662:3961): avc: denied { open } for pid=21357 comm="ceph-mon" path="/var/log/ceph/ceph-mon.smithi136.log" dev="sda1" ino=18352199 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file']
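Every denial in this list follows the standard AVC format, and they all show ceph-mon (running as `ceph_t`) being blocked from creating and writing its log under a directory labeled `var_log_t`, which usually points at a mislabeled `/var/log/ceph`. A minimal sketch (a hypothetical helper, not part of teuthology) for pulling the denied permission, the source/target contexts, and the object class out of one such entry:

```python
import re

# Matches the fields present in each AVC denial string reported above:
# the denied permission set, source context, target context, and class.
AVC_RE = re.compile(
    r"avc:\s+denied\s+\{ (?P<perm>[^}]+) \}.*?"
    r"scontext=(?P<scontext>\S+)\s+"
    r"tcontext=(?P<tcontext>\S+)\s+"
    r"tclass=(?P<tclass>\S+)"
)

def parse_avc(denial: str) -> dict:
    """Extract permission, contexts, and object class from one AVC line."""
    m = AVC_RE.search(denial)
    if not m:
        raise ValueError("not an AVC denial: %r" % denial)
    return m.groupdict()

# First denial from the failure reason above, as a single string.
denial = (
    'type=AVC msg=audit(1496387682.766:3747): avc: denied { write } '
    'for pid=20633 comm="ceph-mon" name="ceph" dev="sda1" ino=18352198 '
    'scontext=system_u:system_r:ceph_t:s0 '
    'tcontext=system_u:object_r:var_log_t:s0 tclass=dir'
)
print(parse_avc(denial))
```

With all five denials sharing `scontext=...ceph_t...` and `tcontext=...var_log_t...`, the usual first check on the affected node would be the log directory's label (`restorecon -Rv /var/log/ceph` relabels it if the ceph SELinux policy is installed).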

pass 1255325 2017-06-02 07:09:04 2017-06-02 07:09:31 2017-06-02 07:45:30 0:35:59 0:22:36 0:13:23 smithi master rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/rados_api_tests.yaml} 2
pass 1255326 2017-06-02 07:09:05 2017-06-02 07:09:31 2017-06-02 07:53:31 0:44:00 0:27:09 0:16:51 smithi master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/rados_api_tests.yaml} 2
pass 1255327 2017-06-02 07:09:05 2017-06-02 07:11:28 2017-06-02 07:47:27 0:35:59 0:25:39 0:10:20 smithi master rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/radosbench.yaml} 2
pass 1255328 2017-06-02 07:09:06 2017-06-02 07:11:42 2017-06-02 07:33:42 0:22:00 0:18:22 0:03:38 smithi master rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/rados_api_tests.yaml} 2
pass 1255329 2017-06-02 07:09:07 2017-06-02 07:14:41 2017-06-02 07:56:41 0:42:00 0:23:29 0:18:31 smithi master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/rados_api_tests.yaml} 2
pass 1255330 2017-06-02 07:09:07 2017-06-02 07:15:18 2017-06-02 07:53:18 0:38:00 0:20:21 0:17:39 smithi master rados/monthrash/{ceph.yaml clusters/3-mons.yaml d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_api_tests.yaml} 2
pass 1255331 2017-06-02 07:09:08 2017-06-02 07:21:13 2017-06-02 07:53:12 0:31:59 0:28:28 0:03:31 smithi master rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml d-thrash/default.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} 2
pass 1255332 2017-06-02 07:09:09 2017-06-02 07:22:49 2017-06-02 07:54:49 0:32:00 0:25:34 0:06:26 smithi master rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
fail 1255333 2017-06-02 07:09:09 2017-06-02 07:23:19 2017-06-02 07:41:18 0:17:59 0:09:45 0:08:14 smithi master rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml} 1
Failure Reason:

timed out waiting for mon to be updated with osd.2: 0 < 21474836529

fail 1255334 2017-06-02 07:09:10 2017-06-02 07:23:24 2017-06-02 07:41:24 0:18:00 0:07:01 0:10:59 smithi master rados/singleton/{all/rest-api.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml} 1
Failure Reason:

Command failed (workunit test rest/test.py) on smithi183 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mgr-stats TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rest/test.py'

fail 1255335 2017-06-02 07:09:11 2017-06-02 07:24:57 2017-06-02 07:52:57 0:28:00 0:21:03 0:06:57 smithi master rados/singleton/{all/thrash-rados.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml} 2
Failure Reason:

SELinux denials found on ubuntu@smithi131.front.sepia.ceph.com: ['type=AVC msg=audit(1496388908.483:26018): avc: denied { open } for pid=19319 comm="ceph-mon" path="/var/log/ceph/ceph-mon.smithi131.log" dev="sda1" ino=18483910 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file', 'type=AVC msg=audit(1496388908.483:26018): avc: denied { write } for pid=19319 comm="ceph-mon" name="ceph" dev="sda1" ino=18483909 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=dir', 'type=AVC msg=audit(1496388908.483:26018): avc: denied { add_name } for pid=19319 comm="ceph-mon" name="ceph-mon.smithi131.log" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=dir', 'type=AVC msg=audit(1496388908.483:26018): avc: denied { create } for pid=19319 comm="ceph-mon" name="ceph-mon.smithi131.log" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file', 'type=AVC msg=audit(1496388918.694:26062): avc: denied { open } for pid=19508 comm="ceph-mon" path="/var/log/ceph/ceph-mon.smithi131.log" dev="sda1" ino=18483910 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file']

fail 1255336 2017-06-02 07:09:11 2017-06-02 07:24:57 2017-06-02 10:41:01 3:16:04 3:07:56 0:08:08 smithi master rados/singleton-nomsgr/{all/health-warnings.yaml rados.yaml} 1
Failure Reason:

Command failed (workunit test rados/test_health_warnings.sh) on smithi098 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mgr-stats TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_health_warnings.sh'
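Status 124 in the failure above is the exit code GNU `timeout` (which wraps the workunit as `timeout 3h ...`) returns when it has to kill a command that overruns its deadline, so this workunit hung for the full three hours rather than failing on its own. A minimal reproduction of that convention, assuming GNU coreutils `timeout` is available:

```python
import subprocess

# GNU `timeout` exits with status 124 when it kills an overrunning
# command, which is why the hung workunit above reports status 124.
proc = subprocess.run(["timeout", "1", "sleep", "5"])
print(proc.returncode)
```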