User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
sage | 2018-05-23 13:25:38 | 2018-05-23 13:30:06 | 2018-05-23 16:08:19 | 2:38:13 | upgrade:luminous-x | wip-sage-testing-20180521-1653 | smithi | 2836256 | 7 | 17 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 2575998 | 2018-05-23 13:25:42 | 2018-05-23 13:30:06 | 2018-05-23 15:38:07 | 2:08:01 | 1:52:57 | 0:15:04 | smithi | master | centos | 7.4 | upgrade:luminous-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml rgw_ragweed_prepare.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw.yaml rgw_ragweed_check.yaml rgw_swift.yaml} distros/centos_latest.yaml objectstore/bluestore.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/import_export.sh) on smithi176 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage-testing-20180521-1653 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rbd/import_export.sh'
fail | 2575999 | 2018-05-23 13:25:43 | 2018-05-23 13:46:25 | 2018-05-23 14:08:20 | 0:21:55 | | | smithi | master | centos | 7.4 | upgrade:luminous-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_latest.yaml objectstore/bluestore.yaml thrashosds-health.yaml} | 2 |
Failure Reason: machine smithi111.front.sepia.ceph.com is locked by scheduled_jdillaman@cube-1, not scheduled_sage@teuthology
pass | 2576000 | 2018-05-23 13:25:44 | 2018-05-23 13:46:25 | 2018-05-23 14:48:20 | 1:01:55 | 0:51:04 | 0:10:51 | smithi | master | centos | 7.4 | upgrade:luminous-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-luminous-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-ec-workload.yaml 5-finish-upgrade.yaml 7-final-workload.yaml distros/centos_latest.yaml objectstore/bluestore.yaml thrashosds-health.yaml} | 3 |
fail | 2576001 | 2018-05-23 13:25:45 | 2018-05-23 13:48:16 | 2018-05-23 14:08:14 | 0:19:58 | | | smithi | master | rhel | 7.5 | upgrade:luminous-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml rgw_ragweed_prepare.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw.yaml rgw_ragweed_check.yaml rgw_swift.yaml} distros/rhel_latest.yaml objectstore/filestore-xfs.yaml} | 2 |
Failure Reason: machine smithi111.front.sepia.ceph.com is locked by scheduled_jdillaman@cube-1, not scheduled_sage@teuthology
fail | 2576002 | 2018-05-23 13:25:45 | 2018-05-23 13:50:14 | 2018-05-23 15:48:16 | 1:58:02 | 1:45:22 | 0:12:40 | smithi | master | ubuntu | 16.04 | upgrade:luminous-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml rgw_ragweed_prepare.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw.yaml rgw_ragweed_check.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml objectstore/bluestore.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/import_export.sh) on smithi007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage-testing-20180521-1653 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rbd/import_export.sh'
pass | 2576003 | 2018-05-23 13:25:46 | 2018-05-23 13:50:14 | 2018-05-23 15:36:15 | 1:46:01 | 1:40:15 | 0:05:46 | smithi | master | rhel | 7.5 | upgrade:luminous-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_latest.yaml objectstore/filestore-xfs.yaml thrashosds-health.yaml} | 3 |
pass | 2576004 | 2018-05-23 13:25:47 | 2018-05-23 13:50:14 | 2018-05-23 14:42:14 | 0:52:00 | 0:46:44 | 0:05:16 | smithi | master | rhel | 7.5 | upgrade:luminous-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-luminous-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-ec-workload.yaml 5-finish-upgrade.yaml 7-final-workload.yaml distros/rhel_latest.yaml objectstore/filestore-xfs.yaml thrashosds-health.yaml} | 3 |
fail | 2576005 | 2018-05-23 13:25:47 | 2018-05-23 13:52:16 | 2018-05-23 15:56:17 | 2:04:01 | 1:53:47 | 0:10:14 | smithi | master | centos | 7.4 | upgrade:luminous-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml rgw_ragweed_prepare.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw.yaml rgw_ragweed_check.yaml rgw_swift.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/import_export.sh) on smithi146 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage-testing-20180521-1653 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rbd/import_export.sh'
fail | 2576006 | 2018-05-23 13:25:48 | 2018-05-23 13:52:16 | 2018-05-23 15:48:18 | 1:56:02 | 1:48:26 | 0:07:36 | smithi | master | rhel | 7.5 | upgrade:luminous-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml rgw_ragweed_prepare.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw.yaml rgw_ragweed_check.yaml rgw_swift.yaml} distros/rhel_latest.yaml objectstore/bluestore.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/import_export.sh) on smithi032 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage-testing-20180521-1653 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rbd/import_export.sh'
fail | 2576007 | 2018-05-23 13:25:49 | 2018-05-23 13:52:16 | 2018-05-23 14:08:15 | 0:15:59 | | | smithi | master | ubuntu | 16.04 | upgrade:luminous-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_latest.yaml objectstore/bluestore.yaml thrashosds-health.yaml} | 2 |
Failure Reason: machine smithi111.front.sepia.ceph.com is locked by scheduled_jdillaman@cube-1, not scheduled_sage@teuthology
pass | 2576008 | 2018-05-23 13:25:49 | 2018-05-23 13:52:16 | 2018-05-23 14:50:16 | 0:58:00 | 0:45:12 | 0:12:48 | smithi | master | ubuntu | 16.04 | upgrade:luminous-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-luminous-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-ec-workload.yaml 5-finish-upgrade.yaml 7-final-workload.yaml distros/ubuntu_latest.yaml objectstore/bluestore.yaml thrashosds-health.yaml} | 3 |
fail | 2576009 | 2018-05-23 13:25:50 | 2018-05-23 13:52:16 | 2018-05-23 15:46:17 | 1:54:01 | 1:43:58 | 0:10:03 | smithi | master | ubuntu | 16.04 | upgrade:luminous-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml rgw_ragweed_prepare.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw.yaml rgw_ragweed_check.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/import_export.sh) on smithi027 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage-testing-20180521-1653 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rbd/import_export.sh'
fail | 2576010 | 2018-05-23 13:25:51 | 2018-05-23 13:54:17 | 2018-05-23 15:58:18 | 2:04:01 | 1:53:30 | 0:10:31 | smithi | master | centos | 7.4 | upgrade:luminous-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml rgw_ragweed_prepare.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw.yaml rgw_ragweed_check.yaml rgw_swift.yaml} distros/centos_latest.yaml objectstore/bluestore.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/import_export.sh) on smithi168 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage-testing-20180521-1653 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rbd/import_export.sh'
pass | 2576011 | 2018-05-23 13:25:51 | 2018-05-23 13:54:17 | 2018-05-23 15:50:18 | 1:56:01 | 1:42:50 | 0:13:11 | smithi | master | centos | 7.4 | upgrade:luminous-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml thrashosds-health.yaml} | 3 |
pass | 2576012 | 2018-05-23 13:25:52 | 2018-05-23 13:54:17 | 2018-05-23 15:00:17 | 1:06:00 | 0:54:33 | 0:11:27 | smithi | master | centos | 7.4 | upgrade:luminous-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-luminous-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-ec-workload.yaml 5-finish-upgrade.yaml 7-final-workload.yaml distros/centos_latest.yaml objectstore/filestore-xfs.yaml thrashosds-health.yaml} | 3 |
fail | 2576013 | 2018-05-23 13:25:53 | 2018-05-23 13:54:17 | 2018-05-23 14:10:16 | 0:15:59 | | | smithi | master | rhel | 7.5 | upgrade:luminous-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml rgw_ragweed_prepare.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw.yaml rgw_ragweed_check.yaml rgw_swift.yaml} distros/rhel_latest.yaml objectstore/filestore-xfs.yaml} | 2 |
Failure Reason: machine smithi111.front.sepia.ceph.com is locked by scheduled_jdillaman@cube-1, not scheduled_sage@teuthology
fail | 2576014 | 2018-05-23 13:25:53 | 2018-05-23 13:54:17 | 2018-05-23 15:52:18 | 1:58:01 | 1:45:20 | 0:12:41 | smithi | master | ubuntu | 16.04 | upgrade:luminous-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml rgw_ragweed_prepare.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw.yaml rgw_ragweed_check.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml objectstore/bluestore.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/import_export.sh) on smithi151 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage-testing-20180521-1653 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rbd/import_export.sh'
pass | 2576015 | 2018-05-23 13:25:54 | 2018-05-23 13:54:17 | 2018-05-23 16:08:19 | 2:14:02 | 2:07:45 | 0:06:17 | smithi | master | rhel | 7.5 | upgrade:luminous-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_latest.yaml objectstore/bluestore.yaml thrashosds-health.yaml} | 3 |
fail | 2576016 | 2018-05-23 13:25:55 | 2018-05-23 13:54:17 | 2018-05-23 14:52:17 | 0:58:00 | 0:50:49 | 0:07:11 | smithi | master | rhel | 7.5 | upgrade:luminous-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-luminous-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-ec-workload.yaml 5-finish-upgrade.yaml 7-final-workload.yaml distros/rhel_latest.yaml objectstore/bluestore.yaml thrashosds-health.yaml} | 3 |
Failure Reason: SELinux denials found on ubuntu@smithi104.front.sepia.ceph.com: ['type=AVC msg=audit(1527085772.212:4519): avc: denied { read write open } for pid=75356 comm="updatedb" path="/var/lib/mlocate/mlocate.db.NFYQAx" dev="sda1" ino=72422 scontext=system_u:system_r:locate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1527085776.464:4524): avc: denied { rename } for pid=75356 comm="updatedb" name="mlocate.db.NFYQAx" dev="sda1" ino=72422 scontext=system_u:system_r:locate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1527085772.212:4519): avc: denied { create } for pid=75356 comm="updatedb" name="mlocate.db.NFYQAx" scontext=system_u:system_r:locate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1527085772.212:4519): avc: denied { add_name } for pid=75356 comm="updatedb" name="mlocate.db.NFYQAx" scontext=system_u:system_r:locate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir', 'type=AVC msg=audit(1527085772.212:4519): avc: denied { write } for pid=75356 comm="updatedb" name="mlocate" dev="sda1" ino=414 scontext=system_u:system_r:locate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir', 'type=AVC msg=audit(1527085772.510:4521): avc: denied { write } for pid=75356 comm="updatedb" path="/var/lib/mlocate/mlocate.db.NFYQAx" dev="sda1" ino=72422 scontext=system_u:system_r:locate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1527085776.464:4523): avc: denied { setattr } for pid=75356 comm="updatedb" name="mlocate.db.NFYQAx" dev="sda1" ino=72422 scontext=system_u:system_r:locate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file']
fail | 2576017 | 2018-05-23 13:25:55 | 2018-05-23 13:54:17 | 2018-05-23 15:54:18 | 2:00:01 | 1:51:06 | 0:08:55 | smithi | master | centos | 7.4 | upgrade:luminous-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml rgw_ragweed_prepare.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw.yaml rgw_ragweed_check.yaml rgw_swift.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/import_export.sh) on smithi018 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage-testing-20180521-1653 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rbd/import_export.sh'
fail | 2576018 | 2018-05-23 13:25:56 | 2018-05-23 13:56:17 | 2018-05-23 15:54:18 | 1:58:01 | 1:50:35 | 0:07:26 | smithi | master | rhel | 7.5 | upgrade:luminous-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml rgw_ragweed_prepare.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw.yaml rgw_ragweed_check.yaml rgw_swift.yaml} distros/rhel_latest.yaml objectstore/bluestore.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/import_export.sh) on smithi087 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage-testing-20180521-1653 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rbd/import_export.sh'
fail | 2576019 | 2018-05-23 13:25:57 | 2018-05-23 13:56:17 | 2018-05-23 14:08:16 | 0:11:59 | | | smithi | master | ubuntu | 16.04 | upgrade:luminous-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml thrashosds-health.yaml} | 2 |
Failure Reason: machine smithi111.front.sepia.ceph.com is locked by scheduled_jdillaman@cube-1, not scheduled_sage@teuthology
fail | 2576020 | 2018-05-23 13:25:57 | 2018-05-23 13:56:17 | 2018-05-23 14:08:16 | 0:11:59 | | | smithi | master | ubuntu | 16.04 | upgrade:luminous-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-luminous-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-ec-workload.yaml 5-finish-upgrade.yaml 7-final-workload.yaml distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml thrashosds-health.yaml} | 2 |
Failure Reason: machine smithi111.front.sepia.ceph.com is locked by scheduled_jdillaman@cube-1, not scheduled_sage@teuthology
fail | 2576021 | 2018-05-23 13:25:58 | 2018-05-23 13:56:17 | 2018-05-23 15:52:18 | 1:56:01 | 1:43:28 | 0:12:33 | smithi | master | ubuntu | 16.04 | upgrade:luminous-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml rgw_ragweed_prepare.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw.yaml rgw_ragweed_check.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/import_export.sh) on smithi022 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage-testing-20180521-1653 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rbd/import_export.sh'