User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
sage | 2017-08-06 01:32:29 | 2017-08-06 01:34:16 | 2017-08-06 05:35:38 | 4:01:22 | upgrade:jewel-x | wip-jewel-x | smithi | 0925f01 | 14 | 8 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 1488013 | 2017-08-06 01:32:42 | 2017-08-06 01:34:17 | 2017-08-06 02:52:16 | 1:17:59 | 1:08:09 | 0:09:50 | smithi | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: "2017-08-06 01:49:52.759615 mon.a mon.0 172.21.15.83:6789/0 170 : cluster [WRN] MDS health message (mds.0): Behind on trimming (67/30)" in cluster log
fail | 1488014 | 2017-08-06 01:32:43 | 2017-08-06 01:34:17 | 2017-08-06 03:14:17 | 1:40:00 | 1:07:45 | 0:32:15 | smithi | master | centos | 7.3 | upgrade:jewel-x/point-to-point-x/{distros/centos_7.3.yaml point-to-point-upgrade.yaml} | 3 | |
Failure Reason: Command failed on smithi034 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1488015 | 2017-08-06 01:32:43 | 2017-08-06 01:34:17 | 2017-08-06 04:32:18 | 2:58:01 | 2:26:15 | 0:31:46 | smithi | master | centos | 7.3 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_latest.yaml thrashosds-health.yaml} | 3 | |
Failure Reason: SELinux denials found on ubuntu@smithi004.front.sepia.ceph.com: ['type=AVC msg=audit(1501989810.269:71358): avc: denied { rename } for pid=19853 comm="logrotate" name="logrotate.status.tmp" dev="sda1" ino=27266481 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1501989810.219:71356): avc: denied { write } for pid=19853 comm="logrotate" path="/var/lib/logrotate/logrotate.status.tmp" dev="sda1" ino=27266481 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1501989810.219:71355): avc: denied { open } for pid=19853 comm="logrotate" path="/var/lib/logrotate/logrotate.status" dev="sda1" ino=27266489 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1501989721.485:71169): avc: denied { read } for pid=19853 comm="logrotate" name="logrotate.status" dev="sda1" ino=27266489 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1501989810.219:71356): avc: denied { create } for pid=19853 comm="logrotate" name="logrotate.status.tmp" scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1501989721.485:71169): avc: denied { open } for pid=19853 comm="logrotate" path="/var/lib/logrotate/logrotate.status" dev="sda1" ino=27266489 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1501989810.269:71358): avc: denied { unlink } for pid=19853 comm="logrotate" name="logrotate.status" dev="sda1" ino=27266489 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1501989810.219:71357): avc: denied { setattr } for pid=19853 comm="logrotate" name="logrotate.status.tmp" dev="sda1" ino=27266481 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1501989810.219:71355): avc: denied { read } for pid=19853 comm="logrotate" name="logrotate.status" dev="sda1" ino=27266489 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file']
pass | 1488016 | 2017-08-06 01:32:44 | 2017-08-06 01:34:16 | 2017-08-06 03:18:17 | 1:44:01 | 0:59:06 | 0:44:55 | smithi | master | centos | 7.3 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_latest.yaml thrashosds-health.yaml} | 3 | |
fail | 1488017 | 2017-08-06 01:32:45 | 2017-08-06 01:34:16 | 2017-08-06 02:02:15 | 0:27:59 | 0:09:55 | 0:18:04 | smithi | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: Command failed on smithi004 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
pass | 1488018 | 2017-08-06 01:32:45 | 2017-08-06 01:34:16 | 2017-08-06 03:40:17 | 2:06:01 | 1:13:57 | 0:52:04 | smithi | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
pass | 1488019 | 2017-08-06 01:32:46 | 2017-08-06 01:34:16 | 2017-08-06 02:46:16 | 1:12:00 | 1:07:42 | 0:04:18 | smithi | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
pass | 1488020 | 2017-08-06 01:32:47 | 2017-08-06 01:34:18 | 2017-08-06 03:20:17 | 1:45:59 | 1:21:30 | 0:24:29 | smithi | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
pass | 1488021 | 2017-08-06 01:32:48 | 2017-08-06 01:34:17 | 2017-08-06 04:10:19 | 2:36:02 | 2:25:16 | 0:10:46 | smithi | master | ubuntu | 14.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml thrashosds-health.yaml} | 3 | |
pass | 1488022 | 2017-08-06 01:32:48 | 2017-08-06 01:34:17 | 2017-08-06 03:18:17 | 1:44:00 | 1:03:08 | 0:40:52 | smithi | master | ubuntu | 14.04 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml thrashosds-health.yaml} | 3 | |
pass | 1488023 | 2017-08-06 01:32:49 | 2017-08-06 01:35:58 | 2017-08-06 02:55:58 | 1:20:00 | 1:09:55 | 0:10:05 | smithi | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
pass | 1488024 | 2017-08-06 01:32:50 | 2017-08-06 01:35:58 | 2017-08-06 04:23:59 | 2:48:01 | 1:10:08 | 1:37:53 | smithi | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
pass | 1488025 | 2017-08-06 01:32:50 | 2017-08-06 01:35:57 | 2017-08-06 03:45:58 | 2:10:01 | 1:22:26 | 0:47:35 | smithi | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
fail | 1488026 | 2017-08-06 01:32:51 | 2017-08-06 01:36:10 | 2017-08-06 03:22:12 | 1:46:02 | 1:04:49 | 0:41:13 | smithi | master | ubuntu | 14.04 | upgrade:jewel-x/point-to-point-x/{distros/ubuntu_14.04.yaml point-to-point-upgrade.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v11.0.0.sh) on smithi051 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test-upgrade-v11.0.0.sh'
fail | 1488027 | 2017-08-06 01:32:52 | 2017-08-06 01:37:36 | 2017-08-06 02:41:34 | 1:03:58 | 0:49:24 | 0:14:34 | smithi | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 1488028 | 2017-08-06 01:32:52 | 2017-08-06 01:37:33 | 2017-08-06 03:05:33 | 1:28:00 | 1:24:15 | 0:03:45 | smithi | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
pass | 1488029 | 2017-08-06 01:32:53 | 2017-08-06 01:37:35 | 2017-08-06 05:35:38 | 3:58:03 | 2:28:37 | 1:29:26 | smithi | master | ubuntu | 16.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_latest.yaml thrashosds-health.yaml} | 3 | |
pass | 1488030 | 2017-08-06 01:32:54 | 2017-08-06 01:37:59 | 2017-08-06 03:00:00 | 1:22:01 | 1:12:26 | 0:09:35 | smithi | master | ubuntu | 16.04 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_latest.yaml thrashosds-health.yaml} | 3 | |
fail | 1488031 | 2017-08-06 01:32:55 | 2017-08-06 01:38:02 | 2017-08-06 02:20:02 | 0:42:00 | 0:13:32 | 0:28:28 | smithi | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: Command failed (workunit test cls/test_cls_rgw.sh) on smithi140 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'
pass | 1488032 | 2017-08-06 01:32:55 | 2017-08-06 01:38:20 | 2017-08-06 05:28:23 | 3:50:03 | 1:23:04 | 2:26:59 | smithi | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
pass | 1488033 | 2017-08-06 01:32:56 | 2017-08-06 01:38:24 | 2017-08-06 02:52:25 | 1:14:01 | 1:04:14 | 0:09:47 | smithi | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
fail | 1488034 | 2017-08-06 01:32:57 | 2017-08-06 01:38:49 | 2017-08-06 02:02:49 | 0:24:00 | 0:10:53 | 0:13:07 | smithi | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: Command failed on smithi001 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'