User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
trociny | 2017-06-11 06:01:33 | 2017-06-11 06:01:51 | 2017-06-11 18:08:29 | 12:06:38 | upgrade | wip-mgolub-testing | smithi | f68613d | 1 | 8 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
---|---|---|---|---|---|---|---|---|---|---|---|---|---
fail | 1278002 | 2017-06-11 06:01:37 | 2017-06-11 06:01:38 | 2017-06-11 06:21:38 | 0:20:00 | 0:03:09 | 0:16:51 | smithi | master | | | upgrade/client-upgrade/hammer-client-x/basic/{0-cluster/start.yaml 1-install/hammer-client-x.yaml 2-workload/rbd_api_tests.yaml} | 3

Failure Reason:
Command failed on smithi109 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph-mds=0.94.10-1xenial rbd-fuse=0.94.10-1xenial librbd1=0.94.10-1xenial ceph-fuse=0.94.10-1xenial python-ceph=0.94.10-1xenial ceph-common=0.94.10-1xenial libcephfs-java=0.94.10-1xenial ceph=0.94.10-1xenial libcephfs-jni=0.94.10-1xenial ceph-test=0.94.10-1xenial radosgw=0.94.10-1xenial librados2=0.94.10-1xenial'
fail | 1278003 | 2017-06-11 06:01:37 | 2017-06-11 06:01:51 | 2017-06-11 06:39:51 | 0:38:00 | 0:32:51 | 0:05:09 | smithi | master | centos | 7.3 | upgrade/hammer-jewel-x/parallel/{0-cluster/start.yaml 1-hammer-jewel-install/hammer-jewel.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 3.5-finish.yaml 4-jewel.yaml 5-hammer-jewel-x-upgrade/hammer-jewel-x.yaml 6-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 7-upgrade-sequence/upgrade-all.yaml 8-kraken.yaml 9-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/centos_latest.yaml} | 3

Failure Reason:
'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
dead | 1278004 | 2017-06-11 06:01:38 | 2017-06-11 06:03:16 | 2017-06-11 18:08:29 | 12:05:13 | | | smithi | master | centos | 7.3 | upgrade/jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3
fail | 1278005 | 2017-06-11 06:01:39 | 2017-06-11 06:03:18 | 2017-06-11 09:59:24 | 3:56:06 | 3:48:38 | 0:07:28 | smithi | master | centos | 7.3 | upgrade/jewel-x/point-to-point-x/{distros/centos_7.3.yaml point-to-point-upgrade.yaml} | 3

Failure Reason:
Command failed (workunit test rados/test-upgrade-v11.0.0.sh) on smithi023 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test-upgrade-v11.0.0.sh'
pass | 1278006 | 2017-06-11 06:01:39 | 2017-06-11 06:03:24 | 2017-06-11 08:19:26 | 2:16:02 | 2:00:45 | 0:15:17 | smithi | master | centos | 7.3 | upgrade/jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_latest.yaml} | 3
fail | 1278007 | 2017-06-11 06:01:40 | 2017-06-11 06:03:39 | 2017-06-11 08:39:43 | 2:36:04 | 0:37:57 | 1:58:07 | smithi | master | centos | 7.3 | upgrade/hammer-jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-hammer-install-and-upgrade-to-jewel/hammer-to-jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_latest.yaml} | 3

Failure Reason:
'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 1278008 | 2017-06-11 06:01:40 | 2017-06-11 06:03:40 | 2017-06-11 08:53:43 | 2:50:03 | 0:58:22 | 1:51:41 | smithi | master | centos | 7.3 | upgrade/jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_latest.yaml} | 3

Failure Reason:
SELinux denials found on ubuntu@smithi026.front.sepia.ceph.com: ['type=AVC msg=audit(1497169730.256:34946): avc: denied { dac_read_search } for pid=1768 comm="master" capability=2 scontext=system_u:system_r:postfix_master_t:s0 tcontext=system_u:system_r:postfix_master_t:s0 tclass=capability permissive=1', 'type=AVC msg=audit(1497171001.213:35544): avc: denied { dac_read_search } for pid=7900 comm="unix_chkpwd" capability=2 scontext=system_u:system_r:chkpwd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:chkpwd_t:s0-s0:c0.c1023 tclass=capability permissive=1', 'type=AVC msg=audit(1497169309.923:33045): avc: denied { dac_read_search } for pid=1768 comm="master" capability=2 scontext=system_u:system_r:postfix_master_t:s0 tcontext=system_u:system_r:postfix_master_t:s0 tclass=capability permissive=1', 'type=AVC msg=audit(1497170630.667:35076): avc: denied { dac_read_search } for pid=1768 comm="master" capability=2 scontext=system_u:system_r:postfix_master_t:s0 tcontext=system_u:system_r:postfix_master_t:s0 tclass=capability permissive=1', 'type=AVC msg=audit(1497169501.599:34846): avc: denied { dac_read_search } for pid=3670 comm="unix_chkpwd" capability=2 scontext=system_u:system_r:chkpwd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:chkpwd_t:s0-s0:c0.c1023 tclass=capability permissive=1', 'type=AVC msg=audit(1497169682.121:34945): avc: denied { dac_read_search } for pid=664 comm="systemd-logind" capability=2 scontext=system_u:system_r:systemd_logind_t:s0 tcontext=system_u:system_r:systemd_logind_t:s0 tclass=capability permissive=1', 'type=AVC msg=audit(1497169681.982:34937): avc: denied { dac_read_search } for pid=4541 comm="unix_chkpwd" capability=2 scontext=system_u:system_r:chkpwd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:chkpwd_t:s0-s0:c0.c1023 tclass=capability permissive=1', 'type=AVC msg=audit(1497169501.972:34854): avc: denied { dac_read_search } for pid=664 comm="systemd-logind" capability=2 scontext=system_u:system_r:systemd_logind_t:s0 tcontext=system_u:system_r:systemd_logind_t:s0 tclass=capability permissive=1']
fail | 1278009 | 2017-06-11 06:01:41 | 2017-06-11 06:03:46 | 2017-06-11 06:43:46 | 0:40:00 | 0:31:13 | 0:08:47 | smithi | master | ubuntu | 14.04 | upgrade/jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3

Failure Reason:
Command failed on smithi073 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph osd set-require-min-compat-client luminous'"
fail | 1278010 | 2017-06-11 06:01:42 | 2017-06-11 06:03:51 | 2017-06-11 08:01:53 | 1:58:02 | 1:19:22 | 0:38:40 | smithi | master | ubuntu | 16.04 | upgrade/jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3

Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2017-06-11T06:41:44.067982+00:00 smithi103 ceph-create-keys[12730]: INFO:ceph-create-keys:ceph-mon admin socket not ready yet. ' in syslog
fail | 1278011 | 2017-06-11 06:01:42 | 2017-06-11 06:03:51 | 2017-06-11 06:47:51 | 0:44:00 | 0:32:12 | 0:11:48 | smithi | master | | | upgrade/hammer-jewel-x/tiering/{0-cluster/start.yaml 1-install-hammer-and-upgrade-to-jewel/hammer-to-jewel.yaml 2-setup-cache-tiering/{0-create-base-tier/create-ec-pool.yaml 1-create-cache-tier.yaml} 3-upgrade.yaml} | 3

Failure Reason:
'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds