Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Nodes | Status
---|---|---|---|---|---|---|---|---|---|---|---
2017-08-29 07:55:16 | 2017-08-29 09:33:44 | 2017-08-29 09:55:43 | 0:21:59 | 0:08:37 | 0:13:22 | ovh | master | | | 3 | fail
Description: rbd/librbd/{cache/none.yaml clusters/{fixed-3.yaml openstack.yaml} config/copy-on-read.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml pool/replicated-data-pool.yaml workloads/rbd_fio.yaml}
Sentry event: http://sentry.ceph.com/sepia/teuthology/?q=28029603a47f4c59b442de84780cc3bd
Failure Reason: Command failed on ovh001 with status 100:

    sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-82-g11d5706-1xenial ceph-mds=13.0.0-82-g11d5706-1xenial ceph-mgr=13.0.0-82-g11d5706-1xenial ceph-common=13.0.0-82-g11d5706-1xenial ceph-fuse=13.0.0-82-g11d5706-1xenial ceph-test=13.0.0-82-g11d5706-1xenial radosgw=13.0.0-82-g11d5706-1xenial python-ceph=13.0.0-82-g11d5706-1xenial libcephfs2=13.0.0-82-g11d5706-1xenial libcephfs-dev=13.0.0-82-g11d5706-1xenial libcephfs-java=13.0.0-82-g11d5706-1xenial libcephfs-jni=13.0.0-82-g11d5706-1xenial librados2=13.0.0-82-g11d5706-1xenial librbd1=13.0.0-82-g11d5706-1xenial rbd-fuse=13.0.0-82-g11d5706-1xenial