Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 931284 2017-03-22 03:55:27 2017-03-22 03:57:08 2017-03-22 05:39:10 1:42:02 1:38:16 0:03:46 smithi master rados/upgrade/jewel-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-luminous.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml}} 3
Failure Reason:

'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
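For context, 'wait_until_healthy' is the teuthology gate that polls cluster health between upgrade steps and gives up after a fixed number of tries. A minimal standalone sketch of that kind of loop (hypothetical helper, not the actual teuthology task; 150 tries at a 6-second interval reproduces the 900-second budget in the message):

    import subprocess
    import time

    def wait_until_healthy(tries=150, interval=6):
        """Poll 'ceph health' until HEALTH_OK; 150 tries * 6 s = 900 s."""
        for _ in range(tries):
            out = subprocess.run(['ceph', 'health'],
                                 capture_output=True, text=True).stdout
            if out.startswith('HEALTH_OK'):
                return
            time.sleep(interval)
        raise RuntimeError("'wait_until_healthy' reached maximum tries "
                           "(%d) after waiting for %d seconds"
                           % (tries, tries * interval))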

fail 931285 2017-03-22 03:55:28 2017-03-22 03:57:09 2017-03-22 04:13:08 0:15:59 0:12:03 0:03:56 smithi master rados/verify/{1thrash/default.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} 2
Failure Reason:

Command failed on smithi135 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph quorum_status'
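Exit status 124 is what GNU timeout(1) returns when it has to kill the wrapped command, so this is not an error from ceph itself: quorum_status simply never returned within the 120-second limit. The later pg dump failures with status 124 in this run are the same symptom. A hedged, standalone sketch of the check:

    import subprocess

    # GNU timeout exits 124 when the wrapped command exceeds its limit.
    proc = subprocess.run(['timeout', '120',
                           'ceph', '--cluster', 'ceph', 'quorum_status'],
                          capture_output=True, text=True)
    if proc.returncode == 124:
        print('monitors did not answer within 120 s (no quorum formed)')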

pass 931286 2017-03-22 03:55:29 2017-03-22 03:57:09 2017-03-22 04:41:09 0:44:00 0:42:23 0:01:37 smithi master centos rados/verify/{1thrash/none.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
fail 931287 2017-03-22 03:55:30 2017-03-22 03:57:09 2017-03-22 04:21:08 0:23:59 0:18:52 0:05:07 smithi master centos rados/verify/{1thrash/none.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} 2
Failure Reason:

saw valgrind issues
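The valgrind validater runs the daemons under valgrind with XML reports and fails the job if any error records appear in them. A simplified sketch of that kind of post-run scan (hypothetical file name and layout; the real check is inside teuthology):

    import xml.etree.ElementTree as ET

    # Daemons are launched roughly as:
    #   valgrind --xml=yes --xml-file=mon.a.xml ceph-mon -f -i a ...
    # Any <error> element in the report means "saw valgrind issues".
    errors = ET.parse('mon.a.xml').getroot().findall('error')
    if errors:
        kinds = sorted({e.findtext('kind') for e in errors})
        raise RuntimeError('saw valgrind issues: ' + ', '.join(kinds))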

fail 931288 2017-03-22 03:55:31 2017-03-22 03:57:09 2017-03-22 04:23:09 0:26:00 0:21:31 0:04:29 smithi master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi059 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=61889115b412ac1873c8866cec340419fe3a7f72 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
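The workunit task clones the qa suite onto the client and runs the named script under timeout 3h; status 1, as here, means the test script failed on its own, while status 124 (see the next job) means the 3-hour budget expired. A reduced sketch of the invocation (paths and variables copied from the log line; otherwise hypothetical):

    import os
    import subprocess

    env = dict(os.environ,
               CEPH_CLI_TEST_DUP_COMMAND='1',
               CEPH_ARGS='--cluster ceph',
               CEPH_ID='0')
    rc = subprocess.call(
        ['timeout', '3h',
         '/home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'],
        env=env)
    # rc == 1   -> the test script itself failed
    # rc == 124 -> timeout(1) killed it after 3 hours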

fail 931289 2017-03-22 03:55:31 2017-03-22 03:57:10 2017-03-22 08:19:16 4:22:06 4:15:53 0:06:13 smithi master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi163 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=61889115b412ac1873c8866cec340419fe3a7f72 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 931290 2017-03-22 03:55:32 2017-03-22 03:57:11 2017-03-22 04:11:11 0:14:00 0:09:45 0:04:15 smithi master rados/singleton/{all/mon-seesaw.yaml fs/xfs.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml} 1
Failure Reason:

Command failed on smithi033 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

fail 931291 2017-03-22 03:55:33 2017-03-22 03:57:36 2017-03-22 04:21:36 0:24:00 0:19:04 0:04:56 smithi master rados/monthrash/{ceph/ceph.yaml clusters/3-mons.yaml fs/xfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml thrashers/one.yaml workloads/pool-create-delete.yaml} 2
Failure Reason:

Command failed on smithi189 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i x'
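daemon-helper runs a daemon in the foreground and tears it down on request, handing the daemon's exit status back to teuthology, so a status-1 failure here means ceph-mgr itself died rather than the wrapper misbehaving. The same signature repeats in several jobs below, and once for ceph-osd. A rough model of the wrapper (a simplification, not the real daemon-helper script):

    import select
    import subprocess
    import sys

    proc = subprocess.Popen(['ceph-mgr', '-f', '--cluster', 'ceph', '-i', 'x'])
    while proc.poll() is None:
        # a 'kill' request (or EOF) on stdin tells us to stop the daemon
        ready, _, _ = select.select([sys.stdin], [], [], 1.0)
        if ready:
            proc.terminate()
            break
    sys.exit(proc.wait())   # 1 here means ceph-mgr exited on its own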

fail 931292 2017-03-22 03:55:34 2017-03-22 03:59:04 2017-03-22 04:21:04 0:22:00 0:17:36 0:04:24 smithi master rados/singleton/{all/mon-thrasher.yaml fs/xfs.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml} 1
Failure Reason:

Command failed on smithi057 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i x'

fail 931293 2017-03-22 03:55:35 2017-03-22 03:59:04 2017-03-22 04:15:04 0:16:00 0:09:28 0:06:32 smithi master rados/monthrash/{ceph/ceph.yaml clusters/9-mons.yaml fs/xfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml thrashers/sync-many.yaml workloads/rados_5925.yaml} 2
Failure Reason:

Command failed on smithi190 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i x'

fail 931294 2017-03-22 03:55:36 2017-03-22 03:59:04 2017-03-22 07:15:09 3:16:05 3:09:42 0:06:23 smithi master rados/monthrash/{ceph/ceph.yaml clusters/3-mons.yaml fs/xfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml thrashers/sync.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi134 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=61889115b412ac1873c8866cec340419fe3a7f72 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 931295 2017-03-22 03:55:36 2017-03-22 03:59:04 2017-03-22 04:17:04 0:18:00 0:13:00 0:05:00 smithi master rados/monthrash/{ceph/ceph.yaml clusters/9-mons.yaml fs/xfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

Command failed (workunit test mon/caps.sh) on smithi175 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=61889115b412ac1873c8866cec340419fe3a7f72 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/caps.sh'

fail 931296 2017-03-22 03:55:37 2017-03-22 03:59:08 2017-03-22 04:47:09 0:48:01 0:43:37 0:04:24 smithi master rados/monthrash/{ceph/ceph.yaml clusters/3-mons.yaml fs/xfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml thrashers/many.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

Command failed on smithi034 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

fail 931297 2017-03-22 03:55:38 2017-03-22 03:59:09 2017-03-22 04:23:09 0:24:00 0:19:33 0:04:27 smithi master rados/monthrash/{ceph/ceph.yaml clusters/9-mons.yaml fs/xfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml thrashers/many.yaml workloads/pool-create-delete.yaml} 2
Failure Reason:

Command failed on smithi113 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i x'

fail 931298 2017-03-22 03:55:39 2017-03-22 03:59:10 2017-03-22 04:13:10 0:14:00 0:09:17 0:04:43 smithi master rados/monthrash/{ceph/ceph.yaml clusters/3-mons.yaml fs/xfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml thrashers/one.yaml workloads/rados_5925.yaml} 2
Failure Reason:

Command failed on smithi156 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

pass 931299 2017-03-22 03:55:39 2017-03-22 03:59:15 2017-03-22 04:13:14 0:13:59 0:11:30 0:02:29 smithi master rados/singleton/{all/osd-recovery-incomplete.yaml fs/xfs.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} 1
fail 931300 2017-03-22 03:55:40 2017-03-22 03:59:16 2017-03-22 04:15:15 0:15:59 0:08:07 0:07:52 smithi master rados/singleton/{all/rebuild-mondb.yaml fs/xfs.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml} 1
Failure Reason:

Command failed on smithi066 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

fail 931301 2017-03-22 03:55:41 2017-03-22 04:00:58 2017-03-22 04:38:59 0:38:01 0:27:07 0:10:54 smithi master ubuntu 16.04 rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore.yaml rados.yaml supported/ubuntu_latest.yaml thrashers/mapgap.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:

Command failed on smithi195 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 5'

pass 931302 2017-03-22 03:55:41 2017-03-22 04:00:59 2017-03-22 04:10:58 0:09:59 0:07:25 0:02:34 smithi master rados/singleton/{all/dump-stuck.yaml fs/xfs.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml} 1
fail 931303 2017-03-22 03:55:42 2017-03-22 04:00:58 2017-03-22 04:14:58 0:14:00 0:09:19 0:04:41 smithi master rados/singleton/{all/mon-seesaw.yaml fs/xfs.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml} 1
Failure Reason:

Command failed on smithi009 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

fail 931304 2017-03-22 03:55:43 2017-03-22 04:01:10 2017-03-22 04:23:10 0:22:00 0:17:27 0:04:33 smithi master rados/singleton/{all/mon-thrasher.yaml fs/xfs.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml} 1
Failure Reason:

Command failed on smithi041 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i x'

fail 931305 2017-03-22 03:55:44 2017-03-22 04:01:11 2017-03-22 04:23:11 0:22:00 0:15:13 0:06:47 smithi master rados/multimon/{clusters/9.yaml fs/xfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml tasks/mon_recovery.yaml} 3
Failure Reason:

Command failed on smithi051 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i x'

fail 931306 2017-03-22 03:55:44 2017-03-22 04:01:12 2017-03-22 04:19:12 0:18:00 0:10:39 0:07:21 smithi master rados/multimon/{clusters/6.yaml fs/xfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml tasks/mon_recovery.yaml} 2
Failure Reason:

Command failed on smithi078 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph quorum_status'

fail 931307 2017-03-22 03:55:45 2017-03-22 04:01:14 2017-03-22 04:17:13 0:15:59 0:10:36 0:05:23 smithi master rados/basic/{clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml tasks/repair_test.yaml} 2
fail 931308 2017-03-22 03:55:46 2017-03-22 04:02:48 2017-03-22 11:08:58 7:06:10 7:03:23 0:02:47 smithi master rados/objectstore/objectstore.yaml 1
Failure Reason:

SELinux denials found on ubuntu@smithi004.front.sepia.ceph.com:

type=AVC msg=audit(1490160662.183:3951): avc: denied { setattr } for pid=28361 comm="logrotate" name="logrotate.status.tmp" dev="sda1" ino=27265498 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file
type=AVC msg=audit(1490160662.157:3949): avc: denied { read } for pid=28361 comm="logrotate" name="logrotate.status" dev="sda1" ino=27265481 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file
type=AVC msg=audit(1490160662.183:3950): avc: denied { write } for pid=28361 comm="logrotate" path="/var/lib/logrotate/logrotate.status.tmp" dev="sda1" ino=27265498 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file
type=AVC msg=audit(1490160662.157:3949): avc: denied { open } for pid=28361 comm="logrotate" path="/var/lib/logrotate/logrotate.status" dev="sda1" ino=27265481 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file
type=AVC msg=audit(1490160662.232:3952): avc: denied { rename } for pid=28361 comm="logrotate" name="logrotate.status.tmp" dev="sda1" ino=27265498 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file
type=AVC msg=audit(1490160662.232:3952): avc: denied { unlink } for pid=28361 comm="logrotate" name="logrotate.status" dev="sda1" ino=27265481 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file
type=AVC msg=audit(1490160662.183:3950): avc: denied { create } for pid=28361 comm="logrotate" name="logrotate.status.tmp" scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file
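Every record above is logrotate being denied access to files carrying the unlabeled_t type, i.e. /var/lib/logrotate lost its SELinux labels rather than logrotate doing anything unusual. A quick way to summarize a batch like this (assuming the records are saved one per line in avc.log; hypothetical helper):

    import re

    pattern = re.compile(r'denied\s+\{ (\w+) \}.*comm="(\w+)".*tcontext=(\S+)')
    with open('avc.log') as fh:
        for line in fh:
            m = pattern.search(line)
            if m:
                perm, comm, tcontext = m.groups()
                print(f'{comm}: {perm} denied on {tcontext}')
    # here: logrotate denied setattr/read/write/open/rename/unlink
    # on system_u:object_r:unlabeled_t:s0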