Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 4009277 2019-06-06 21:41:42 2019-06-06 21:42:26 2019-06-06 22:22:26 0:40:00 0:07:49 0:32:11 smithi master centos 7.4 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported/centos_latest.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:

Command failed on smithi153 with status 1: 'sudo yum -y install python34-cephfs'

fail 4009278 2019-06-06 21:41:43 2019-06-06 21:42:28 2019-06-07 01:24:31 3:42:03 2:08:15 1:33:48 smithi master rados/upgrade/jewel-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-luminous.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} thrashosds-health.yaml} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi008 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

fail 4009279 2019-06-06 21:41:44 2019-06-06 21:42:33 2019-06-07 01:00:46 3:18:13 3:08:00 0:10:13 smithi master rados/mgr/{clusters/2-node-mgr.yaml debug/mgr.yaml objectstore/bluestore-comp.yaml tasks/workunits.yaml} 2
Failure Reason:

Command failed (workunit test mgr/test_localpool.sh) on smithi025 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bb561855173f8aa62c2ddcb580a927455a578973 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mgr/test_localpool.sh'

fail 4009280 2019-06-06 21:41:44 2019-06-06 21:42:45 2019-06-06 22:02:45 0:20:00 0:13:19 0:06:41 smithi master rados/singleton/{all/recovery-preemption.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml} 1
Failure Reason:

Command failed on smithi190 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'egrep \'"\'"\'(defer backfill|defer recovery)\'"\'"\' /var/log/ceph/ceph-osd.*.log\''

fail 4009281 2019-06-06 21:41:45 2019-06-06 21:44:05 2019-06-06 22:36:05 0:52:00 0:11:39 0:40:21 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

Command failed on smithi193 with status 1: 'sudo yum -y install python34-cephfs'

fail 4009282 2019-06-06 21:41:46 2019-06-06 21:44:33 2019-06-06 21:58:32 0:13:59 0:06:37 0:07:22 smithi master rados/singleton/{all/rest-api.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml} 1
Failure Reason:

Command failed (workunit test rest/test.py) on smithi152 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bb561855173f8aa62c2ddcb580a927455a578973 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rest/test.py'

fail 4009283 2019-06-06 21:41:47 2019-06-06 21:44:45 2019-06-06 21:58:44 0:13:59 0:07:13 0:06:46 smithi master rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi156 with status 100: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bb561855173f8aa62c2ddcb580a927455a578973 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

pass 4009284 2019-06-06 21:41:47 2019-06-06 21:44:48 2019-06-07 04:08:54 6:24:06 6:15:50 0:08:16 smithi master rados/objectstore/filestore-idempotent-aio-journal.yaml 1
pass 4009285 2019-06-06 21:41:48 2019-06-06 21:56:41 2019-06-07 04:38:48 6:42:07 6:36:39 0:05:28 smithi master rados/objectstore/filestore-idempotent.yaml 1
fail 4009286 2019-06-06 21:41:49 2019-06-06 21:57:01 2019-06-06 22:29:01 0:32:00 0:07:39 0:24:21 smithi master centos 7.4 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs-balancer-upmap.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported/centos_latest.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:

Command failed on smithi098 with status 1: 'sudo yum -y install python34-cephfs'

fail 4009287 2019-06-06 21:41:50 2019-06-06 21:58:27 2019-06-06 22:18:26 0:19:59 0:07:28 0:12:31 smithi master rados/rest/rest_test.yaml 2
Failure Reason:

Command failed (workunit test rest/test.py) on smithi193 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bb561855173f8aa62c2ddcb580a927455a578973 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rest/test.py'

fail 4009288 2019-06-06 21:41:50 2019-06-06 21:58:33 2019-06-06 22:52:33 0:54:00 0:11:44 0:42:16 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} 2
Failure Reason:

Command failed on smithi143 with status 1: 'sudo yum -y install python34-cephfs'

pass 4009289 2019-06-06 21:41:51 2019-06-06 21:58:45 2019-06-06 22:50:45 0:52:00 0:09:55 0:42:05 smithi master rados/mgr/{clusters/2-node-mgr.yaml debug/mgr.yaml objectstore/filestore-xfs.yaml tasks/module_selftest.yaml} 2
fail 4009290 2019-06-06 21:41:52 2019-06-06 21:58:45 2019-06-07 01:16:48 3:18:03 3:07:47 0:10:16 smithi master rados/mgr/{clusters/2-node-mgr.yaml debug/mgr.yaml objectstore/bluestore-bitmap.yaml tasks/workunits.yaml} 2
Failure Reason:

Command failed (workunit test mgr/test_localpool.sh) on smithi190 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bb561855173f8aa62c2ddcb580a927455a578973 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mgr/test_localpool.sh'

pass 4009291 2019-06-06 21:41:53 2019-06-06 21:58:46 2019-06-06 22:24:45 0:25:59 0:08:38 0:17:21 smithi master rados/monthrash/{ceph.yaml clusters/3-mons.yaml d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/many.yaml workloads/rados_5925.yaml} 2
fail 4009292 2019-06-06 21:41:53 2019-06-06 21:58:49 2019-06-06 22:12:48 0:13:59 0:07:15 0:06:44 smithi master rados/objectstore/objectstore.yaml 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\''

fail 4009293 2019-06-06 21:41:54 2019-06-06 22:01:06 2019-06-06 22:19:05 0:17:59 0:11:07 0:06:52 smithi master centos rados/singleton-nomsgr/{all/valgrind-leaks.yaml rados.yaml} 1
Failure Reason:

Command failed on smithi202 with status 1: 'sudo yum -y install python34-cephfs'