Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 4193505 2019-08-07 15:44:30 2019-08-07 15:46:18 2019-08-07 16:12:18 0:26:00 0:15:57 0:10:03 smithi master rhel 7.5 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
pass 4193506 2019-08-07 15:44:31 2019-08-07 15:46:19 2019-08-07 16:10:18 0:23:59 0:16:32 0:07:27 smithi master rhel 7.5 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
fail 4193507 2019-08-07 15:44:32 2019-08-07 15:47:34 2019-08-07 22:53:41 7:06:07 6:53:59 0:12:08 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi044 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10a16ea1e6b004fa7670ea4f5482c57191b9e26f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 4193508 2019-08-07 15:44:33 2019-08-07 15:47:53 2019-08-07 16:45:53 0:58:00 0:50:12 0:07:48 smithi master rhel 7.5 rados/standalone/{supported-random-distro$/{rhel_latest.yaml} workloads/scrub.yaml} 1
pass 4193509 2019-08-07 15:44:33 2019-08-07 15:48:05 2019-08-07 16:28:05 0:40:00 0:33:29 0:06:31 smithi master rhel 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
fail 4193510 2019-08-07 15:44:34 2019-08-07 15:48:05 2019-08-07 16:20:05 0:32:00 0:17:28 0:14:32 smithi master centos 7.4 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_latest.yaml} tasks/module_selftest.yaml} 2
Failure Reason:

Test failure: test_telemetry (tasks.mgr.test_module_selftest.TestModuleSelftest)

pass 4193511 2019-08-07 15:44:35 2019-08-07 15:48:14 2019-08-07 20:06:18 4:18:04 4:05:36 0:12:28 smithi master rhel 7.5 rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{rhel_latest.yaml}} 1
fail 4193512 2019-08-07 15:44:36 2019-08-07 15:49:56 2019-08-07 19:22:04 3:32:08 3:19:28 0:12:40 smithi master centos 7.4 rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/sync-many.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi007 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10a16ea1e6b004fa7670ea4f5482c57191b9e26f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 4193513 2019-08-07 15:44:37 2019-08-07 15:49:56 2019-08-07 16:09:55 0:19:59 0:13:44 0:06:15 smithi master rhel 7.5 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/radosbench_4K_seq_read.yaml} 1
Failure Reason:

Command failed on smithi177 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

fail 4193514 2019-08-07 15:44:38 2019-08-07 15:49:56 2019-08-07 16:29:56 0:40:00 0:33:55 0:06:05 smithi master rhel 7.5 rados/singleton/{all/max-pg-per-osd.from-replica.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
pass 4193515 2019-08-07 15:44:39 2019-08-07 15:50:03 2019-08-07 16:26:02 0:35:59 0:29:02 0:06:57 smithi master rhel 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
pass 4193516 2019-08-07 15:44:40 2019-08-07 15:50:08 2019-08-07 16:10:07 0:19:59 0:13:19 0:06:40 smithi master rhel 7.5 rados/singleton/{all/mon-auth-caps.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
pass 4193517 2019-08-07 15:44:41 2019-08-07 15:51:57 2019-08-07 19:52:00 4:00:03 3:48:26 0:11:37 smithi master rhel 7.5 rados/upgrade/luminous-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-mimic.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} supported-random-distro$/{rhel_latest.yaml} thrashosds-health.yaml} 3
fail 4193518 2019-08-07 15:44:42 2019-08-07 15:51:57 2019-08-07 20:02:01 4:10:04 4:03:56 0:06:08 smithi master rhel 7.5 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi202 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10a16ea1e6b004fa7670ea4f5482c57191b9e26f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 4193519 2019-08-07 15:44:43 2019-08-07 15:51:58 2019-08-07 16:41:57 0:49:59 0:42:29 0:07:30 smithi master rhel 7.5 rados/monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/sync.yaml workloads/rados_mon_osdmap_prune.yaml} 2
pass 4193520 2019-08-07 15:44:44 2019-08-07 15:52:03 2019-08-07 16:12:02 0:19:59 0:13:06 0:06:53 smithi master rhel 7.5 rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
pass 4193521 2019-08-07 15:44:45 2019-08-07 15:52:11 2019-08-07 16:18:10 0:25:59 0:16:13 0:09:46 smithi master rhel 7.5 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
pass 4193522 2019-08-07 15:44:46 2019-08-07 15:52:18 2019-08-07 16:28:18 0:36:00 0:19:52 0:16:08 smithi master rhel 7.5 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
fail 4193523 2019-08-07 15:44:46 2019-08-07 15:52:22 2019-08-07 16:28:22 0:36:00 0:28:39 0:07:21 smithi master rhel 7.5 rados/standalone/{supported-random-distro$/{rhel_latest.yaml} workloads/erasure-code.yaml} 1
Failure Reason:

Command failed (workunit test erasure-code/test-erasure-eio.sh) on smithi077 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10a16ea1e6b004fa7670ea4f5482c57191b9e26f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-eio.sh'

fail 4193524 2019-08-07 15:44:47 2019-08-07 15:53:59 2019-08-07 16:15:58 0:21:59 0:13:35 0:08:24 smithi master rhel 7.5 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/sample_radosbench.yaml} 1
Failure Reason:

Command failed on smithi158 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

pass 4193525 2019-08-07 15:44:48 2019-08-07 15:55:56 2019-08-07 16:17:55 0:21:59 0:13:56 0:08:03 smithi master rhel 7.5 rados/objectstore/{backends/filejournal.yaml supported-random-distro$/{rhel_latest.yaml}} 1
pass 4193526 2019-08-07 15:44:49 2019-08-07 15:55:56 2019-08-07 16:35:55 0:39:59 0:31:46 0:08:13 smithi master rhel 7.5 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
fail 4193527 2019-08-07 15:44:50 2019-08-07 15:55:57 2019-08-07 16:23:56 0:27:59 0:16:45 0:11:14 smithi master rhel 7.5 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/cosbench_64K_write.yaml} 1
Failure Reason:

Command failed on smithi023 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

pass 4193528 2019-08-07 15:44:51 2019-08-07 15:56:05 2019-08-07 16:30:05 0:34:00 0:19:04 0:14:56 smithi master rhel 7.5 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
fail 4193529 2019-08-07 15:44:52 2019-08-07 15:56:12 2019-08-07 16:18:11 0:21:59 0:15:09 0:06:50 smithi master rhel 7.5 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/fio_4K_rand_read.yaml} 1
Failure Reason:

Command failed on smithi121 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

fail 4193530 2019-08-07 15:44:53 2019-08-07 15:56:16 2019-08-07 19:32:19 3:36:03 3:24:26 0:11:37 smithi master rhel 7.5 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi200 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10a16ea1e6b004fa7670ea4f5482c57191b9e26f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 4193531 2019-08-07 15:44:54 2019-08-07 15:56:22 2019-08-07 16:30:21 0:33:59 0:25:50 0:08:09 smithi master rhel 7.5 rados/multimon/{clusters/21.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/mon_recovery.yaml} 3
fail 4193532 2019-08-07 15:44:55 2019-08-07 15:58:24 2019-08-07 22:28:29 6:30:05 6:19:39 0:10:26 smithi master rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi129 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10a16ea1e6b004fa7670ea4f5482c57191b9e26f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 4193533 2019-08-07 15:44:55 2019-08-07 15:58:24 2019-08-07 16:24:23 0:25:59 0:17:51 0:08:08 smithi master rhel 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
pass 4193534 2019-08-07 15:44:56 2019-08-07 15:58:24 2019-08-07 16:22:23 0:23:59 0:16:07 0:07:52 smithi master rhel 7.5 rados/singleton/{all/resolve_stuck_peering.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 2
pass 4193535 2019-08-07 15:44:57 2019-08-07 15:59:56 2019-08-07 18:31:58 2:32:02 2:23:37 0:08:25 smithi master rhel 7.5 rados/objectstore/{backends/filestore-idempotent.yaml supported-random-distro$/{rhel_latest.yaml}} 1
pass 4193536 2019-08-07 15:44:58 2019-08-07 15:59:56 2019-08-07 16:37:56 0:38:00 0:31:20 0:06:40 smithi master rhel 7.5 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
pass 4193537 2019-08-07 15:44:59 2019-08-07 15:59:57 2019-08-07 16:29:57 0:30:00 0:19:13 0:10:47 smithi master rhel 7.5 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
fail 4193538 2019-08-07 15:45:00 2019-08-07 16:00:10 2019-08-07 16:16:09 0:15:59 0:07:00 0:08:59 smithi master ubuntu 16.04 rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi138 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10a16ea1e6b004fa7670ea4f5482c57191b9e26f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

pass 4193539 2019-08-07 15:45:00 2019-08-07 16:00:11 2019-08-07 16:26:10 0:25:59 0:18:01 0:07:58 smithi master rhel 7.5 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{rhel_latest.yaml} tasks/failover.yaml} 2
pass 4193540 2019-08-07 15:45:01 2019-08-07 16:02:13 2019-08-07 16:42:13 0:40:00 0:32:58 0:07:02 smithi master rhel 7.5 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
pass 4193541 2019-08-07 15:45:02 2019-08-07 16:02:14 2019-08-07 16:42:13 0:39:59 0:33:02 0:06:57 smithi master rhel 7.5 rados/standalone/{supported-random-distro$/{rhel_latest.yaml} workloads/mon.yaml} 1
fail 4193542 2019-08-07 15:45:03 2019-08-07 16:02:14 2019-08-07 16:26:13 0:23:59 0:16:18 0:07:41 smithi master rhel 7.5 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/fio_4M_rand_rw.yaml} 1
Failure Reason:

Command failed on smithi107 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

pass 4193543 2019-08-07 15:45:04 2019-08-07 16:04:24 2019-08-07 16:24:23 0:19:59 0:14:41 0:05:18 smithi master rhel 7.5 rados/objectstore/{backends/fusestore.yaml supported-random-distro$/{rhel_latest.yaml}} 1
pass 4193544 2019-08-07 15:45:05 2019-08-07 16:04:24 2019-08-07 16:28:23 0:23:59 0:14:19 0:09:40 smithi master rhel 7.5 rados/multimon/{clusters/3.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/mon_clock_no_skews.yaml} 2
fail 4193545 2019-08-07 15:45:05 2019-08-07 16:04:24 2019-08-07 16:34:23 0:29:59 0:21:34 0:08:25 smithi master rhel 7.5 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_latest.yaml} tasks/module_selftest.yaml} 2
Failure Reason:

Test failure: test_telemetry (tasks.mgr.test_module_selftest.TestModuleSelftest)

fail 4193546 2019-08-07 15:45:06 2019-08-07 16:04:24 2019-08-07 16:26:23 0:21:59 0:15:17 0:06:42 smithi master rhel 7.5 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/radosbench_4K_rand_read.yaml} 1
Failure Reason:

Command failed on smithi045 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

pass 4193547 2019-08-07 15:45:07 2019-08-07 16:04:45 2019-08-07 16:50:45 0:46:00 0:33:42 0:12:18 smithi master rhel 7.5 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} 3
fail 4193548 2019-08-07 15:45:08 2019-08-07 16:04:51 2019-08-07 20:04:54 4:00:03 3:46:55 0:13:08 smithi master ubuntu 16.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-stupid.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi046 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10a16ea1e6b004fa7670ea4f5482c57191b9e26f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 4193549 2019-08-07 15:45:09 2019-08-07 16:06:26 2019-08-07 16:28:25 0:21:59 0:15:18 0:06:41 smithi master rhel 7.5 rados/singleton-nomsgr/{all/librados_hello_world.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed (workunit test rados/test_librados_build.sh) on smithi142 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10a16ea1e6b004fa7670ea4f5482c57191b9e26f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_librados_build.sh'

pass 4193550 2019-08-07 15:45:10 2019-08-07 16:06:26 2019-08-07 16:32:25 0:25:59 0:17:54 0:08:05 smithi master rhel 7.5 rados/singleton/{all/divergent_priors.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
fail 4193551 2019-08-07 15:45:10 2019-08-07 16:07:46 2019-08-07 16:33:45 0:25:59 0:16:12 0:09:47 smithi master ubuntu 16.04 rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/osd.yaml} 1
Failure Reason:

Command failed (workunit test osd/osd-backfill-prio.sh) on smithi162 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10a16ea1e6b004fa7670ea4f5482c57191b9e26f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-backfill-prio.sh'

pass 4193552 2019-08-07 15:45:11 2019-08-07 16:08:03 2019-08-07 17:20:03 1:12:00 1:04:40 0:07:20 smithi master rhel 7.5 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2
fail 4193553 2019-08-07 15:45:12 2019-08-07 16:10:16 2019-08-07 19:34:18 3:24:02 3:18:28 0:05:34 smithi master rhel 7.5 rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/sync.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi191 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10a16ea1e6b004fa7670ea4f5482c57191b9e26f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
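A note on the recurring failures above: exit status 124 comes from the `timeout 3h` / `timeout 6h` wrappers in the commands (GNU `timeout` exits 124 when the wrapped command exceeds its limit), so those `rados/test.sh` jobs timed out rather than crashed. Each data row in this listing begins with the job status followed by the numeric job ID, which makes the outcomes easy to tally mechanically. A minimal sketch (not part of teuthology; the embedded sample lines are abbreviated stand-ins for rows like the ones above):

```python
# Tally pass/fail rows from a teuthology results listing.
# Data rows start with a status keyword ("pass"/"fail"/"dead")
# followed by a numeric job ID; other lines (failure reasons,
# blank lines) are ignored.
from collections import Counter

sample = """\
pass 4193505 2019-08-07 15:44:30 ...
fail 4193507 2019-08-07 15:44:32 ...
Failure Reason:
pass 4193508 2019-08-07 15:44:33 ...
"""

def tally(text):
    counts = Counter()
    for line in text.splitlines():
        fields = line.split()
        # Only count lines shaped like "<status> <job-id> ...".
        if len(fields) >= 2 and fields[0] in ("pass", "fail", "dead") and fields[1].isdigit():
            counts[fields[0]] += 1
    return counts

print(tally(sample))  # Counter({'pass': 2, 'fail': 1})
```

Run against the full listing, this separates the timeout-style failures from genuine test failures only at the status level; distinguishing them further requires inspecting each "Failure Reason" block.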