Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 4221370 2019-08-16 11:16:52 2019-08-16 11:17:35 2019-08-16 11:47:34 0:29:59 0:21:09 0:08:50 smithi master ubuntu 18.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_api_tests.yaml} 2
fail 4221371 2019-08-16 11:16:53 2019-08-16 11:17:47 2019-08-16 11:39:46 0:21:59 0:14:30 0:07:29 smithi master rhel 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{rhel_7.yaml} tasks/progress.yaml} 2
Failure Reason:

Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress)

fail 4221372 2019-08-16 11:16:53 2019-08-16 11:17:51 2019-08-16 11:35:51 0:18:00 0:10:36 0:07:24 smithi master ubuntu 18.04 rados/singleton/{all/rebuild-mondb.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

+ sudo adjust-ulimits ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --journal-path /var/lib/ceph/osd/ceph-0/journal --no-mon-config --op update-mon-db --mon-store-path /home/ubuntu/cephtest/mon-store
ceph-objectstore-tool: /build/ceph-15.0.0-3991-g1e3ae0e/src/tools/rebuild_mondb.cc:290: int update_osdmap(ObjectStore&, OSDSuperblock&, MonitorDBStore&): Assertion `0' failed.
*** Caught signal (Aborted) **
 in thread 7fd56a2edc00 thread_name:ceph-objectstor
 ceph version 15.0.0-3991-g1e3ae0e (1e3ae0ea9ca5702249d4c6d65f8df811fdbd1b2d) octopus (dev)
 1: (()+0x12890) [0x7fd55f892890]
 2: (gsignal()+0xc7) [0x7fd55e986e97]
 3: (abort()+0x141) [0x7fd55e988801]
 4: (()+0x3039a) [0x7fd55e97839a]
 5: (()+0x30412) [0x7fd55e978412]
 6: (update_mon_db(ObjectStore&, OSDSuperblock&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x2be2) [0x5622d03bc0f2]
 7: (main()+0x4f9c) [0x5622d036427c]
 8: (__libc_start_main()+0xe7) [0x7fd55e969b97]
 9: (_start()+0x2a) [0x5622d0370bba]

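Context (not from this job's logs): the rebuild-mondb workunit exercises the documented "recover the monitor store from the OSDs" procedure, and the assertion above fires in update_osdmap() during its update-mon-db step. A minimal single-node sketch of that procedure, with the scratch path and keyring location purely illustrative:

    ms=/tmp/mon-store                      # illustrative scratch mon store path
    mkdir -p $ms
    # gather cluster maps from each local OSD
    for osd in /var/lib/ceph/osd/ceph-*; do
        sudo ceph-objectstore-tool --data-path "$osd" --no-mon-config \
            --op update-mon-db --mon-store-path "$ms"
    done
    # rebuild the monitor database from the gathered maps
    ceph-monstore-tool "$ms" rebuild -- --keyring /path/to/admin.keyring
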
fail 4221373 2019-08-16 11:16:54 2019-08-16 11:17:52 2019-08-16 11:35:51 0:17:59 0:11:04 0:06:55 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op rmattr 25 --op delete 50 --pool unique_pool_0'

dead 4221374 2019-08-16 11:16:55 2019-08-16 11:18:03 2019-08-16 15:34:07 4:16:04 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 4
pass 4221375 2019-08-16 11:16:56 2019-08-16 11:18:05 2019-08-16 11:46:04 0:27:59 0:21:21 0:06:38 smithi master centos 7.6 rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_read_write.yaml} 1
fail 4221376 2019-08-16 11:16:57 2019-08-16 11:18:08 2019-08-16 11:36:07 0:17:59 0:07:42 0:10:17 smithi master ubuntu 18.04 rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi184 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1e3ae0ea9ca5702249d4c6d65f8df811fdbd1b2d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

fail 4221377 2019-08-16 11:16:58 2019-08-16 11:18:09 2019-08-16 11:48:07 0:29:58 0:20:41 0:09:17 smithi master centos 7.6 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_write.yaml} 1
Failure Reason:

Command failed on smithi100 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

pass 4221378 2019-08-16 11:16:59 2019-08-16 11:18:11 2019-08-16 12:12:11 0:54:00 0:47:14 0:06:46 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
fail 4221379 2019-08-16 11:17:00 2019-08-16 11:18:14 2019-08-16 11:46:14 0:28:00 0:21:14 0:06:46 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_create_get_update_delete_w_tenant (tasks.mgr.dashboard.test_rgw.RgwBucketTest)

fail 4221380 2019-08-16 11:17:01 2019-08-16 11:18:28 2019-08-16 12:28:28 1:10:00 0:59:38 0:10:22 smithi master rhel 7.6 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} 2
Failure Reason:

not all PGs are active or peered 15 seconds after marking out OSDs

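Context (not from this job's logs): this message comes from the thrasher's health wait after it marks OSDs out. When triaging it, the usual starting point is the PG state summary, e.g.:

    ceph pg stat
    # rough filter for PGs that are neither active nor peered
    ceph pg dump pgs_brief | grep -Ev 'active|peered'
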
pass 4221381 2019-08-16 11:17:02 2019-08-16 11:19:27 2019-08-16 11:41:26 0:21:59 0:13:41 0:08:18 smithi master centos 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_7.yaml} tasks/orchestrator_cli.yaml} 2
fail 4221382 2019-08-16 11:17:02 2019-08-16 11:19:52 2019-08-16 12:23:52 1:04:00 0:57:22 0:06:38 smithi master centos 7.6 rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/scrub.yaml} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-snaps.sh) on smithi151 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1e3ae0ea9ca5702249d4c6d65f8df811fdbd1b2d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-snaps.sh'

fail 4221383 2019-08-16 11:17:03 2019-08-16 11:19:59 2019-08-16 11:47:58 0:27:59 0:15:07 0:12:52 smithi master rhel 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_7.yaml} tasks/progress.yaml} 2
Failure Reason:

Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress)