User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2020-06-05 16:39:00 | 2020-06-05 16:42:07 | 2020-06-05 23:54:11 | 7:12:04 | rados | wip-yuri2-testing-2020-06-03-2341-MASTER | smithi | 64a1a64 | 6 | 6 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 5120429 | 2020-06-05 16:40:09 | 2020-06-05 16:40:28 | 2020-06-05 17:42:28 | 1:02:00 | 0:19:58 | 0:42:02 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3 |
fail | 5120430 | 2020-06-05 16:40:10 | 2020-06-05 16:40:28 | 2020-06-05 21:34:35 | 4:54:07 | 3:48:19 | 1:05:48 | smithi | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} | 4 |
Failure Reason: "2020-06-05T20:27:42.076689+0000 osd.11 (osd.11) 68 : cluster [ERR] 5.8 deep-scrub : stat mismatch, got 4/4 objects, 0/0 clones, 0/0 dirty, 0/0 omap, 0/0 pinned, 4/4 hit_set_archive, 0/0 whiteouts, 6718/0 bytes, 0/0 manifest objects, 6718/6718 hit_set_archive bytes." in cluster log
pass | 5120431 | 2020-06-05 16:40:11 | 2020-06-05 16:42:00 | 2020-06-05 17:10:00 | 0:28:00 | 0:14:19 | 0:13:41 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} | 2 |
fail | 5120432 | 2020-06-05 16:40:12 | 2020-06-05 16:42:00 | 2020-06-05 23:54:11 | 7:12:11 | 6:37:56 | 0:34:15 | smithi | master | centos | 8.1 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados tasks/rados_api_tests validater/valgrind} | 2 |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi134 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=64a1a6455982e19dbbfd3edd4f271c0432507975 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
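The status 124 here is worth noting: it is not a test assertion failure but the exit code GNU coreutils `timeout` returns when the command it wraps runs past its limit, so this valgrind job hit the `timeout 6h` cap on `test.sh` rather than crashing. A minimal sketch of that convention:

```shell
# `timeout` kills the wrapped command at the limit and exits with 124,
# the same status teuthology reports for the 6h-capped test.sh runs here.
timeout 1 sleep 5
echo "exit status: $?"   # prints: exit status: 124
```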
pass | 5120433 | 2020-06-05 16:40:13 | 2020-06-05 16:42:07 | 2020-06-05 17:20:07 | 0:38:00 | 0:27:01 | 0:10:59 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/few rados thrashers/none thrashosds-health workloads/cache-snaps} | 3 |
pass | 5120434 | 2020-06-05 16:40:13 | 2020-06-05 16:43:45 | 2020-06-05 17:21:45 | 0:38:00 | 0:20:38 | 0:17:22 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/test_rbd_api} | 3 |
pass | 5120435 | 2020-06-05 16:40:14 | 2020-06-05 16:44:02 | 2020-06-05 17:08:02 | 0:24:00 | 0:14:28 | 0:09:32 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/rebuild-mondb msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 |
pass | 5120436 | 2020-06-05 16:40:15 | 2020-06-05 16:44:07 | 2020-06-05 17:44:07 | 1:00:00 | 0:33:22 | 0:26:38 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/cache-snaps} | 3 |
fail | 5120437 | 2020-06-05 16:40:16 | 2020-06-05 16:44:26 | 2020-06-05 23:48:37 | 7:04:11 | 6:41:32 | 0:22:39 | smithi | master | centos | 8.1 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none msgr-failures/few msgr/async objectstore/filestore-xfs rados tasks/rados_api_tests validater/valgrind} | 2 |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi172 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=64a1a6455982e19dbbfd3edd4f271c0432507975 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 5120438 | 2020-06-05 16:40:17 | 2020-06-05 16:44:39 | 2020-06-05 18:32:41 | 1:48:02 | 1:38:58 | 0:09:04 | smithi | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} | 1 |
Failure Reason: Command failed (workunit test osd/osd-markdown.sh) on smithi078 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=64a1a6455982e19dbbfd3edd4f271c0432507975 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-markdown.sh'
fail | 5120439 | 2020-06-05 16:40:18 | 2020-06-05 16:47:19 | 2020-06-05 17:19:19 | 0:32:00 | 0:16:48 | 0:15:12 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/thrash_cache_writeback_proxy_none msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} | 2 |
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op copy_from 50 --op write_excl 50 --pool base'
fail | 5120440 | 2020-06-05 16:40:19 | 2020-06-05 16:47:41 | 2020-06-05 21:33:48 | 4:46:07 | 4:28:04 | 0:18:03 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/objectstore supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\''