User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
ozeneva | 2021-09-14 08:03:28 | 2021-09-14 08:05:08 | 2021-09-14 20:16:49 | 12:11:41 | rgw | wip-omri-tracer | smithi | e6b3c32 | 4 | 62 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6389585 | 2021-09-14 08:03:38 | 2021-09-14 08:05:08 | 2021-09-14 10:06:22 | 2:01:14 | 1:49:06 | 0:12:08 | smithi | master | ubuntu | 20.04 | rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/barbican 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command failed on smithi093 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'
fail | 6389586 | 2021-09-14 08:03:39 | 2021-09-14 08:05:08 | 2021-09-14 08:35:17 | 0:30:09 | 0:15:41 | 0:14:28 | smithi | master | | | rgw/hadoop-s3a/{clusters/fixed-2 hadoop/default overrides s3a-hadoop} | 2 | |
Failure Reason:
"2021-09-14T08:28:26.825352+0000 mon.a (mon.0) 230 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 6389587 | 2021-09-14 08:03:40 | 2021-09-14 08:05:08 | 2021-09-14 08:22:44 | 0:17:36 | 0:07:20 | 0:10:16 | smithi | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/ec-profile tasks/rgw_bucket_quota ubuntu_latest} | 2 | |
Failure Reason:
Command failed (workunit test rgw/s3_bucket_quota.pl) on smithi096 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_bucket_quota.pl'
dead | 6389588 | 2021-09-14 08:03:41 | 2021-09-14 08:05:20 | 2021-09-14 20:14:33 | 12:09:13 | | | smithi | master | centos | 8.3 | rgw/multisite/{clusters frontend/beast ignore-pg-availability omap_limits overrides realms/three-zone-plus-pubsub supported-random-distro$/{centos_8} tasks/test_multi} | 2 | |
Failure Reason:
hit max job timeout
pass | 6389589 | 2021-09-14 08:03:42 | 2021-09-14 08:05:30 | 2021-09-14 08:45:25 | 0:39:55 | 0:30:22 | 0:09:33 | smithi | master | centos | 8.3 | rgw/notifications/{beast bluestore-bitmap fixed-2 overrides supported-all-distro$/{centos_8} tasks/{0-install test_amqp test_kafka test_others}} | 2 | |
fail | 6389590 | 2021-09-14 08:03:43 | 2021-09-14 08:06:30 | 2021-09-14 08:24:07 | 0:17:37 | 0:06:40 | 0:10:57 | smithi | master | ubuntu | 20.04 | rgw/singleton/{all/radosgw-admin frontend/beast objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile supported-random-distro$/{ubuntu_latest}} | 2 | |
dead | 6389591 | 2021-09-14 08:03:44 | 2021-09-14 08:06:41 | 2021-09-14 20:16:49 | 12:10:08 | | | smithi | master | centos | 8.3 | rgw/sts/{cluster ignore-pg-availability objectstore overrides pool-type rgw_frontend/beast supported-random-distro$/{centos_8} tasks/{0-install first ststests}} | 2 | |
Failure Reason:
hit max job timeout
fail | 6389592 | 2021-09-14 08:03:45 | 2021-09-14 08:08:01 | 2021-09-14 08:41:41 | 0:33:40 | 0:22:39 | 0:11:01 | smithi | master | ubuntu | 20.04 | rgw/tempest/{clusters/fixed-1 frontend/beast overrides tasks/rgw_tempest ubuntu_latest} | 1 | |
Failure Reason:
Command failed on smithi193 with status 1: "cd /home/ubuntu/cephtest/tempest && source .tox/venv/bin/activate && tempest run --workspace-path /home/ubuntu/cephtest/tempest/workspace.yaml --workspace rgw --regex '^tempest.api.object_storage' --black-regex '.*test_account_quotas_negative.AccountQuotasNegativeTest.test_user_modify_quota|.*test_container_acl_negative.ObjectACLsNegativeTest.*|.*test_container_services_negative.ContainerNegativeTest.test_create_container_metadata_.*|.*test_container_staticweb.StaticWebTest.test_web_index|.*test_container_staticweb.StaticWebTest.test_web_listing_css|.*test_container_synchronization.*|.*test_object_services.PublicObjectTest.test_access_public_container_object_without_using_creds|.*test_object_services.ObjectTest.test_create_object_with_transfer_encoding'" |
fail | 6389593 | 2021-09-14 08:03:46 | 2021-09-14 08:08:21 | 2021-09-14 08:33:59 | 0:25:38 | 0:11:46 | 0:13:52 | smithi | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/bluestore-bitmap thrasher/default thrashosds-health ubuntu_latest workload/rgw_bucket_quota} | 2 | |
Failure Reason:
Command failed (workunit test rgw/s3_bucket_quota.pl) on smithi166 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_bucket_quota.pl'
fail | 6389594 | 2021-09-14 08:03:47 | 2021-09-14 08:08:21 | 2021-09-14 11:07:07 | 2:58:46 | 2:42:45 | 0:16:01 | smithi | master | centos | 8.3 | rgw/tools/{centos_latest cluster tasks} | 1 | |
Failure Reason:
"2021-09-14T10:55:44.748549+0000 mon.a (mon.0) 466 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 6389595 | 2021-09-14 08:03:48 | 2021-09-14 08:08:22 | 2021-09-14 10:06:40 | 1:58:18 | 1:46:58 | 0:11:20 | smithi | master | centos | 8.3 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec-profile sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} | 2 | |
Failure Reason:
Command failed on smithi184 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests AWS4Test'
fail | 6389596 | 2021-09-14 08:03:49 | 2021-09-14 08:09:42 | 2021-09-14 10:09:40 | 1:59:58 | 1:43:59 | 0:15:59 | smithi | master | ubuntu | 20.04 | rgw/website/{clusters/fixed-2 frontend/beast http overrides tasks/s3tests-website ubuntu_latest} | 2 | |
Failure Reason:
Command failed on smithi033 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'
fail | 6389597 | 2021-09-14 08:03:50 | 2021-09-14 08:12:03 | 2021-09-14 09:36:19 | 1:24:16 | 1:11:39 | 0:12:37 | smithi | master | centos | 8.0 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/https rgw_pool_type/ec sharding$/{default} striping$/{stripe-equals-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} | 2 | |
Failure Reason:
Command failed on smithi099 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid ragweed.client.0 --purge-data'
fail | 6389598 | 2021-09-14 08:03:51 | 2021-09-14 08:13:13 | 2021-09-14 08:34:10 | 0:20:57 | 0:07:40 | 0:13:17 | smithi | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec tasks/rgw_multipart_upload ubuntu_latest} | 2 | |
Failure Reason:
Command failed (workunit test rgw/s3_multipart_upload.pl) on smithi071 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_multipart_upload.pl'
fail | 6389599 | 2021-09-14 08:03:52 | 2021-09-14 08:14:43 | 2021-09-14 10:13:52 | 1:59:09 | 1:47:31 | 0:11:38 | smithi | master | centos | 8.3 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/replicated sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} | 2 | |
Failure Reason:
Command failed on smithi013 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests AWS4Test'
fail | 6389600 | 2021-09-14 08:03:53 | 2021-09-14 08:16:44 | 2021-09-14 08:41:19 | 0:24:35 | 0:11:21 | 0:13:14 | smithi | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/filestore-xfs thrasher/default thrashosds-health ubuntu_latest workload/rgw_multipart_upload} | 2 | |
Failure Reason:
Command failed (workunit test rgw/s3_multipart_upload.pl) on smithi067 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_multipart_upload.pl'
fail | 6389601 | 2021-09-14 08:03:54 | 2021-09-14 08:17:54 | 2021-09-14 09:40:37 | 1:22:43 | 1:11:17 | 0:11:26 | smithi | master | centos | 8.0 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/https rgw_pool_type/ec-profile sharding$/{single} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} | 2 | |
Failure Reason:
Command failed on smithi027 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid ragweed.client.0 --purge-data'
pass | 6389602 | 2021-09-14 08:03:55 | 2021-09-14 08:17:54 | 2021-09-14 08:36:47 | 0:18:53 | 0:07:43 | 0:11:10 | smithi | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/replicated tasks/rgw_ragweed ubuntu_latest} | 2 | |
fail | 6389603 | 2021-09-14 08:03:56 | 2021-09-14 08:19:25 | 2021-09-14 10:19:09 | 1:59:44 | 1:48:26 | 0:11:18 | smithi | master | ubuntu | 20.04 | rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/kmip 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command failed on smithi115 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'
fail | 6389604 | 2021-09-14 08:03:57 | 2021-09-14 08:19:35 | 2021-09-14 08:37:05 | 0:17:30 | 0:06:49 | 0:10:41 | smithi | master | ubuntu | 20.04 | rgw/singleton/{all/radosgw-admin frontend/beast objectstore/filestore-xfs overrides rgw_pool_type/ec supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 6389605 | 2021-09-14 08:03:58 | 2021-09-14 08:19:35 | 2021-09-14 10:17:26 | 1:57:51 | 1:47:05 | 0:10:46 | smithi | master | centos | 8.3 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec sharding$/{default} striping$/{stripe-equals-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} | 2 | |
Failure Reason:
Command failed on smithi092 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests AWS4Test'
fail | 6389606 | 2021-09-14 08:03:59 | 2021-09-14 08:20:26 | 2021-09-14 10:16:05 | 1:55:39 | 1:41:23 | 0:14:16 | smithi | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile tasks/rgw_s3tests ubuntu_latest} | 2 | |
Failure Reason:
Command failed on smithi049 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'
fail | 6389607 | 2021-09-14 08:04:00 | 2021-09-14 08:20:26 | 2021-09-14 09:45:53 | 1:25:27 | 1:11:22 | 0:14:05 | smithi | master | centos | 8.0 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/https rgw_pool_type/replicated sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test rgw/run-reshard.sh) on smithi101 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/run-reshard.sh'
fail | 6389608 | 2021-09-14 08:04:01 | 2021-09-14 08:20:46 | 2021-09-14 10:28:33 | 2:07:47 | 1:49:34 | 0:18:13 | smithi | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/bluestore-bitmap thrasher/default thrashosds-health ubuntu_latest workload/rgw_s3tests} | 2 | |
Failure Reason:
Command failed on smithi091 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'
fail | 6389609 | 2021-09-14 08:04:02 | 2021-09-14 08:21:07 | 2021-09-14 09:42:46 | 1:21:39 | 1:09:52 | 0:11:47 | smithi | master | centos | 8.3 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec-profile sharding$/{default} striping$/{stripe-equals-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} | 2 | |
Failure Reason:
Command failed on smithi109 with status 1: "adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user create --uid tester.client.0 --display-name 'Ms. tester.client.0' --access-key KBWHZIKMTPEUGLHLJOXU --secret NttPvBxbLzQww3SrDIIy3gyz/aj8o+Z2A9kKg/XgQBe8vTice0Vo2g== --email tester.client.0_test@test.test --cluster ceph" |
fail | 6389610 | 2021-09-14 08:04:03 | 2021-09-14 08:21:47 | 2021-09-14 08:39:59 | 0:18:12 | 0:07:20 | 0:10:52 | smithi | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/ec tasks/rgw_user_quota ubuntu_latest} | 2 | |
Failure Reason:
Command failed (workunit test rgw/s3_user_quota.pl) on smithi119 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_user_quota.pl'
fail | 6389611 | 2021-09-14 08:04:04 | 2021-09-14 08:22:47 | 2021-09-14 10:33:41 | 2:10:54 | 2:00:06 | 0:10:48 | smithi | master | centos | 8.0 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/http rgw_pool_type/ec sharding$/{default} striping$/{stripe-equals-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test rgw/run-reshard.sh) on smithi096 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/run-reshard.sh'
fail | 6389612 | 2021-09-14 08:04:05 | 2021-09-14 08:22:48 | 2021-09-14 10:27:43 | 2:04:55 | 1:55:55 | 0:09:00 | smithi | master | rhel | 8.4 | rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/testing 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
Command failed on smithi086 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'
fail | 6389613 | 2021-09-14 08:04:06 | 2021-09-14 08:22:48 | 2021-09-14 08:41:24 | 0:18:36 | 0:07:42 | 0:10:54 | smithi | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated tasks/rgw_bucket_quota ubuntu_latest} | 2 | |
Failure Reason:
Command failed (workunit test rgw/s3_bucket_quota.pl) on smithi083 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_bucket_quota.pl'
fail | 6389614 | 2021-09-14 08:04:07 | 2021-09-14 08:24:08 | 2021-09-14 08:43:52 | 0:19:44 | 0:08:37 | 0:11:07 | smithi | master | centos | 8.3 | rgw/singleton/{all/radosgw-admin frontend/beast objectstore/bluestore-bitmap overrides rgw_pool_type/replicated supported-random-distro$/{centos_8}} | 2 | |
fail | 6389615 | 2021-09-14 08:04:08 | 2021-09-14 08:24:59 | 2021-09-14 10:37:14 | 2:12:15 | 1:59:46 | 0:12:29 | smithi | master | centos | 8.3 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/replicated sharding$/{single} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} | 2 | |
Failure Reason:
Command failed (workunit test rgw/run-reshard.sh) on smithi057 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/run-reshard.sh'
fail | 6389616 | 2021-09-14 08:04:09 | 2021-09-14 08:26:09 | 2021-09-14 08:51:32 | 0:25:23 | 0:11:30 | 0:13:53 | smithi | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/filestore-xfs thrasher/default thrashosds-health ubuntu_latest workload/rgw_user_quota} | 2 | |
Failure Reason:
Command failed (workunit test rgw/s3_user_quota.pl) on smithi016 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_user_quota.pl'
fail | 6389617 | 2021-09-14 08:04:10 | 2021-09-14 08:28:30 | 2021-09-14 09:02:47 | 0:34:17 | 0:20:12 | 0:14:05 | smithi | master | centos | 8.0 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/http rgw_pool_type/ec-profile sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test rgw/run-reshard.sh) on smithi080 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/run-reshard.sh'
fail | 6389618 | 2021-09-14 08:04:11 | 2021-09-14 08:29:40 | 2021-09-14 08:47:39 | 0:17:59 | 0:07:12 | 0:10:47 | smithi | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/ec-profile tasks/rgw_multipart_upload ubuntu_latest} | 2 | |
Failure Reason:
Command failed (workunit test rgw/s3_multipart_upload.pl) on smithi043 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_multipart_upload.pl'
fail | 6389619 | 2021-09-14 08:04:12 | 2021-09-14 08:30:10 | 2021-09-14 09:51:44 | 1:21:34 | 1:09:38 | 0:11:56 | smithi | master | centos | 8.3 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec sharding$/{default} striping$/{stripe-equals-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} | 2 | |
Failure Reason:
Command failed on smithi042 with status 1: "adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user create --uid tester.client.0 --display-name 'Ms. tester.client.0' --access-key WMPFVXLEBNXNKVCLEFUI --secret Ifuqv8gD6rHtiJ5zUMUUGQ+AC8rTMNiSaDgTGRepVcnyAwzRSifTNw== --email tester.client.0_test@test.test --cluster ceph" |
fail | 6389620 | 2021-09-14 08:04:13 | 2021-09-14 08:30:41 | 2021-09-14 10:44:22 | 2:13:41 | 2:00:07 | 0:13:34 | smithi | master | centos | 8.0 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/http rgw_pool_type/replicated sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test rgw/run-reshard.sh) on smithi149 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/run-reshard.sh'
pass | 6389621 | 2021-09-14 08:04:14 | 2021-09-14 08:31:21 | 2021-09-14 08:50:48 | 0:19:27 | 0:08:04 | 0:11:23 | smithi | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec tasks/rgw_ragweed ubuntu_latest} | 2 | |
fail | 6389622 | 2021-09-14 08:04:15 | 2021-09-14 08:31:31 | 2021-09-14 10:36:26 | 2:04:55 | 1:57:13 | 0:07:42 | smithi | master | rhel | 8.4 | rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/vault_kv 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
Command failed on smithi174 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'
fail | 6389623 | 2021-09-14 08:04:15 | 2021-09-14 08:31:32 | 2021-09-14 09:01:22 | 0:29:50 | 0:14:11 | 0:15:39 | smithi | master | | | rgw/hadoop-s3a/{clusters/fixed-2 hadoop/v32 overrides s3a-hadoop} | 2 | |
Failure Reason:
"2021-09-14T08:54:26.232160+0000 mon.a (mon.0) 215 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 6389624 | 2021-09-14 08:04:16 | 2021-09-14 08:33:42 | 2021-09-14 10:11:13 | 1:37:31 | 1:23:30 | 0:14:01 | smithi | master | centos | 8.3 | rgw/multisite/{clusters frontend/beast ignore-pg-availability omap_limits overrides realms/two-zonegroup supported-random-distro$/{centos_8} tasks/test_multi} | 2 | |
Failure Reason:
rgw multisite test failures
fail | 6389625 | 2021-09-14 08:04:18 | 2021-09-14 08:34:02 | 2021-09-14 08:51:23 | 0:17:21 | 0:06:33 | 0:10:48 | smithi | master | ubuntu | 20.04 | rgw/singleton/{all/radosgw-admin frontend/beast objectstore/filestore-xfs overrides rgw_pool_type/ec-profile supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 6389626 | 2021-09-14 08:04:18 | 2021-09-14 08:34:13 | 2021-09-14 08:58:29 | 0:24:16 | 0:11:25 | 0:12:51 | smithi | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/filestore-xfs thrasher/default thrashosds-health ubuntu_latest workload/rgw_bucket_quota} | 2 | |
Failure Reason:
Command failed (workunit test rgw/s3_bucket_quota.pl) on smithi018 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_bucket_quota.pl'
fail | 6389627 | 2021-09-14 08:04:19 | 2021-09-14 08:34:23 | 2021-09-14 10:09:11 | 1:34:48 | 1:22:22 | 0:12:26 | smithi | master | centos | 8.3 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/http rgw_pool_type/ec-profile sharding$/{single} striping$/{stripe-equals-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} | 2 | |
Failure Reason:
Command failed on smithi025 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests AWS4Test'
fail | 6389628 | 2021-09-14 08:04:20 | 2021-09-14 08:35:23 | 2021-09-14 10:36:19 | 2:00:56 | 1:45:35 | 0:15:21 | smithi | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/replicated tasks/rgw_s3tests ubuntu_latest} | 2 | |
Failure Reason:
Command failed on smithi175 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'
fail | 6389629 | 2021-09-14 08:04:21 | 2021-09-14 08:36:54 | 2021-09-14 09:10:48 | 0:33:54 | 0:20:33 | 0:13:21 | smithi | master | centos | 8.0 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec sharding$/{single} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi012 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'
fail | 6389630 | 2021-09-14 08:04:22 | 2021-09-14 08:37:04 | 2021-09-14 10:28:34 | 1:51:30 | 1:41:25 | 0:10:05 | smithi | master | centos | 8.3 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/http rgw_pool_type/replicated sharding$/{single} striping$/{stripe-equals-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} | 2 | |
Failure Reason:
Command failed on smithi060 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests AWS4Test' |
fail | 6389631 | 2021-09-14 08:04:23 | 2021-09-14 08:37:14 | 2021-09-14 08:56:09 | 0:18:55 | 0:07:24 | 0:11:31 | smithi | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile tasks/rgw_user_quota ubuntu_latest} | 2 | |
Failure Reason:
Command failed (workunit test rgw/s3_user_quota.pl) on smithi150 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_user_quota.pl' |
fail | 6389632 | 2021-09-14 08:04:24 | 2021-09-14 08:38:35 | 2021-09-14 09:05:08 | 0:26:33 | 0:11:37 | 0:14:56 | smithi | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/bluestore-bitmap thrasher/default thrashosds-health ubuntu_latest workload/rgw_multipart_upload} | 2 | |
Failure Reason:
Command failed (workunit test rgw/s3_multipart_upload.pl) on smithi119 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_multipart_upload.pl' |
fail | 6389633 | 2021-09-14 08:04:25 | 2021-09-14 08:40:05 | 2021-09-14 09:12:08 | 0:32:03 | 0:20:12 | 0:11:51 | smithi | master | centos | 8.0 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec-profile sharding$/{single} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi083 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh' |
fail | 6389634 | 2021-09-14 08:04:26 | 2021-09-14 08:41:26 | 2021-09-14 10:40:17 | 1:58:51 | 1:47:41 | 0:11:10 | smithi | master | centos | 8.3 | rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/vault_old 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
Command failed on smithi176 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph' |
fail | 6389635 | 2021-09-14 08:04:27 | 2021-09-14 08:41:26 | 2021-09-14 08:58:40 | 0:17:14 | 0:07:24 | 0:09:50 | smithi | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/ec tasks/rgw_bucket_quota ubuntu_latest} | 2 | |
Failure Reason:
Command failed (workunit test rgw/s3_bucket_quota.pl) on smithi036 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_bucket_quota.pl' |
fail | 6389636 | 2021-09-14 08:04:28 | 2021-09-14 08:41:26 | 2021-09-14 09:01:12 | 0:19:46 | 0:06:29 | 0:13:17 | smithi | master | ubuntu | 20.04 | rgw/singleton/{all/radosgw-admin frontend/beast objectstore/bluestore-bitmap overrides rgw_pool_type/ec supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 6389637 | 2021-09-14 08:04:29 | 2021-09-14 08:43:57 | 2021-09-14 09:14:06 | 0:30:09 | 0:18:23 | 0:11:46 | smithi | master | centos | 8.3 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/http rgw_pool_type/ec sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} | 2 | |
Failure Reason:
"2021-09-14T09:01:49.275984+0000 mon.a (mon.0) 118 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log |
fail | 6389638 | 2021-09-14 08:04:30 | 2021-09-14 08:45:27 | 2021-09-14 10:10:08 | 1:24:41 | 1:11:08 | 0:13:33 | smithi | master | centos | 8.0 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/replicated sharding$/{default} striping$/{stripe-equals-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} | 2 | |
Failure Reason:
Command failed on smithi085 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid ragweed.client.0 --purge-data' |
fail | 6389639 | 2021-09-14 08:04:31 | 2021-09-14 08:47:07 | 2021-09-14 09:05:02 | 0:17:55 | 0:07:27 | 0:10:28 | smithi | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated tasks/rgw_multipart_upload ubuntu_latest} | 2 | |
Failure Reason:
Command failed (workunit test rgw/s3_multipart_upload.pl) on smithi043 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_multipart_upload.pl' |
fail | 6389640 | 2021-09-14 08:04:32 | 2021-09-14 08:47:48 | 2021-09-14 10:56:23 | 2:08:35 | 1:49:49 | 0:18:46 | smithi | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/filestore-xfs thrasher/default thrashosds-health ubuntu_latest workload/rgw_s3tests} | 2 | |
Failure Reason:
Command failed on smithi124 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph' |
fail | 6389641 | 2021-09-14 08:04:33 | 2021-09-14 08:50:38 | 2021-09-14 10:45:34 | 1:54:56 | 1:44:46 | 0:10:10 | smithi | master | centos | 8.3 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/https rgw_pool_type/ec-profile sharding$/{default} striping$/{stripe-equals-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} | 2 | |
Failure Reason:
Command failed on smithi180 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests AWS4Test' |
fail | 6389642 | 2021-09-14 08:04:34 | 2021-09-14 08:50:49 | 2021-09-14 10:14:26 | 1:23:37 | 1:10:48 | 0:12:49 | smithi | master | centos | 8.0 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec sharding$/{single} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test rgw/run-reshard.sh) on smithi071 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/run-reshard.sh' |
pass | 6389643 | 2021-09-14 08:04:35 | 2021-09-14 08:51:29 | 2021-09-14 09:08:27 | 0:16:58 | 0:07:48 | 0:09:10 | smithi | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/ec-profile tasks/rgw_ragweed ubuntu_latest} | 2 | |
fail | 6389644 | 2021-09-14 08:04:37 | 2021-09-14 08:51:39 | 2021-09-14 10:52:31 | 2:00:52 | 1:49:34 | 0:11:18 | smithi | master | centos | 8.3 | rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/vault_transit 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
Command failed on smithi193 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph' |
fail | 6389645 | 2021-09-14 08:04:37 | 2021-09-14 08:51:39 | 2021-09-14 09:11:19 | 0:19:40 | 0:08:30 | 0:11:10 | smithi | master | centos | 8.3 | rgw/singleton/{all/radosgw-admin frontend/beast objectstore/filestore-xfs overrides rgw_pool_type/replicated supported-random-distro$/{centos_8}} | 2 | |
fail | 6389646 | 2021-09-14 08:04:38 | 2021-09-14 08:52:10 | 2021-09-14 09:26:30 | 0:34:20 | 0:19:22 | 0:14:58 | smithi | master | centos | 8.3 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/https rgw_pool_type/replicated sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi064 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh' |
fail | 6389647 | 2021-09-14 08:04:39 | 2021-09-14 08:55:30 | 2021-09-14 10:55:39 | 2:00:09 | 1:45:04 | 0:15:05 | smithi | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec tasks/rgw_s3tests ubuntu_latest} | 2 | |
Failure Reason:
Command failed on smithi150 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph' |
fail | 6389648 | 2021-09-14 08:04:40 | 2021-09-14 08:56:11 | 2021-09-14 09:22:26 | 0:26:15 | 0:11:19 | 0:14:56 | smithi | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/bluestore-bitmap thrasher/default thrashosds-health ubuntu_latest workload/rgw_user_quota} | 2 | |
Failure Reason:
Command failed (workunit test rgw/s3_user_quota.pl) on smithi018 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_user_quota.pl' |
fail | 6389649 | 2021-09-14 08:04:41 | 2021-09-14 08:58:31 | 2021-09-14 10:21:36 | 1:23:05 | 1:11:16 | 0:11:49 | smithi | master | centos | 8.0 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec-profile sharding$/{single} striping$/{stripe-equals-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test rgw/run-reshard.sh) on smithi036 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/run-reshard.sh' |
fail | 6389650 | 2021-09-14 08:04:42 | 2021-09-14 08:58:42 | 2021-09-14 10:55:59 | 1:57:17 | 1:46:22 | 0:10:55 | smithi | master | centos | 8.3 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/https rgw_pool_type/ec sharding$/{default} striping$/{stripe-equals-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} | 2 | |
Failure Reason:
Command failed on smithi121 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests AWS4Test' |
fail | 6389651 | 2021-09-14 08:04:44 | 2021-09-14 08:58:52 | 2021-09-14 09:18:34 | 0:19:42 | 0:07:12 | 0:12:30 | smithi | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/replicated tasks/rgw_user_quota ubuntu_latest} | 2 | |
Failure Reason:
Command failed (workunit test rgw/s3_user_quota.pl) on smithi134 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_user_quota.pl' |
fail | 6389652 | 2021-09-14 08:04:44 | 2021-09-14 09:01:22 | 2021-09-14 10:26:26 | 1:25:04 | 1:10:41 | 0:14:23 | smithi | master | centos | 8.0 | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/replicated sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test rgw/run-reshard.sh) on smithi122 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/run-reshard.sh' |