User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|
ivancich | 2021-07-01 13:56:35 | 2021-07-01 14:40:31 | 2021-07-02 10:54:51 | 20:14:20 | rgw | wip-cls-empty-listing | gibba | f1c0ee7 | 21 | 12 |
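The summary's Runtime column appears to be Updated minus Started; a quick stdlib check with the timestamps from the row above (the column semantics are an inference from the data, not documented here):

```python
from datetime import datetime

# Timestamps copied from the run-summary row above.
fmt = "%Y-%m-%d %H:%M:%S"
started = datetime.strptime("2021-07-01 14:40:31", fmt)
updated = datetime.strptime("2021-07-02 10:54:51", fmt)

# Matches the Runtime column: 20:14:20
print(updated - started)
```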
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6247174 | 2021-07-01 13:56:40 | 2021-07-01 14:28:09 | 2021-07-01 15:09:19 | 0:41:10 | 0:17:58 | 0:23:12 | gibba | master | centos | 8.3 | rgw/singleton/{all/radosgw-admin frontend/beast objectstore/bluestore-bitmap overrides rgw_pool_type/ec supported-random-distro$/{centos_8}} | 2 |
Failure Reason: timed out
dead | 6247175 | 2021-07-01 13:56:41 | 2021-07-01 14:40:31 | 2021-07-02 02:48:48 | 12:08:17 | | | gibba | master | rhel | 8.4 | rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/barbican 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability supported-random-distro$/{rhel_8}} | 1 |
Failure Reason: hit max job timeout
fail | 6247176 | 2021-07-01 13:56:42 | 2021-07-01 14:40:31 | 2021-07-01 14:54:22 | 0:13:51 | 0:03:13 | 0:10:38 | gibba | master | | | rgw/hadoop-s3a/{clusters/fixed-2 hadoop/default overrides s3a-hadoop} | 2 |
Failure Reason: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F18.04%2Fx86_64&sha1=f1c0ee705875d993bd0d16feb5355bcf77616c5f
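The shaman "failed to fetch package version" reason can be decoded by splitting its query string; this stdlib-only sketch (no assumptions about the shaman service itself, only the URL quoted verbatim above) shows the job was asking for an ubuntu/18.04 build of the run's revision, which evidently was not ready:

```python
from urllib.parse import urlsplit, parse_qs

# Failure-reason URL copied verbatim from the job above.
url = ("https://shaman.ceph.com/api/search/?status=ready&project=ceph"
       "&flavor=default&distros=ubuntu%2F18.04%2Fx86_64"
       "&sha1=f1c0ee705875d993bd0d16feb5355bcf77616c5f")
params = parse_qs(urlsplit(url).query)

print(params["distros"][0])   # the distro build being requested
print(params["sha1"][0][:7])  # matches the Revision column in the run summary
```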
fail | 6247177 | 2021-07-01 13:56:43 | 2021-07-01 14:41:12 | 2021-07-01 18:04:56 | 3:23:44 | 3:10:10 | 0:13:34 | gibba | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile tasks/rgw_multipart_upload ubuntu_latest} | 2 |
Failure Reason: Command failed (workunit test rgw/s3_multipart_upload.pl) on gibba025 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f1c0ee705875d993bd0d16feb5355bcf77616c5f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_multipart_upload.pl'
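The recurring "status 124" failures come from the `timeout 3h ...` wrapper visible in each command line: GNU coreutils `timeout` exits with 124 when the time limit expires, so these workunits were killed for running too long rather than failing on their own. A minimal reproduction with a 1-second limit:

```python
import subprocess

# GNU `timeout` kills the child after the limit and exits 124,
# the same code reported by the 3h workunit wrappers above.
rc = subprocess.run(["timeout", "1", "sleep", "5"]).returncode
print(rc)  # 124
```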
fail | 6247178 | 2021-07-01 13:56:44 | 2021-07-01 14:43:32 | 2021-07-01 20:18:52 | 5:35:20 | 5:29:38 | 0:05:42 | gibba | master | rhel | 8.4 | rgw/multisite/{clusters frontend/beast ignore-pg-availability omap_limits overrides realms/three-zone-plus-pubsub supported-random-distro$/{rhel_8} tasks/test_multi} | 2 |
Failure Reason: rgw multisite test failures
dead | 6247179 | 2021-07-01 13:56:45 | 2021-07-01 14:43:52 | 2021-07-02 03:02:48 | 12:18:56 | | | gibba | master | centos | 8.3 | rgw/notifications/{beast bluestore-bitmap fixed-2 overrides supported-all-distro$/{centos_8} tasks/{0-install test_amqp test_kafka test_others}} | 2 |
Failure Reason: hit max job timeout
dead | 6247180 | 2021-07-01 13:56:46 | 2021-07-01 14:54:25 | 2021-07-02 03:06:00 | 12:11:35 | | | gibba | master | centos | 8.3 | rgw/sts/{cluster ignore-pg-availability objectstore overrides pool-type rgw_frontend/beast supported-random-distro$/{centos_8} tasks/{0-install ststests webidentity}} | 2 |
Failure Reason: hit max job timeout
fail | 6247181 | 2021-07-01 13:56:47 | 2021-07-01 14:57:25 | 2021-07-02 00:01:19 | 9:03:54 | 8:54:28 | 0:09:26 | gibba | master | ubuntu | 20.04 | rgw/tempest/{clusters/fixed-1 frontend/beast overrides tasks/rgw_tempest ubuntu_latest} | 1 |
Failure Reason: Command failed on gibba005 with status 1: "cd /home/ubuntu/cephtest/tempest && source .tox/venv/bin/activate && tempest run --workspace-path /home/ubuntu/cephtest/tempest/workspace.yaml --workspace rgw --regex '^tempest.api.object_storage' --black-regex '.*test_account_quotas_negative.AccountQuotasNegativeTest.test_user_modify_quota|.*test_container_acl_negative.ObjectACLsNegativeTest.*|.*test_container_services_negative.ContainerNegativeTest.test_create_container_metadata_.*|.*test_container_staticweb.StaticWebTest.test_web_index|.*test_container_staticweb.StaticWebTest.test_web_listing_css|.*test_container_synchronization.*|.*test_object_services.PublicObjectTest.test_access_public_container_object_without_using_creds'"
fail | 6247182 | 2021-07-01 13:56:48 | 2021-07-01 14:57:26 | 2021-07-01 18:36:44 | 3:39:18 | 3:15:49 | 0:23:29 | gibba | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/bluestore-bitmap thrasher/default thrashosds-health ubuntu_latest workload/rgw_bucket_quota} | 2 |
Failure Reason: Command failed (workunit test rgw/s3_bucket_quota.pl) on gibba028 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f1c0ee705875d993bd0d16feb5355bcf77616c5f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_bucket_quota.pl'
fail | 6247183 | 2021-07-01 13:56:49 | 2021-07-01 15:09:27 | 2021-07-01 17:50:17 | 2:40:50 | 2:32:11 | 0:08:39 | gibba | master | centos | 8.3 | rgw/tools/{centos_latest cluster tasks} | 1 |
Failure Reason: Command failed (workunit test rgw/test_rgw_orphan_list.sh) on gibba016 with status 75: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f1c0ee705875d993bd0d16feb5355bcf77616c5f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh'
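Exit status 75 (job 6247183) is the value of `EX_TEMPFAIL` in BSD's sysexits.h, conventionally "temporary failure; retry later". Reading this particular workunit's exit code that way is an assumption, since the script defines its own exit codes, but the constant itself is easy to check from Python:

```python
import os

# os.EX_TEMPFAIL exposes the sysexits.h constant on Unix platforms.
print(os.EX_TEMPFAIL)  # 75
```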
fail | 6247184 | 2021-07-01 13:56:50 | 2021-07-01 15:09:28 | 2021-07-01 22:58:04 | 7:48:36 | 5:22:37 | 2:25:59 | gibba | master | ubuntu | 20.04 | rgw/website/{clusters/fixed-2 frontend/beast http overrides tasks/s3tests-website ubuntu_latest} | 2 |
Failure Reason: Command failed on gibba022 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'
fail | 6247185 | 2021-07-01 13:56:51 | 2021-07-01 17:24:43 | 2021-07-01 20:49:52 | 3:25:09 | 3:15:44 | 0:09:25 | gibba | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/filestore-xfs thrasher/default thrashosds-health ubuntu_latest workload/rgw_multipart_upload} | 2 |
Failure Reason: Command failed (workunit test rgw/s3_multipart_upload.pl) on gibba001 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f1c0ee705875d993bd0d16feb5355bcf77616c5f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_multipart_upload.pl'
dead | 6247186 | 2021-07-01 13:56:53 | 2021-07-01 17:24:43 | 2021-07-02 05:39:32 | 12:14:49 | | | gibba | master | ubuntu | 20.04 | rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/kmip 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: hit max job timeout
dead | 6247187 | 2021-07-01 13:56:54 | 2021-07-01 17:30:54 | 2021-07-02 05:40:50 | 12:09:56 | | | gibba | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/ec tasks/rgw_ragweed ubuntu_latest} | 2 |
Failure Reason: hit max job timeout
fail | 6247188 | 2021-07-01 13:56:55 | 2021-07-01 17:31:45 | 2021-07-01 17:44:30 | 0:12:45 | | | gibba | master | ubuntu | 20.04 | rgw/singleton/{all/rgw_cephadm frontend/beast objectstore/filestore-xfs overrides rgw_pool_type/replicated supported-random-distro$/{ubuntu_latest}} | 6 |
Failure Reason: too many values to unpack (expected 1)
dead | 6247189 | 2021-07-01 13:56:56 | 2021-07-01 17:37:25 | 2021-07-02 05:53:44 | 12:16:19 | | | gibba | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/bluestore-bitmap thrasher/default thrashosds-health ubuntu_latest workload/rgw_s3tests} | 2 |
Failure Reason: hit max job timeout
dead | 6247190 | 2021-07-01 13:56:57 | 2021-07-01 17:44:37 | 2021-07-02 05:53:16 | 12:08:39 | | | gibba | master | centos | 8.3 | rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/testing 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability supported-random-distro$/{centos_8}} | 1 |
Failure Reason: hit max job timeout
dead | 6247191 | 2021-07-01 13:56:58 | 2021-07-01 17:44:37 | 2021-07-02 05:53:36 | 12:08:59 | | | gibba | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated tasks/rgw_s3tests ubuntu_latest} | 2 |
Failure Reason: hit max job timeout
fail | 6247192 | 2021-07-01 13:56:59 | 2021-07-01 17:44:37 | 2021-07-01 21:13:36 | 3:28:59 | 3:12:23 | 0:16:36 | gibba | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/filestore-xfs thrasher/default thrashosds-health ubuntu_latest workload/rgw_user_quota} | 2 |
Failure Reason: Command failed (workunit test rgw/s3_user_quota.pl) on gibba016 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f1c0ee705875d993bd0d16feb5355bcf77616c5f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_user_quota.pl'
fail | 6247193 | 2021-07-01 13:57:00 | 2021-07-01 17:50:28 | 2021-07-01 18:43:26 | 0:52:58 | | | gibba | master | rhel | 8.4 | rgw/singleton/{all/rgw_cephadm frontend/beast objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile supported-random-distro$/{rhel_8}} | 6 |
Failure Reason: too many values to unpack (expected 1)
dead | 6247194 | 2021-07-01 13:57:01 | 2021-07-01 18:36:46 | 2021-07-02 06:52:41 | 12:15:55 | | | gibba | master | ubuntu | 20.04 | rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/vault_kv 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: hit max job timeout
fail | 6247195 | 2021-07-01 13:57:02 | 2021-07-01 18:43:37 | 2021-07-01 20:31:54 | 1:48:17 | 0:03:11 | 1:45:06 | gibba | master | | | rgw/hadoop-s3a/{clusters/fixed-2 hadoop/v32 overrides s3a-hadoop} | 2 |
Failure Reason: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F18.04%2Fx86_64&sha1=f1c0ee705875d993bd0d16feb5355bcf77616c5f
fail | 6247196 | 2021-07-01 13:57:03 | 2021-07-01 20:18:57 | 2021-07-01 23:49:08 | 3:30:11 | 3:07:36 | 0:22:35 | gibba | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/ec-profile tasks/rgw_user_quota ubuntu_latest} | 2 |
Failure Reason: Command failed (workunit test rgw/s3_user_quota.pl) on gibba017 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f1c0ee705875d993bd0d16feb5355bcf77616c5f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_user_quota.pl'
fail | 6247197 | 2021-07-01 13:57:04 | 2021-07-01 20:31:59 | 2021-07-01 23:18:23 | 2:46:24 | 2:26:13 | 0:20:11 | gibba | master | ubuntu | 20.04 | rgw/multisite/{clusters frontend/beast ignore-pg-availability omap_limits overrides realms/two-zonegroup supported-random-distro$/{ubuntu_latest} tasks/test_multi} | 2 |
Failure Reason: rgw multisite test failures
fail | 6247198 | 2021-07-01 13:57:05 | 2021-07-01 20:41:20 | 2021-07-02 00:15:11 | 3:33:51 | 3:15:18 | 0:18:33 | gibba | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/filestore-xfs thrasher/default thrashosds-health ubuntu_latest workload/rgw_bucket_quota} | 2 |
Failure Reason: Command failed (workunit test rgw/s3_bucket_quota.pl) on gibba001 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f1c0ee705875d993bd0d16feb5355bcf77616c5f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_bucket_quota.pl'
fail | 6247199 | 2021-07-01 13:57:06 | 2021-07-01 20:50:01 | 2021-07-02 00:38:54 | 3:48:53 | 3:14:54 | 0:33:59 | gibba | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/bluestore-bitmap thrasher/default thrashosds-health ubuntu_latest workload/rgw_multipart_upload} | 2 |
Failure Reason: Command failed (workunit test rgw/s3_multipart_upload.pl) on gibba016 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f1c0ee705875d993bd0d16feb5355bcf77616c5f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_multipart_upload.pl'
dead | 6247200 | 2021-07-01 13:57:07 | 2021-07-01 21:13:45 | 2021-07-02 09:22:32 | 12:08:47 | | | gibba | master | rhel | 8.4 | rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/vault_old 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability supported-random-distro$/{rhel_8}} | 1 |
Failure Reason: hit max job timeout
fail | 6247201 | 2021-07-01 13:57:08 | 2021-07-01 21:13:45 | 2021-07-02 02:06:52 | 4:53:07 | 3:10:22 | 1:42:45 | gibba | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec tasks/rgw_bucket_quota ubuntu_latest} | 2 |
Failure Reason: Command failed (workunit test rgw/s3_bucket_quota.pl) on gibba025 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f1c0ee705875d993bd0d16feb5355bcf77616c5f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_bucket_quota.pl'
fail | 6247202 | 2021-07-01 13:57:09 | 2021-07-01 22:45:46 | 2021-07-01 23:14:47 | 0:29:01 | 0:18:05 | 0:10:56 | gibba | master | ubuntu | 20.04 | rgw/singleton/{all/radosgw-admin frontend/beast objectstore/filestore-xfs overrides rgw_pool_type/ec supported-random-distro$/{ubuntu_latest}} | 2 |
Failure Reason: timed out
dead | 6247203 | 2021-07-01 13:57:10 | 2021-07-01 22:45:46 | 2021-07-02 10:54:44 | 12:08:58 | | | gibba | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/filestore-xfs thrasher/default thrashosds-health ubuntu_latest workload/rgw_s3tests} | 2 |
Failure Reason: hit max job timeout
dead | 6247204 | 2021-07-01 13:57:11 | 2021-07-01 22:45:57 | 2021-07-02 10:54:51 | 12:08:54 | | | gibba | master | ubuntu | 20.04 | rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/vault_transit 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: hit max job timeout
fail | 6247205 | 2021-07-01 13:57:12 | 2021-07-01 22:45:57 | 2021-07-02 02:17:17 | 3:31:20 | 3:09:39 | 0:21:41 | gibba | master | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/replicated tasks/rgw_multipart_upload ubuntu_latest} | 2 |
Failure Reason: Command failed (workunit test rgw/s3_multipart_upload.pl) on gibba022 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f1c0ee705875d993bd0d16feb5355bcf77616c5f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_multipart_upload.pl'
fail | 6247206 | 2021-07-01 13:57:13 | 2021-07-01 22:58:09 | 2021-07-02 02:45:45 | 3:47:36 | 3:17:09 | 0:30:27 | gibba | master | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/bluestore-bitmap thrasher/default thrashosds-health ubuntu_latest workload/rgw_user_quota} | 2 |
Failure Reason: Command failed (workunit test rgw/s3_user_quota.pl) on gibba013 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f1c0ee705875d993bd0d16feb5355bcf77616c5f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_user_quota.pl'