Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7392620 2023-09-10 01:35:58 2023-09-10 01:38:20 2023-09-10 02:51:51 1:13:31 0:54:16 0:19:15 smithi main centos 8.stream upgrade:quincy-p2p/quincy-p2p-parallel/{point-to-point-upgrade supported-all-distro/centos_8} 3
Failure Reason:

Command failed (s3 tests against rgw) on smithi137 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto-client.0.cfg REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt /home/ubuntu/cephtest/s3-tests-client.0/virtualenv/bin/python -m nose -w /home/ubuntu/cephtest/s3-tests-client.0 -v -a '!fails_on_rgw,!lifecycle_expiration,!fails_strict_rfc2616,!test_of_sts,!webidentity_test,!fails_with_subdomain,!sse-s3'"
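
This is the stock s3-tests run that teuthology drives against RGW. A minimal sketch of re-running the same suite by hand against a test RGW endpoint, assuming a local clone of ceph/s3-tests that has been bootstrapped and an s3tests.conf of your own (paths below are illustrative, not the teuthology ones):

    # Hedged sketch: same nose attribute filters as the failed job.
    # S3TEST_CONF must describe the RGW endpoint and test users; the
    # virtualenv is whatever ./bootstrap created in the checkout.
    cd ~/s3-tests                          # assumed local clone of ceph/s3-tests
    S3TEST_CONF=~/s3tests.conf \
      ./virtualenv/bin/python -m nose -w . -v \
      -a '!fails_on_rgw,!lifecycle_expiration,!fails_strict_rfc2616,!test_of_sts,!webidentity_test,!fails_with_subdomain,!sse-s3'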

fail 7392621 2023-09-10 01:35:59 2023-09-10 01:44:51 2023-09-10 05:28:45 3:43:54 3:29:41 0:14:13 smithi main ubuntu 20.04 upgrade:quincy-p2p/quincy-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/quincy 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-bitmap supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi162 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'
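
The workunit wrapper above runs cls/test_cls_rbd.sh from a clone of the quincy qa tree in a scratch directory under the client mount, with a 3h timeout. A rough sketch of invoking the same test directly from a ceph checkout against an already-running cluster (checkout path and scratch directory are assumptions; the teuthology coverage/ulimit wrappers are dropped):

    # Hedged sketch mirroring the failed workunit invocation.
    mkdir -p /tmp/cephtest && cd /tmp/cephtest            # assumed scratch dir
    CEPH_ARGS="--cluster ceph" CEPH_ID=0 \
      timeout 3h ~/ceph/qa/workunits/cls/test_cls_rbd.sh  # assumed quincy checkout at ~/ceph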

pass 7392622 2023-09-10 01:36:00 2023-09-10 01:48:12 2023-09-10 05:26:48 3:38:36 3:23:14 0:15:22 smithi main ubuntu 20.04 upgrade:quincy-p2p/quincy-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/quincy 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-comp supported-all-distro/ubuntu_latest thrashosds-health} 3
fail 7392623 2023-09-10 01:36:00 2023-09-10 01:50:53 2023-09-10 02:53:45 1:02:52 0:51:44 0:11:08 smithi main ubuntu 20.04 upgrade:quincy-p2p/quincy-p2p-parallel/{point-to-point-upgrade supported-all-distro/ubuntu_latest} 3
Failure Reason:

Command failed (s3 tests against rgw) on smithi017 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto-client.0.cfg REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt /home/ubuntu/cephtest/s3-tests-client.0/virtualenv/bin/python -m nose -w /home/ubuntu/cephtest/s3-tests-client.0 -v -a '!fails_on_rgw,!lifecycle_expiration,!fails_strict_rfc2616,!test_of_sts,!webidentity_test,!fails_with_subdomain,!sse-s3'"
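
Same s3-tests failure as job 7392620; the only difference in the invocation is the distro-specific CA bundle path. If reproducing by hand, point REQUESTS_CA_BUNDLE at the right file for the distro (both paths below appear in the two failed jobs):

    # CentOS 8.stream (job 7392620): /etc/pki/tls/certs/ca-bundle.crt
    # Ubuntu 20.04    (this job):    /etc/ssl/certs/ca-certificates.crt
    export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt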

pass 7392624 2023-09-10 01:36:01 2023-09-10 01:51:33 2023-09-10 06:37:50 4:46:17 4:34:13 0:12:04 smithi main ubuntu 20.04 upgrade:quincy-p2p/quincy-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/quincy 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-stupid supported-all-distro/ubuntu_latest thrashosds-health} 3
fail 7392625 2023-09-10 01:36:02 2023-09-10 01:52:14 2023-09-10 03:34:07 1:41:53 1:27:49 0:14:04 smithi main ubuntu 20.04 upgrade:quincy-p2p/quincy-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/quincy 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/filestore-xfs supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed on smithi027 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
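
Exit status 124 is the code `timeout` returns when the wrapped command is still running at the deadline, so here `ceph osd dump` did not answer within the 120-second limit. A minimal manual check along the same lines (assumes admin access to the cluster from the node; the 120s deadline mirrors the failed command):

    # Hedged sketch: same command, same deadline, without the teuthology wrappers.
    if timeout 120 ceph --cluster ceph osd dump --format=json > /dev/null; then
        echo "osd dump answered within 120s"
    else
        echo "osd dump failed or timed out (exit $?)"
    fi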