Teuthology job results (Status, Job ID, Posted, Started, Updated, Runtime, Duration, In Waiting, Machine, Teuthology Branch, OS Type/Version, Description, Nodes):

fail  7361948  (3 nodes, smithi, teuthology branch: main, centos 8.stream)
  Posted:   2023-08-06 01:35:33
  Started:  2023-08-07 17:15:40
  Updated:  2023-08-07 18:27:30
  Runtime:  1:11:50  (duration 0:56:03, in waiting 0:15:47)
  Description:
    upgrade:quincy-p2p/quincy-p2p-parallel/{point-to-point-upgrade supported-all-distro/centos_8}
  Failure reason:
    Command failed (s3 tests against rgw) on smithi142 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto-client.0.cfg REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt /home/ubuntu/cephtest/s3-tests-client.0/virtualenv/bin/python -m nose -w /home/ubuntu/cephtest/s3-tests-client.0 -v -a '!fails_on_rgw,!lifecycle_expiration,!fails_strict_rfc2616,!test_of_sts,!webidentity_test,!fails_with_subdomain,!sse-s3'"

pass  7361949  (3 nodes, smithi, teuthology branch: main, ubuntu 20.04)
  Posted:   2023-08-06 01:35:34
  Started:  2023-08-07 17:20:34
  Updated:  2023-08-07 22:49:40
  Runtime:  5:29:06  (duration 5:12:00, in waiting 0:17:06)
  Description:
    upgrade:quincy-p2p/quincy-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/quincy 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-bitmap supported-all-distro/ubuntu_latest thrashosds-health}

fail  7361950  (3 nodes, smithi, teuthology branch: main, ubuntu 20.04)
  Posted:   2023-08-06 01:35:35
  Started:  2023-08-07 17:27:02
  Updated:  2023-08-07 19:48:11
  Runtime:  2:21:09  (duration 2:08:10, in waiting 0:12:59)
  Description:
    upgrade:quincy-p2p/quincy-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/quincy 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-comp supported-all-distro/ubuntu_latest thrashosds-health}
  Failure reason:
    Command failed (workunit test cls/test_cls_rbd.sh) on smithi157 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail  7361951  (3 nodes, smithi, teuthology branch: main, ubuntu 20.04)
  Posted:   2023-08-06 01:35:36
  Started:  2023-08-07 17:29:28
  Updated:  2023-08-07 18:33:55
  Runtime:  1:04:27  (duration 0:51:04, in waiting 0:13:23)
  Description:
    upgrade:quincy-p2p/quincy-p2p-parallel/{point-to-point-upgrade supported-all-distro/ubuntu_latest}
  Failure reason:
    Command failed (s3 tests against rgw) on smithi100 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto-client.0.cfg REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt /home/ubuntu/cephtest/s3-tests-client.0/virtualenv/bin/python -m nose -w /home/ubuntu/cephtest/s3-tests-client.0 -v -a '!fails_on_rgw,!lifecycle_expiration,!fails_strict_rfc2616,!test_of_sts,!webidentity_test,!fails_with_subdomain,!sse-s3'"

pass  7361952  (3 nodes, smithi, teuthology branch: main, ubuntu 20.04)
  Posted:   2023-08-06 01:35:36
  Started:  2023-08-07 17:31:33
  Updated:  2023-08-07 22:24:11
  Runtime:  4:52:38  (duration 4:40:37, in waiting 0:12:01)
  Description:
    upgrade:quincy-p2p/quincy-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/quincy 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-stupid supported-all-distro/ubuntu_latest thrashosds-health}

fail  7361953  (3 nodes, smithi, teuthology branch: main, ubuntu 20.04)
  Posted:   2023-08-06 01:35:37
  Started:  2023-08-07 17:32:34
  Updated:  2023-08-07 19:05:29
  Runtime:  1:32:55  (duration 1:17:33, in waiting 0:15:22)
  Description:
    upgrade:quincy-p2p/quincy-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/quincy 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/filestore-xfs supported-all-distro/ubuntu_latest thrashosds-health}
  Failure reason:
    Command failed on smithi119 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
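One reading aid for the last failure: status 124 is not a Ceph error code. It is the exit status GNU `timeout` returns when the command it supervises (here `ceph osd dump --format=json`, capped at 120 s by the harness) runs past its deadline, so the command most likely hung rather than failed outright. A minimal sketch of that behaviour, using short durations in place of the 120 s cap:

```shell
# GNU timeout exits with 124 when the supervised command exceeds the
# deadline; the inner command's own exit status is only propagated when
# it finishes in time.
timeout 1 sleep 2
echo "exit status: $?"   # prints "exit status: 124"
```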