Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7306870 2023-06-18 01:35:57 2023-06-18 01:36:56 2023-06-18 03:04:07 1:27:11 1:15:11 0:12:00 smithi main centos 8.stream upgrade:quincy-p2p/quincy-p2p-parallel/{point-to-point-upgrade supported-all-distro/centos_8} 3
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi177 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 CLS_RBD_GTEST_FILTER=\'*:-TestClsRbd.snapshots_namespaces\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cls/test_cls_rbd.sh'
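Status 1 here is the exit code of the workunit itself (a failing assertion inside cls/test_cls_rbd.sh), not of the teuthology wrapper: the command is a `&&` chain, so the shell reports the status of the first link that fails. A minimal sketch of that propagation, using a throwaway `/tmp/demo` directory in place of the real cephtest paths:

```shell
# A && chain exits with the status of the first failing command.
# `false` stands in for the failing test_cls_rbd.sh run; the earlier
# mkdir/cd links succeed, so their statuses are not what gets reported.
sh -c 'mkdir -p /tmp/demo && cd /tmp/demo && false'
echo $?   # prints 1
```

Note also that `CLS_RBD_GTEST_FILTER='*:-TestClsRbd.snapshots_namespaces'` is GoogleTest filter syntax: run all tests except `TestClsRbd.snapshots_namespaces`, so the failure is in some other cls_rbd test case.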

pass 7306871 2023-06-18 01:35:57 2023-06-18 01:36:56 2023-06-18 06:36:54 4:59:58 4:47:04 0:12:54 smithi main ubuntu 20.04 upgrade:quincy-p2p/quincy-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/quincy 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-bitmap supported-all-distro/ubuntu_latest thrashosds-health} 3
pass 7306872 2023-06-18 01:35:58 2023-06-18 01:36:57 2023-06-18 05:13:21 3:36:24 3:24:14 0:12:10 smithi main ubuntu 20.04 upgrade:quincy-p2p/quincy-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/quincy 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-comp supported-all-distro/ubuntu_latest thrashosds-health} 3
fail 7306873 2023-06-18 01:35:59 2023-06-18 01:36:57 2023-06-18 03:02:11 1:25:14 1:13:08 0:12:06 smithi main ubuntu 20.04 upgrade:quincy-p2p/quincy-p2p-parallel/{point-to-point-upgrade supported-all-distro/ubuntu_latest} 3
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi184 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 CLS_RBD_GTEST_FILTER=\'*:-TestClsRbd.snapshots_namespaces\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cls/test_cls_rbd.sh'

pass 7306874 2023-06-18 01:36:00 2023-06-18 01:36:57 2023-06-18 06:20:57 4:44:00 4:31:59 0:12:01 smithi main ubuntu 20.04 upgrade:quincy-p2p/quincy-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/quincy 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-stupid supported-all-distro/ubuntu_latest thrashosds-health} 3
fail 7306875 2023-06-18 01:36:01 2023-06-18 01:36:59 2023-06-18 03:21:07 1:44:08 1:29:43 0:14:25 smithi main ubuntu 20.04 upgrade:quincy-p2p/quincy-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/quincy 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/filestore-xfs supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed on smithi016 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
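Status 124 is the signature of GNU coreutils `timeout` killing a command that exceeded its limit, so this failure means the `ceph osd dump --format=json` call hung past its 120-second budget rather than erroring out on its own. A quick sketch of that exit-code convention:

```shell
# `timeout` exits with 124 when the wrapped command is killed for
# running past the limit; `sleep 5` here stands in for the hung
# `ceph osd dump` call.
timeout 1 sleep 5
echo $?   # prints 124
```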