Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7527423 2024-01-22 22:03:47 2024-01-23 06:43:40 2024-01-23 07:36:59 0:53:19 0:43:55 0:09:24 smithi main centos 8.stream upgrade:pacific-p2p/p2q/upgrade-pacific-to-quincy-with-snap_schedule-tests 1
Failure Reason:

Command failed on smithi137 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd down 0'
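
Status 124 is the exit code GNU `timeout` returns when the wrapped command is killed for exceeding its deadline, so the `ceph osd down 0` call on smithi137 hung past the 120-second limit rather than erroring on its own. A minimal sketch of the same behavior, with the teuthology `adjust-ulimits`/`ceph-coverage` wrappers dropped for a manual retry (the reduced command is an assumption about what is worth re-running, not part of the log):

# `timeout` exits with 124 when the deadline expires; quick local demonstration:
timeout 2 sleep 5; echo "exit status: $?"    # prints: exit status: 124

# Core of the failing invocation, without the coverage/ulimit wrappers:
timeout 120 ceph --cluster ceph osd down 0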

fail 7527424 2024-01-22 22:03:48 2024-01-23 06:43:40 2024-01-23 07:29:48 0:46:08 0:34:41 0:11:27 smithi main ubuntu 20.04 upgrade:pacific-p2p/pacific-p2p-parallel/{point-to-point-upgrade supported-all-distro/ubuntu_latest} 3
Failure Reason:

Command failed (workunit test cls/test_cls_cmpomap.sh) on smithi184 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 CLS_RBD_GTEST_FILTER=\'*:-TestClsRbd.mirror_snapshot\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cls/test_cls_cmpomap.sh'

fail 7527425 2024-01-22 22:03:49 2024-01-23 06:43:41 2024-01-23 14:07:43 7:24:02 7:11:05 0:12:57 smithi main ubuntu 20.04 upgrade:pacific-p2p/pacific-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/pacific 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-bitmap supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi163 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v16.2.7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 7527426 2024-01-22 22:03:50 2024-01-23 06:44:31 2024-01-23 12:50:00 6:05:29 5:53:22 0:12:07 smithi main ubuntu 20.04 upgrade:pacific-p2p/pacific-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/pacific 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-comp supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi179 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v16.2.7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 7527427 2024-01-22 22:03:51 2024-01-23 06:45:12 2024-01-23 13:45:08 6:59:56 6:48:17 0:11:39 smithi main ubuntu 20.04 upgrade:pacific-p2p/pacific-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/pacific 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-stupid supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi146 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v16.2.7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 7527428 2024-01-22 22:03:52 2024-01-23 06:46:22 2024-01-23 11:16:10 4:29:48 4:16:52 0:12:56 smithi main ubuntu 20.04 upgrade:pacific-p2p/pacific-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/pacific 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/filestore-xfs supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi067 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v16.2.7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'
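
Jobs 7527425 through 7527428 all fail at the same point: the rbd/test_librbd_python.sh workunit, checked out at CEPH_REF=v16.2.7, exits with status 1 under every objectstore variant (bluestore-bitmap, bluestore-comp, bluestore-stupid, filestore-xfs), which points at the workunit itself rather than a single backend. A hedged sketch of a manual re-run outside teuthology follows; the tag, script path, and CEPH_* variables come from the logged command, while the clone URL, working directory, and omission of the coverage/ulimit wrappers are assumptions:

# Re-run the failing workunit by hand on a node with a reachable cluster and ceph.conf.
# Paths here are illustrative, not the teuthology archive layout.
git clone --depth 1 -b v16.2.7 https://github.com/ceph/ceph.git ceph-v16.2.7
cd ceph-v16.2.7
CEPH_ARGS="--cluster ceph" CEPH_ID="0" timeout 3h qa/workunits/rbd/test_librbd_python.sh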