User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-10-12 14:52:00 | 2023-10-12 14:54:20 | 2023-10-12 18:42:47 | 3:48:27 | rbd | wip-yuri5-testing-2023-10-11-1125-quincy | smithi | 411b8b7 | 4 | 4 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7423414 | 2023-10-12 14:52:51 | 2023-10-12 14:54:20 | 2023-10-12 18:41:17 | 3:46:57 | 3:39:06 | 0:07:51 | smithi | main | rhel | 8.4 | rbd/cli_v1/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/format-1 msgr-failures/few objectstore/bluestore-stupid pool/small-cache-pool supported-random-distro$/{rhel_8} workloads/rbd_cli_generic} | 1 | |
Failure Reason:
Command failed (workunit test rbd/cli_generic.sh) on smithi012 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=411b8b72d6945cb015b753832c1621d9b9d349a6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'
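Both timed-out jobs above report exit status 124. That is the status GNU coreutils `timeout` (which wraps the workunit with `timeout 3h` in the command line above) returns when it kills a command for exceeding its limit, so these failures indicate the test ran out of time rather than asserting. A minimal illustration:

```shell
# GNU `timeout` kills the wrapped command once the limit elapses and
# exits with status 124 -- the same status the failed jobs report.
timeout 1 sleep 5
echo "exit status: $?"   # prints "exit status: 124"
```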
pass | 7423415 | 2023-10-12 14:52:52 | 2023-10-12 14:54:20 | 2023-10-12 15:17:31 | 0:23:11 | 0:15:26 | 0:07:45 | smithi | main | rhel | 8.4 | rbd/singleton/{all/read-flags-writethrough conf/{disable-pool-app} objectstore/bluestore-hybrid openstack supported-random-distro$/{rhel_8}} | 1 | |
fail | 7423416 | 2023-10-12 14:52:53 | 2023-10-12 14:54:20 | 2023-10-12 16:00:54 | 1:06:34 | 0:54:00 | 0:12:34 | smithi | main | ubuntu | 20.04 | rbd/nbd/{base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} thrashers/cache thrashosds-health workloads/rbd_nbd_diff_continuous} | 3 | |
Failure Reason:
Command failed on smithi003 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
pass | 7423417 | 2023-10-12 14:52:53 | 2023-10-12 14:56:01 | 2023-10-12 16:26:21 | 1:30:20 | 1:19:50 | 0:10:30 | smithi | main | ubuntu | 20.04 | rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{ubuntu_latest} 4-cache-path 5-cache-mode/rwl 6-cache-size/1G 7-workloads/qemu_xfstests conf/{disable-pool-app}} | 2 | |
pass | 7423418 | 2023-10-12 14:52:54 | 2023-10-12 14:56:01 | 2023-10-12 16:10:45 | 1:14:44 | 1:05:09 | 0:09:35 | smithi | main | rhel | 8.4 | rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{rhel_8} 4-cache-path 5-cache-mode/ssd 6-cache-size/5G 7-workloads/qemu_xfstests conf/{disable-pool-app}} | 2 | |
pass | 7423419 | 2023-10-12 14:52:55 | 2023-10-12 14:57:12 | 2023-10-12 16:00:40 | 1:03:28 | 0:54:46 | 0:08:42 | smithi | main | rhel | 8.4 | rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-low-osd-mem-target policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{rhel_8} workloads/rbd-mirror-snapshot-stress-workunit-exclusive-lock} | 2 | |
fail | 7423420 | 2023-10-12 14:52:56 | 2023-10-12 14:57:32 | 2023-10-12 18:42:47 | 3:45:15 | 3:33:13 | 0:12:02 | smithi | main | ubuntu | 20.04 | rbd/cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/layering msgr-failures/few objectstore/filestore-xfs pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_generic} | 1 | |
Failure Reason:
Command failed (workunit test rbd/cli_generic.sh) on smithi183 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=411b8b72d6945cb015b753832c1621d9b9d349a6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'
fail | 7423421 | 2023-10-12 14:52:57 | 2023-10-12 14:58:53 | 2023-10-12 16:04:41 | 1:05:48 | 0:51:43 | 0:14:05 | smithi | main | ubuntu | 20.04 | rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-stupid policy/none rbd-mirror/four-per-cluster supported-random-distro$/{ubuntu_latest} workloads/rbd-mirror-snapshot-stress-workunit-fast-diff} | 2 | |
Failure Reason:
"1697126093.3491971 mon.a (mon.0) 1546 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
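Unlike the timeouts above, this job's workload completed; the run is failed because a health-warning line appeared in the archived cluster log. A rough sketch of that style of check follows (the log path, sample line, and pattern are illustrative assumptions, not teuthology's actual implementation, which also applies an ignorelist of expected warnings):

```shell
# Illustrative only: fail a run if the cluster log contains a
# health warning such as the OSD_DOWN line quoted above.
log=$(mktemp)   # stand-in for the archived cluster log
printf 'mon.a (mon.0) 1546 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)\n' > "$log"
if grep -Eq 'cluster \[(WRN|ERR)\]' "$log"; then
    echo "run failed: health warning found in cluster log"
fi
rm -f "$log"
```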