User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-07-27 16:29:37 | 2023-07-27 16:36:44 | 2023-07-28 04:53:57 | 12:17:13 | rbd | reef-release | smithi | c144450 | 6 | 3 | 2 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7354719 | | 2023-07-27 16:31:28 | 2023-07-27 16:32:16 | 2023-07-27 17:16:20 | 0:44:04 | 0:25:13 | 0:18:51 | smithi | main | ubuntu | 22.04 | rbd/mirror/{base/install clients/{mirror-extra mirror} cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} workloads/rbd-mirror-snapshot-workunit-minimum} | 2 |
Failure Reason: "2023-07-27T17:03:17.170814+0000 mon.a (mon.0) 319 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive, 2 pgs peering (PG_AVAILABILITY)" in cluster log
dead | 7354720 | | 2023-07-27 16:31:33 | 2023-07-27 16:36:06 | 2023-07-28 04:53:21 | 12:17:15 | | | smithi | main | ubuntu | 20.04 | rbd/cli/{base/install clusters/{fixed-1 openstack} features/layering msgr-failures/few objectstore/bluestore-comp-zstd pool/small-cache-pool supported-random-distro$/{ubuntu_20.04} workloads/rbd_cli_migration} | 1 |
Failure Reason: hit max job timeout
pass | 7354721 | | 2023-07-27 16:31:39 | 2023-07-27 16:36:44 | 2023-07-27 18:18:29 | 1:41:45 | 1:19:26 | 0:22:19 | smithi | main | centos | 8.stream | rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-bitmap supported-random-distro$/{centos_8} thrashers/cache thrashosds-health workloads/rbd_nbd_diff_continuous} | 3 |
pass | 7354722 | | 2023-07-27 16:31:55 | 2023-07-27 16:36:44 | 2023-07-27 17:21:21 | 0:44:37 | 0:32:12 | 0:12:25 | smithi | main | ubuntu | 20.04 | rbd/cli/{base/install clusters/{fixed-1 openstack} features/defaults msgr-failures/few objectstore/bluestore-hybrid pool/ec-data-pool supported-random-distro$/{ubuntu_20.04} workloads/rbd_cli_generic} | 1 |
pass | 7354723 | | 2023-07-27 16:32:06 | 2023-07-27 16:36:45 | 2023-07-27 17:50:53 | 1:14:08 | 0:58:03 | 0:16:05 | smithi | main | centos | 9.stream | rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zstd pool/replicated-data-pool supported-random-distro$/{centos_latest} workloads/c_api_tests_with_defaults} | 3 |
pass | 7354724 | | 2023-07-27 16:32:09 | 2023-07-27 16:37:39 | 2023-07-27 19:18:04 | 2:40:25 | 2:29:31 | 0:10:54 | smithi | main | rhel | 8.6 | rbd/encryption/{cache/none clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/replicated-data-pool supported-random-distro$/{rhel_8} workloads/qemu_xfstests_luks2} | 3 |
pass | 7354725 | | 2023-07-27 16:32:15 | 2023-07-27 16:38:19 | 2023-07-27 17:11:20 | 0:33:01 | 0:18:00 | 0:15:01 | smithi | main | ubuntu | 20.04 | rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-comp-zstd supported-random-distro$/{ubuntu_20.04} thrashers/cache thrashosds-health workloads/rbd_fsx_nbd} | 3 |
fail | 7354726 | | 2023-07-27 16:32:21 | 2023-07-27 16:39:21 | 2023-07-27 17:01:44 | 0:22:23 | 0:11:46 | 0:10:37 | smithi | main | centos | 9.stream | rbd/singleton/{all/qemu-iotests-no-cache objectstore/bluestore-comp-snappy openstack supported-random-distro$/{centos_latest}} | 1 |
Failure Reason: Command failed (workunit test rbd/qemu-iotests.sh) on smithi008 with status 13: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c1444501ab7918ce42bdc26b9d860ad26e34dd69 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/qemu-iotests.sh'
pass | 7354727 | | 2023-07-27 16:32:27 | 2023-07-27 16:39:22 | 2023-07-27 18:43:46 | 2:04:24 | 1:51:45 | 0:12:39 | smithi | main | ubuntu | 20.04 | rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{ubuntu_20.04} 4-cache-path 5-cache-mode/ssd 6-cache-size/5G 7-workloads/qemu_xfstests} | 2 |
fail | 7354728 | | 2023-07-27 16:32:32 | 2023-07-27 16:39:22 | 2023-07-27 20:21:38 | 3:42:16 | 3:25:41 | 0:16:35 | smithi | main | centos | 9.stream | rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-low-osd-mem-target supported-random-distro$/{centos_latest} thrashers/cache thrashosds-health workloads/rbd_nbd_diff_continuous} | 3 |
Failure Reason: Command failed (workunit test rbd/diff_continuous.sh) on smithi178 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c1444501ab7918ce42bdc26b9d860ad26e34dd69 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_DEVICE_TYPE=nbd adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/diff_continuous.sh'
dead | 7354729 | | 2023-07-27 16:32:33 | 2023-07-27 16:39:42 | 2023-07-28 04:53:57 | 12:14:15 | | | smithi | main | centos | 8.stream | rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-stupid policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{centos_8} workloads/rbd-mirror-snapshot-stress-workunit-exclusive-lock} | 2 |
Failure Reason: hit max job timeout