Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
dead 7103323 2022-12-05 09:00:03 2022-12-05 09:41:55 2022-12-05 10:01:11 0:19:16 smithi main rhel 8.6 rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-stupid policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{rhel_8} workloads/rbd-mirror-fsx-workunit} 2
Failure Reason:

Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds

pass 7103324 2022-12-05 09:00:05 2022-12-05 09:42:06 2022-12-05 10:33:18 0:51:12 0:36:14 0:14:58 smithi main ubuntu 20.04 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-comp-snappy pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests_with_journaling} 3
fail 7103325 2022-12-05 09:00:07 2022-12-05 09:47:37 2022-12-05 10:34:33 0:46:56 0:35:54 0:11:02 smithi main centos 8.stream rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-comp-lz4 pool/small-cache-pool supported-random-distro$/{centos_8} workloads/c_api_tests_with_journaling} 3
Failure Reason:

"1670235684.370817 mon.a (mon.0) 945 : cluster [WRN] Health check failed: Degraded data redundancy: 1/818 objects degraded (0.122%), 1 pg degraded (PG_DEGRADED)" in cluster log

pass 7103326 2022-12-05 09:00:09 2022-12-05 09:50:39 2022-12-05 10:46:10 0:55:31 0:49:38 0:05:53 smithi main rhel 8.6 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/filestore-xfs pool/replicated-data-pool supported-random-distro$/{rhel_8} workloads/c_api_tests_with_defaults} 3
pass 7103327 2022-12-05 09:00:11 2022-12-05 09:51:00 2022-12-05 10:37:39 0:46:39 0:38:38 0:08:01 smithi main centos 8.stream rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-bitmap pool/small-cache-pool supported-random-distro$/{centos_8} workloads/c_api_tests_with_journaling} 3
pass 7103328 2022-12-05 09:00:13 2022-12-05 09:51:50 2022-12-05 11:07:25 1:15:35 1:08:37 0:06:58 smithi main centos 8.stream rbd/encryption/{cache/writearound clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-comp-snappy pool/ec-cache-pool supported-random-distro$/{centos_8} workloads/qemu_xfstests_none_luks1} 3
pass 7103329 2022-12-05 09:00:15 2022-12-05 09:52:21 2022-12-05 10:49:06 0:56:45 0:45:06 0:11:39 smithi main ubuntu 20.04 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/none supported-random-distro$/{ubuntu_latest} workloads/c_api_tests} 3
fail 7103330 2022-12-05 09:00:17 2022-12-05 09:53:22 2022-12-05 10:27:06 0:33:44 0:23:33 0:10:11 smithi main ubuntu 20.04 rbd/qemu/{cache/writeback clusters/{fixed-3 openstack} features/readbalance msgr-failures/few objectstore/filestore-xfs pool/ec-cache-pool supported-random-distro$/{ubuntu_latest} workloads/qemu_bonnie} 3
Failure Reason:

"1670235617.3100727 osd.1 (osd.1) 3 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 7 'waiting for sub ops' : 3 ] most affected pool [ 'cache' : 10 ])" in cluster log

pass 7103331 2022-12-05 09:00:19 2022-12-05 09:53:42 2022-12-05 10:50:57 0:57:15 0:44:51 0:12:24 smithi main ubuntu 20.04 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-hybrid pool/none supported-random-distro$/{ubuntu_latest} workloads/c_api_tests} 3
pass 7103332 2022-12-05 09:00:20 2022-12-05 09:55:23 2022-12-05 10:54:44 0:59:21 0:53:10 0:06:11 smithi main centos 8.stream rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-zlib policy/none rbd-mirror/four-per-cluster supported-random-distro$/{centos_8} workloads/rbd-mirror-snapshot-stress-workunit-minimum} 2
fail 7103333 2022-12-05 09:00:22 2022-12-05 09:55:43 2022-12-05 10:47:43 0:52:00 0:44:34 0:07:26 smithi main rhel 8.6 rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-zstd policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{rhel_8} workloads/rbd-mirror-fsx-workunit} 2
Failure Reason:

"1670235467.0148106 mon.a (mon.0) 207 : cluster [WRN] Health check failed: Degraded data redundancy: 7/2568 objects degraded (0.273%), 1 pg degraded (PG_DEGRADED)" in cluster log

pass 7103334 2022-12-05 09:00:24 2022-12-05 09:56:34 2022-12-05 10:58:23 1:01:49 0:50:20 0:11:29 smithi main rhel 8.6 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-hybrid pool/replicated-data-pool supported-random-distro$/{rhel_8} workloads/c_api_tests_with_defaults} 3
pass 7103335 2022-12-05 09:00:25 2022-12-05 10:00:55 2022-12-05 11:08:39 1:07:44 1:00:40 0:07:04 smithi main centos 8.stream rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{centos_8} 4-cache-path 5-cache-mode/ssd 6-cache-size/1G 7-workloads/qemu_xfstests} 2
pass 7103336 2022-12-05 09:00:27 2022-12-05 10:01:26 2022-12-05 10:57:03 0:55:37 0:46:24 0:09:13 smithi main centos 8.stream rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-comp-zlib pool/none supported-random-distro$/{centos_8} workloads/c_api_tests} 3
pass 7103337 2022-12-05 09:00:29 2022-12-05 10:03:47 2022-12-05 11:50:49 1:47:02 1:37:41 0:09:21 smithi main centos 8.stream rbd/maintenance/{base/install clusters/{fixed-3 openstack} objectstore/bluestore-hybrid qemu/xfstests supported-random-distro$/{centos_8} workloads/dynamic_features_no_cache} 3
pass 7103338 2022-12-05 09:00:31 2022-12-05 10:04:59 2022-12-05 10:49:24 0:44:25 0:35:49 0:08:36 smithi main centos 8.stream rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-low-osd-mem-target policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{centos_8} workloads/rbd-mirror-journal-workunit} 2
dead 7103339 2022-12-05 09:00:32 2022-12-05 10:06:30 2022-12-05 22:19:36 12:13:06 smithi main centos 8.stream rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-stupid policy/none rbd-mirror/four-per-cluster supported-random-distro$/{centos_8} workloads/rbd-mirror-snapshot-stress-workunit-exclusive-lock} 2
Failure Reason:

hit max job timeout

fail 7103340 2022-12-05 09:00:34 2022-12-05 10:07:21 2022-12-05 11:37:36 1:30:15 1:20:55 0:09:20 smithi main ubuntu 20.04 rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{ubuntu_latest} 4-cache-path 5-cache-mode/rwl 6-cache-size/5G 7-workloads/qemu_xfstests} 2
Failure Reason:

Command failed on smithi040 with status 1: 'test -f /home/ubuntu/cephtest/archive/qemu/client.0/success'

pass 7103341 2022-12-05 09:00:36 2022-12-05 10:08:32 2022-12-05 11:09:26 1:00:54 0:52:10 0:08:44 smithi main centos 8.stream rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/filestore-xfs policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{centos_8} workloads/rbd-mirror-snapshot-stress-workunit-fast-diff} 2
fail 7103342 2022-12-05 09:00:38 2022-12-05 10:10:04 2022-12-05 10:43:52 0:33:48 0:26:10 0:07:38 smithi main centos 8.stream rbd/mirror/{base/install clients/{mirror-extra mirror} cluster/{2-node openstack} msgr-failures/few objectstore/filestore-xfs supported-random-distro$/{centos_8} workloads/rbd-mirror-workunit-policy-simple} 2
Failure Reason:

"1670236149.1475244 mon.a (mon.0) 157 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive, 2 pgs peering (PG_AVAILABILITY)" in cluster log