User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
pdonnell | 2022-07-22 19:42:58 | 2022-07-22 19:43:26 | 2022-07-23 11:34:46 | 15:51:20 | fs | wip-pdonnell-testing-20220721.235756 | smithi | 89768db | 28 | 33 | 7 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6945803 | 2022-07-22 19:43:07 | 2022-07-22 19:43:15 | 2022-07-22 20:21:10 | 0:37:55 | 0:26:55 | 0:11:00 | smithi | main | centos | 8.stream | fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} | 3 | |
fail | 6945804 | 2022-07-22 19:43:09 | 2022-07-22 19:43:26 | 2022-07-22 20:18:45 | 0:35:19 | 0:28:44 | 0:06:35 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} scrub/yes standby-replay subvolume/{with-quota} tasks/{0-check-counter workunit/suites/pjd}} | 3 | |
Failure Reason: Command failed (workunit test suites/pjd.sh) on smithi006 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89768db311950607682ea2bb29f56edc324f86ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
fail | 6945805 | 2022-07-22 19:43:10 | 2022-07-22 19:43:47 | 2022-07-22 20:52:05 | 1:08:18 | 1:00:17 | 0:08:01 | smithi | main | ubuntu | 20.04 | fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/failover} | 2 | |
Failure Reason: "2022-07-22T20:28:39.658647+0000 mon.a (mon.0) 4850 : cluster [ERR] Health check failed: ??? (MDS_DUMMY)" in cluster log
fail | 6945806 | 2022-07-22 19:43:11 | 2022-07-22 19:44:37 | 2022-07-22 21:48:29 | 2:03:52 | 1:56:24 | 0:07:28 | smithi | main | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} | 2 | |
Failure Reason: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi109 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89768db311950607682ea2bb29f56edc324f86ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-multiple-capsnaps.sh'
pass | 6945807 | 2022-07-22 19:43:12 | 2022-07-22 19:45:08 | 2022-07-22 22:08:41 | 2:23:33 | 2:17:06 | 0:06:27 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} scrub/no standby-replay subvolume/{with-namespace-isolated-and-quota} tasks/{0-check-counter workunit/kernel_untar_build}} | 3 | |
dead | 6945808 | 2022-07-22 19:43:14 | 2022-07-22 19:45:28 | 2022-07-23 08:03:58 | 12:18:30 | | | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} scrub/yes standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/suites/blogbench}} | 3 |
Failure Reason: hit max job timeout
fail | 6945809 | 2022-07-22 19:43:15 | 2022-07-22 19:46:39 | 2022-07-22 22:07:23 | 2:20:44 | 2:08:26 | 0:12:18 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} scrub/no standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/suites/dbench}} | 3 | |
Failure Reason: Command failed on smithi099 with status 13: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs dump --format=json'
fail | 6945810 | 2022-07-22 19:43:17 | 2022-07-22 19:47:19 | 2022-07-23 01:54:34 | 6:07:15 | 5:59:20 | 0:07:55 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 | |
Failure Reason: "2022-07-23T01:04:50.883161+0000 osd.7 (osd.7) 177 : cluster [ERR] 5.9s0 deep-scrub : stat mismatch, got 0/1 objects, 0/0 clones, 0/1 dirty, 0/0 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, 0/2754 bytes, 0/0 manifest objects, 0/0 hit_set_archive bytes." in cluster log
pass | 6945811 | 2022-07-22 19:43:18 | 2022-07-22 19:48:40 | 2022-07-22 20:22:09 | 0:33:29 | 0:26:24 | 0:07:05 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} scrub/yes standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/suites/fsync-tester}} | 3 | |
fail | 6945812 | 2022-07-22 19:43:19 | 2022-07-22 19:48:51 | 2022-07-22 20:24:32 | 0:35:41 | 0:26:42 | 0:08:59 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} scrub/yes standby-replay subvolume/{with-quota} tasks/{0-check-counter workunit/suites/pjd}} | 3 | |
Failure Reason: Command failed (workunit test suites/pjd.sh) on smithi064 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89768db311950607682ea2bb29f56edc324f86ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
pass | 6945813 | 2022-07-22 19:43:20 | 2022-07-22 19:50:52 | 2022-07-22 20:13:00 | 0:22:08 | 0:13:56 | 0:08:12 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} | 2 | |
fail | 6945814 | 2022-07-22 19:43:22 | 2022-07-22 19:51:55 | 2022-07-22 21:22:59 | 1:31:04 | 1:21:59 | 0:09:05 | smithi | main | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} | 2 | |
Failure Reason: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89768db311950607682ea2bb29f56edc324f86ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-multiple-capsnaps.sh'
dead | 6945815 | 2022-07-22 19:43:23 | 2022-07-22 19:53:30 | 2022-07-23 08:11:52 | 12:18:22 | | | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} scrub/yes standby-replay subvolume/{with-quota} tasks/{0-check-counter workunit/suites/blogbench}} | 3 |
Failure Reason: hit max job timeout
fail | 6945816 | 2022-07-22 19:43:25 | 2022-07-22 19:54:22 | 2022-07-23 01:08:13 | 5:13:51 | 5:03:37 | 0:10:14 | smithi | main | centos | 8.stream | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 | |
Failure Reason: "2022-07-22T20:52:18.439680+0000 mds.d (mds.0) 1 : cluster [WRN] client.4767 isn't responding to mclientcaps(revoke), ino 0x10000003d65 pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 300.028290 seconds ago" in cluster log
fail | 6945817 | 2022-07-22 19:43:26 | 2022-07-22 19:54:44 | 2022-07-22 20:48:20 | 0:53:36 | 0:46:24 | 0:07:12 | smithi | main | ubuntu | 20.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snap-schedule} | 2 | |
Failure Reason: "2022-07-22T20:24:16.696798+0000 mon.a (mon.0) 931 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
dead | 6945818 | 2022-07-22 19:43:27 | 2022-07-22 19:54:45 | 2022-07-23 08:10:31 | 12:15:46 | | | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} scrub/yes standby-replay subvolume/{with-no-extra-options} tasks/{0-check-counter workunit/suites/blogbench}} | 3 |
Failure Reason: hit max job timeout
fail | 6945819 | 2022-07-22 19:43:29 | 2022-07-22 19:55:07 | 2022-07-22 20:44:05 | 0:48:58 | 0:38:08 | 0:10:50 | smithi | main | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} | 2 | |
Failure Reason: Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi066 with status 128: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89768db311950607682ea2bb29f56edc324f86ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'
fail | 6945820 | 2022-07-22 19:43:30 | 2022-07-22 19:56:49 | 2022-07-22 22:11:16 | 2:14:27 | 2:02:22 | 0:12:05 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} scrub/no standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/suites/dbench}} | 3 | |
Failure Reason: Command failed on smithi153 with status 125: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:89768db311950607682ea2bb29f56edc324f86ac shell --fsid d77283b6-09fb-11ed-842f-001a4aab830c -- ceph daemon mds.g perf dump'
fail | 6945821 | 2022-07-22 19:43:32 | 2022-07-22 19:57:41 | 2022-07-23 05:40:05 | 9:42:24 | 9:34:27 | 0:07:57 | smithi | main | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 | |
Failure Reason: "2022-07-22T20:34:59.799495+0000 mds.d (mds.1) 1 : cluster [WRN] client.4845 isn't responding to mclientcaps(revoke), ino 0x20000000585 pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 300.004876 seconds ago" in cluster log
fail | 6945822 | 2022-07-22 19:43:33 | 2022-07-22 19:58:12 | 2022-07-22 20:32:07 | 0:33:55 | 0:25:38 | 0:08:17 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} scrub/yes standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/suites/pjd}} | 3 | |
Failure Reason: Command failed (workunit test suites/pjd.sh) on smithi029 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89768db311950607682ea2bb29f56edc324f86ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
fail | 6945823 | 2022-07-22 19:43:35 | 2022-07-22 19:58:43 | 2022-07-22 21:22:54 | 1:24:11 | 1:13:30 | 0:10:41 | smithi | main | centos | 8.stream | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} | 2 | |
Failure Reason: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi077 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89768db311950607682ea2bb29f56edc324f86ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-multiple-capsnaps.sh'
fail | 6945824 | 2022-07-22 19:43:36 | 2022-07-22 19:59:54 | 2022-07-22 21:08:39 | 1:08:45 | 0:55:20 | 0:13:25 | smithi | main | rhel | 8.6 | fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/failover} | 2 | |
Failure Reason: "2022-07-22T20:57:59.894830+0000 mon.a (mon.0) 4862 : cluster [ERR] Health check failed: ??? (MDS_DUMMY)" in cluster log
pass | 6945825 | 2022-07-22 19:43:37 | 2022-07-22 20:03:45 | 2022-07-22 22:32:58 | 2:29:13 | 2:18:27 | 0:10:46 | smithi | main | rhel | 8.6 | fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} | 2 | |
pass | 6945826 | 2022-07-22 19:43:39 | 2022-07-22 20:03:55 | 2022-07-22 21:19:29 | 1:15:34 | 1:09:03 | 0:06:31 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} scrub/yes standby-replay subvolume/{with-namespace-isolated-and-quota} tasks/{0-check-counter workunit/suites/ffsb}} | 3 | |
pass | 6945827 | 2022-07-22 19:43:40 | 2022-07-22 20:04:46 | 2022-07-22 20:37:46 | 0:33:00 | 0:25:53 | 0:07:07 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} scrub/no standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/suites/iozone}} | 3 | |
fail | 6945828 | 2022-07-22 19:43:42 | 2022-07-22 20:04:47 | 2022-07-22 20:38:17 | 0:33:30 | 0:25:42 | 0:07:48 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 scrub/yes standby-replay subvolume/{with-quota} tasks/{0-check-counter workunit/suites/pjd}} | 3 | |
Failure Reason: Command failed (workunit test suites/pjd.sh) on smithi100 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89768db311950607682ea2bb29f56edc324f86ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
pass | 6945829 | 2022-07-22 19:43:43 | 2022-07-22 20:04:57 | 2022-07-22 20:28:42 | 0:23:45 | 0:16:56 | 0:06:49 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/pjd}} | 2 | |
fail | 6945830 | 2022-07-22 19:43:44 | 2022-07-22 20:05:08 | 2022-07-22 20:26:29 | 0:21:21 | 0:14:17 | 0:07:04 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} scrub/yes standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/fs/misc}} | 3 | |
Failure Reason: Command failed on smithi141 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:89768db311950607682ea2bb29f56edc324f86ac shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c7db3ef2-09fb-11ed-842f-001a4aab830c -- ceph orch daemon add osd smithi141:vg_nvme/lv_4'
pass | 6945831 | 2022-07-22 19:43:46 | 2022-07-22 20:05:28 | 2022-07-22 21:26:36 | 1:21:08 | 1:15:38 | 0:05:30 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} | 2 | |
fail | 6945832 | 2022-07-22 19:43:47 | 2022-07-22 20:05:29 | 2022-07-22 20:35:04 | 0:29:35 | 0:20:34 | 0:09:01 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} | 2 | |
Failure Reason:
reached maximum tries (90) after waiting for 540 seconds |
||||||||||||||
pass | 6945833 | 2022-07-22 19:43:48 | 2022-07-22 20:08:10 | 2022-07-22 20:38:10 | 0:30:00 | 0:18:38 | 0:11:22 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} | 2 | |
pass | 6945834 | 2022-07-22 19:43:50 | 2022-07-22 20:09:00 | 2022-07-22 20:46:43 | 0:37:43 | 0:25:52 | 0:11:51 | smithi | main | centos | 8.stream | fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} | 3 | |
fail | 6945835 | 2022-07-22 19:43:51 | 2022-07-22 20:10:01 | 2022-07-22 20:29:59 | 0:19:58 | 0:12:43 | 0:07:15 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 scrub/yes standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/suites/fsync-tester}} | 3 | |
Failure Reason:
Command failed on smithi081 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:89768db311950607682ea2bb29f56edc324f86ac shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 78a8b160-09fc-11ed-842f-001a4aab830c -- ceph orch daemon add osd smithi081:vg_nvme/lv_4' |
||||||||||||||
pass | 6945836 | 2022-07-22 19:43:52 | 2022-07-22 20:10:42 | 2022-07-22 20:46:43 | 0:36:01 | 0:24:54 | 0:11:07 | smithi | main | centos | 8.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-limits} | 2 | |
fail | 6945837 | 2022-07-22 19:43:54 | 2022-07-22 20:11:32 | 2022-07-22 20:43:16 | 0:31:44 | 0:25:25 | 0:06:19 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} scrub/yes standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/suites/pjd}} | 3 | |
Failure Reason: Command failed (workunit test suites/pjd.sh) on smithi017 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89768db311950607682ea2bb29f56edc324f86ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
pass | 6945838 | 2022-07-22 19:43:55 | 2022-07-22 20:11:43 | 2022-07-22 20:47:54 | 0:36:11 | 0:29:24 | 0:06:47 | smithi | main | rhel | 8.6 | fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/multifs-auth} | 2 | |
fail | 6945839 | 2022-07-22 19:43:56 | 2022-07-22 20:13:03 | 2022-07-22 20:56:29 | 0:43:26 | 0:35:26 | 0:08:00 | smithi | main | rhel | 8.6 | fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} | 2 | |
Failure Reason: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi149 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89768db311950607682ea2bb29f56edc324f86ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-multiple-capsnaps.sh'
fail | 6945840 | 2022-07-22 19:43:57 | 2022-07-22 20:14:54 | 2022-07-22 21:24:56 | 1:10:02 | 0:58:20 | 0:11:42 | smithi | main | rhel | 8.6 | fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/1 tasks/dbench validater/valgrind} | 2 | |
Failure Reason:
SELinux denials found on ubuntu@smithi146.front.sepia.ceph.com: ['type=AVC msg=audit(1658521476.880:193): avc: denied { node_bind } for pid=1440 comm="ping" saddr=172.21.15.146 scontext=system_u:system_r:ping_t:s0 tcontext=system_u:object_r:node_t:s0 tclass=icmp_socket permissive=1'] |
dead | 6945841 | 2022-07-22 19:43:59 | 2022-07-22 20:15:35 | 2022-07-23 08:29:54 | 12:14:19 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} scrub/yes standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/suites/blogbench}} | 3 | |||
Failure Reason:
hit max job timeout |
fail | 6945842 | 2022-07-22 19:44:00 | 2022-07-22 20:19:49 | 2022-07-22 20:59:12 | 0:39:23 | 0:30:17 | 0:09:06 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-recovery} | 2 | |
Failure Reason:
"2022-07-22T20:40:25.492983+0000 mon.a (mon.0) 634 : cluster [WRN] Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY)" in cluster log |
pass | 6945843 | 2022-07-22 19:44:02 | 2022-07-22 20:21:19 | 2022-07-22 21:07:21 | 0:46:02 | 0:34:24 | 0:11:38 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} scrub/no standby-replay subvolume/{with-no-extra-options} tasks/{0-check-counter workunit/fs/norstats}} | 3 | |
pass | 6945844 | 2022-07-22 19:44:03 | 2022-07-22 20:22:10 | 2022-07-22 20:45:17 | 0:23:07 | 0:17:09 | 0:05:58 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/iozone}} | 2 | |
pass | 6945845 | 2022-07-22 19:44:04 | 2022-07-22 20:22:10 | 2022-07-22 20:45:14 | 0:23:04 | 0:16:48 | 0:06:16 | smithi | main | rhel | 8.6 | fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_pjd}} | 2 | |
pass | 6945846 | 2022-07-22 19:44:05 | 2022-07-22 20:22:21 | 2022-07-22 20:58:00 | 0:35:39 | 0:27:00 | 0:08:39 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} scrub/no standby-replay subvolume/{with-no-extra-options} tasks/{0-check-counter workunit/suites/iozone}} | 3 | |
fail | 6945847 | 2022-07-22 19:44:07 | 2022-07-22 20:24:41 | 2022-07-22 20:57:30 | 0:32:49 | 0:25:16 | 0:07:33 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} scrub/yes standby-replay subvolume/{with-namespace-isolated-and-quota} tasks/{0-check-counter workunit/suites/pjd}} | 3 | |
Failure Reason:
Command failed (workunit test suites/pjd.sh) on smithi135 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89768db311950607682ea2bb29f56edc324f86ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh' |
dead | 6945848 | 2022-07-22 19:44:08 | 2022-07-22 20:26:32 | 2022-07-23 09:38:31 | 13:11:59 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} scrub/yes standby-replay subvolume/{with-quota} tasks/{0-check-counter workunit/suites/blogbench}} | 3 | |||
Failure Reason:
hit max job timeout |
pass | 6945849 | 2022-07-22 19:44:10 | 2022-07-22 20:28:43 | 2022-07-22 22:41:17 | 2:12:34 | 2:00:46 | 0:11:48 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} scrub/no standby-replay subvolume/{with-namespace-isolated-and-quota} tasks/{0-check-counter workunit/suites/dbench}} | 3 | |
fail | 6945850 | 2022-07-22 19:44:11 | 2022-07-22 20:30:04 | 2022-07-22 20:55:15 | 0:25:11 | 0:16:42 | 0:08:29 | smithi | main | ubuntu | 20.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/fragment} | 2 | |
Failure Reason:
"2022-07-22T20:47:34.813571+0000 mon.a (mon.0) 398 : cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log |
pass | 6945851 | 2022-07-22 19:44:12 | 2022-07-22 20:32:04 | 2022-07-22 21:03:34 | 0:31:30 | 0:25:19 | 0:06:11 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} scrub/yes standby-replay subvolume/{with-quota} tasks/{0-check-counter workunit/suites/fsync-tester}} | 3 | |
fail | 6945852 | 2022-07-22 19:44:14 | 2022-07-22 20:32:15 | 2022-07-22 21:33:56 | 1:01:41 | 0:52:25 | 0:09:16 | smithi | main | rhel | 8.6 | fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/failover} | 2 | |
Failure Reason:
"2022-07-22T21:16:31.328016+0000 mon.a (mon.0) 5793 : cluster [ERR] Health check failed: ??? (MDS_DUMMY)" in cluster log |
fail | 6945853 | 2022-07-22 19:44:15 | 2022-07-22 20:35:06 | 2022-07-22 21:10:53 | 0:35:47 | 0:25:59 | 0:09:48 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} scrub/yes standby-replay subvolume/{with-no-extra-options} tasks/{0-check-counter workunit/suites/pjd}} | 3 | |
Failure Reason:
Command failed (workunit test suites/pjd.sh) on smithi107 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89768db311950607682ea2bb29f56edc324f86ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh' |
pass | 6945854 | 2022-07-22 19:44:16 | 2022-07-22 20:37:46 | 2022-07-22 20:58:59 | 0:21:13 | 0:14:26 | 0:06:47 | smithi | main | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/iozone}} | 2 | |
pass | 6945855 | 2022-07-22 19:44:17 | 2022-07-22 20:37:47 | 2022-07-22 21:12:37 | 0:34:50 | 0:27:56 | 0:06:54 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} scrub/no standby-replay subvolume/{with-no-extra-options} tasks/{0-check-counter workunit/direct_io}} | 3 | |
pass | 6945856 | 2022-07-22 19:44:19 | 2022-07-22 20:37:47 | 2022-07-22 21:07:15 | 0:29:28 | 0:22:49 | 0:06:39 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} | 2 | |
dead | 6945857 | 2022-07-22 19:44:20 | 2022-07-22 20:38:18 | 2022-07-23 08:49:49 | 12:11:31 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} scrub/yes standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/suites/blogbench}} | 3 | |||
Failure Reason:
hit max job timeout |
pass | 6945858 | 2022-07-22 19:44:22 | 2022-07-22 20:38:28 | 2022-07-22 21:02:48 | 0:24:20 | 0:12:03 | 0:12:17 | smithi | main | rhel | 8.6 | fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{rhel_8} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/libcephfs_python} | 2 | |
fail | 6945859 | 2022-07-22 19:44:23 | 2022-07-22 20:43:20 | 2022-07-22 21:14:35 | 0:31:15 | 0:23:07 | 0:08:08 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} | 2 | |
Failure Reason:
Command failed on smithi066 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
pass | 6945860 | 2022-07-22 19:44:24 | 2022-07-22 20:44:10 | 2022-07-22 21:24:47 | 0:40:37 | 0:28:53 | 0:11:44 | smithi | main | ubuntu | 20.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/metrics} | 2 | |
pass | 6945861 | 2022-07-22 19:44:25 | 2022-07-22 20:45:21 | 2022-07-22 21:14:18 | 0:28:57 | 0:17:37 | 0:11:20 | smithi | main | centos | 8.stream | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} | 2 | |
fail | 6945862 | 2022-07-22 19:44:27 | 2022-07-22 20:45:21 | 2022-07-22 21:19:21 | 0:34:00 | 0:25:11 | 0:08:49 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} scrub/yes standby-replay subvolume/{with-namespace-isolated-and-quota} tasks/{0-check-counter workunit/suites/pjd}} | 3 | |
Failure Reason:
Command failed (workunit test suites/pjd.sh) on smithi027 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89768db311950607682ea2bb29f56edc324f86ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh' |
fail | 6945863 | 2022-07-22 19:44:28 | 2022-07-22 20:46:52 | 2022-07-22 23:48:16 | 3:01:24 | 2:51:00 | 0:10:24 | smithi | main | centos | 8.stream | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 | |
Failure Reason:
"2022-07-22T21:39:19.957241+0000 mds.e (mds.0) 1 : cluster [WRN] client.4589 isn't responding to mclientcaps(revoke), ino 0x100000071e6 pending pFc issued pFcb, sent 300.251653 seconds ago" in cluster log |
fail | 6945864 | 2022-07-22 19:44:30 | 2022-07-22 20:46:52 | 2022-07-22 23:09:11 | 2:22:19 | 2:10:21 | 0:11:58 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} scrub/no standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/suites/dbench}} | 3 | |
Failure Reason:
Command failed on smithi003 with status 13: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs dump --format=json' |
pass | 6945865 | 2022-07-22 19:44:31 | 2022-07-22 20:48:03 | 2022-07-22 21:13:23 | 0:25:20 | 0:15:43 | 0:09:37 | smithi | main | centos | 8.stream | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/iozone}} | 2 | |
fail | 6945866 | 2022-07-22 19:44:33 | 2022-07-22 20:48:23 | 2022-07-22 21:22:59 | 0:34:36 | 0:23:40 | 0:10:56 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 scrub/yes standby-replay subvolume/{with-namespace-isolated-and-quota} tasks/{0-check-counter workunit/suites/pjd}} | 3 | |
Failure Reason:
Command failed on smithi139 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:89768db311950607682ea2bb29f56edc324f86ac shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3c7bdea0-0a02-11ed-842f-001a4aab830c -- ceph orch daemon add osd smithi139:vg_nvme/lv_1' |
dead | 6945867 | 2022-07-22 19:44:34 | 2022-07-22 20:52:14 | 2022-07-23 11:34:46 | 14:42:32 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} scrub/yes standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/suites/blogbench}} | 3 | |||
Failure Reason:
hit max job timeout |
fail | 6945868 | 2022-07-22 19:44:36 | 2022-07-22 20:56:35 | 2022-07-23 00:45:03 | 3:48:28 | 3:36:35 | 0:11:53 | smithi | main | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 | |
Failure Reason:
"2022-07-22T21:47:02.891801+0000 mon.a (mon.0) 3431 : cluster [WRN] Health check failed: 1 MDSs behind on trimming (MDS_TRIM)" in cluster log |
pass | 6945869 | 2022-07-22 19:44:37 | 2022-07-22 20:57:36 | 2022-07-22 21:50:35 | 0:52:59 | 0:47:17 | 0:05:42 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snap-schedule} | 2 | |
pass | 6945870 | 2022-07-22 19:44:39 | 2022-07-22 20:57:37 | 2022-07-22 21:21:24 | 0:23:47 | 0:16:37 | 0:07:10 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/iozone}} | 2 |