User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
pdonnell | 2021-07-04 02:32:34 | 2021-07-04 02:33:32 | 2021-07-04 14:45:19 | 12:11:47 | fs | wip-pdonnell-testing-20210703.052904 | smithi | d0c859d | 30 | 16 | 2 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6251264 | | 2021-07-04 02:32:40 | 2021-07-04 02:33:26 | 2021-07-04 03:06:53 | 0:33:27 | 0:17:08 | 0:16:19 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{yes}} | 3 |
fail | 6251265 | | 2021-07-04 02:32:41 | 2021-07-04 02:33:26 | 2021-07-04 03:06:47 | 0:33:21 | 0:22:53 | 0:10:28 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{yes}} | 3 |
Failure Reason: "2021-07-04T02:51:55.609588+0000 mon.a (mon.0) 145 : cluster [WRN] Health check failed: Degraded data redundancy: 1/4 objects degraded (25.000%), 1 pg degraded (PG_DEGRADED)" in cluster log
fail | 6251266 | | 2021-07-04 02:32:42 | 2021-07-04 02:33:27 | 2021-07-04 03:43:31 | 1:10:04 | 0:56:40 | 0:13:24 | smithi | master | centos | 8.stream | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{yes}} | 3 |
Failure Reason: The following counters failed to be set on mds daemons: {'mds.imported', 'mds.exported'}
pass | 6251267 | | 2021-07-04 02:32:43 | 2021-07-04 02:33:27 | 2021-07-04 03:02:58 | 0:29:31 | 0:16:05 | 0:13:26 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{yes}} | 3 |
pass | 6251268 | | 2021-07-04 02:32:43 | 2021-07-04 02:33:27 | 2021-07-04 03:53:31 | 1:20:04 | 1:04:29 | 0:15:35 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{yes}} | 3 |
pass | 6251269 | | 2021-07-04 02:32:44 | 2021-07-04 02:33:27 | 2021-07-04 03:06:56 | 0:33:29 | 0:19:33 | 0:13:56 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} | 3 |
pass | 6251270 | | 2021-07-04 02:32:45 | 2021-07-04 02:33:28 | 2021-07-04 03:27:03 | 0:53:35 | 0:39:54 | 0:13:41 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} | 3 |
fail | 6251271 | | 2021-07-04 02:32:46 | 2021-07-04 02:33:28 | 2021-07-04 03:31:46 | 0:58:18 | 0:45:55 | 0:12:23 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{no}} | 3 |
Failure Reason: "2021-07-04T02:54:36.218020+0000 mon.a (mon.0) 149 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log
fail | 6251272 | | 2021-07-04 02:32:47 | 2021-07-04 02:33:28 | 2021-07-04 03:12:18 | 0:38:50 | 0:20:48 | 0:18:02 | smithi | master | centos | 8.3 | fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mds-full} | 2 |
Failure Reason: Test failure: test_full_fsync (tasks.cephfs.test_full.TestClusterFull)
fail | 6251273 | | 2021-07-04 02:32:48 | 2021-07-04 02:33:29 | 2021-07-04 03:33:02 | 0:59:33 | 0:44:37 | 0:14:56 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{yes}} | 3 |
Failure Reason: Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi068 with status 23: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d0c859d990c131d38934cc45467367c8d5eab1ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/multiple_rsync.sh'
fail | 6251274 | | 2021-07-04 02:32:49 | 2021-07-04 02:33:30 | 2021-07-04 04:14:49 | 1:41:19 | 1:27:51 | 0:13:28 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{no}} | 3 |
Failure Reason: "2021-07-04T02:56:38.886440+0000 mon.a (mon.0) 137 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log
pass | 6251275 | | 2021-07-04 02:32:50 | 2021-07-04 02:33:30 | 2021-07-04 03:59:46 | 1:26:16 | 1:10:08 | 0:16:08 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{no}} | 3 |
fail | 6251276 | | 2021-07-04 02:32:51 | 2021-07-04 02:33:31 | 2021-07-04 03:21:28 | 0:47:57 | 0:34:28 | 0:13:29 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{yes}} | 3 |
Failure Reason: "2021-07-04T02:59:21.190880+0000 mon.a (mon.0) 146 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log
pass | 6251277 | | 2021-07-04 02:32:51 | 2021-07-04 02:33:31 | 2021-07-04 04:04:06 | 1:30:35 | 1:20:21 | 0:10:14 | smithi | master | rhel | 8.4 | fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag_enable multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} | 2 |
fail | 6251278 | | 2021-07-04 02:32:52 | 2021-07-04 02:33:32 | 2021-07-04 03:06:01 | 0:32:29 | 0:18:43 | 0:13:46 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{no}} | 3 |
Failure Reason: "2021-07-04T02:58:59.591549+0000 mon.a (mon.0) 146 : cluster [WRN] Health check failed: Degraded data redundancy: 1/4 objects degraded (25.000%), 1 pg degraded (PG_DEGRADED)" in cluster log
pass | 6251279 | | 2021-07-04 02:32:53 | 2021-07-04 02:33:32 | 2021-07-04 02:56:25 | 0:22:53 | 0:15:49 | 0:07:04 | smithi | master | rhel | 8.4 | fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} | 2 |
pass | 6251280 | | 2021-07-04 02:32:54 | 2021-07-04 02:33:33 | 2021-07-04 03:01:57 | 0:28:24 | 0:16:22 | 0:12:02 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{no}} | 3 |
pass | 6251281 | | 2021-07-04 02:32:55 | 2021-07-04 02:33:33 | 2021-07-04 04:20:05 | 1:46:32 | 1:28:40 | 0:17:52 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{yes}} | 3 |
pass | 6251282 | | 2021-07-04 02:32:56 | 2021-07-04 02:33:33 | 2021-07-04 03:16:30 | 0:42:57 | 0:32:37 | 0:10:20 | smithi | master | rhel | 8.4 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} | 3 |
dead | 6251283 | | 2021-07-04 02:32:57 | 2021-07-04 02:33:33 | 2021-07-04 14:45:19 | 12:11:46 | | | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{no}} | 3 |
Failure Reason: hit max job timeout
fail | 6251284 | | 2021-07-04 02:32:58 | 2021-07-04 02:33:34 | 2021-07-04 03:40:04 | 1:06:30 | 0:56:10 | 0:10:20 | smithi | master | rhel | 8.4 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{no}} | 3 |
Failure Reason: Command failed (workunit test suites/ffsb.sh) on smithi101 with status 135: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d0c859d990c131d38934cc45467367c8d5eab1ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'
fail | 6251285 | | 2021-07-04 02:32:59 | 2021-07-04 02:33:35 | 2021-07-04 03:09:38 | 0:36:03 | 0:21:21 | 0:14:42 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{yes}} | 3 |
Failure Reason: "2021-07-04T02:59:38.446814+0000 mon.a (mon.0) 145 : cluster [WRN] Health check failed: Degraded data redundancy: 1/4 objects degraded (25.000%), 1 pg degraded (PG_DEGRADED)" in cluster log
fail | 6251286 | | 2021-07-04 02:33:00 | 2021-07-04 02:33:35 | 2021-07-04 03:07:21 | 0:33:46 | 0:17:35 | 0:16:11 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{yes}} | 3 |
Failure Reason: "2021-07-04T03:00:37.275568+0000 mon.a (mon.0) 146 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log
fail | 6251287 | | 2021-07-04 02:33:01 | 2021-07-04 02:33:36 | 2021-07-04 03:09:15 | 0:35:39 | 0:29:19 | 0:06:20 | smithi | master | rhel | 8.4 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} | 3 |
Failure Reason: "2021-07-04T02:53:30.658293+0000 mds.l (mds.0) 19 : cluster [WRN] Scrub error on inode 0x10000000a64 (/client.0/tmp/blogbench-1.0/src/blogtest_in/blog-10) see mds.l log and `damage ls` output for details" in cluster log
pass | 6251288 | | 2021-07-04 02:33:02 | 2021-07-04 02:33:36 | 2021-07-04 04:05:05 | 1:31:29 | 1:17:33 | 0:13:56 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{no}} | 3 |
fail | 6251289 | | 2021-07-04 02:33:03 | 2021-07-04 02:33:37 | 2021-07-04 03:00:05 | 0:26:28 | 0:14:23 | 0:12:05 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{yes}} | 3 |
Failure Reason: "2021-07-04T02:55:09.367435+0000 mon.a (mon.0) 152 : cluster [WRN] Health check failed: Degraded data redundancy: 1/4 objects degraded (25.000%), 1 pg degraded (PG_DEGRADED)" in cluster log
pass | 6251290 | | 2021-07-04 02:33:04 | 2021-07-04 02:33:38 | 2021-07-04 03:10:55 | 0:37:17 | 0:26:23 | 0:10:54 | smithi | master | rhel | 8.4 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{yes}} | 3 |
fail | 6251291 | | 2021-07-04 02:33:05 | 2021-07-04 02:33:38 | 2021-07-04 03:02:27 | 0:28:49 | 0:16:22 | 0:12:27 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{yes}} | 3 |
Failure Reason: "2021-07-04T02:56:04.788685+0000 mon.a (mon.0) 172 : cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
fail | 6251292 | | 2021-07-04 02:33:06 | 2021-07-04 02:33:38 | 2021-07-04 03:32:48 | 0:59:10 | 0:45:39 | 0:13:31 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} | 3 |
Failure Reason: Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi059 with status 23: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d0c859d990c131d38934cc45467367c8d5eab1ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/multiple_rsync.sh'
pass | 6251293 | | 2021-07-04 02:33:07 | 2021-07-04 02:33:39 | 2021-07-04 02:56:55 | 0:23:16 | 0:12:56 | 0:10:20 | smithi | master | rhel | 8.4 | fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/backtrace} | 2 |
pass | 6251294 | | 2021-07-04 02:33:07 | 2021-07-04 02:33:39 | 2021-07-04 02:56:58 | 0:23:19 | 0:14:01 | 0:09:18 | smithi | master | rhel | 8.4 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{no}} | 3 |
pass | 6251295 | | 2021-07-04 02:33:08 | 2021-07-04 02:33:40 | 2021-07-04 03:04:46 | 0:31:06 | 0:16:04 | 0:15:02 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{no}} | 3 |
dead | 6251296 | | 2021-07-04 02:33:09 | 2021-07-04 02:33:40 | 2021-07-04 14:44:32 | 12:10:52 | | | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{no}} | 3 |
Failure Reason: hit max job timeout
pass | 6251297 | | 2021-07-04 02:33:10 | 2021-07-04 02:33:41 | 2021-07-04 03:03:14 | 0:29:33 | 0:17:48 | 0:11:45 | smithi | master | rhel | 8.4 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{no}} | 3 |
pass | 6251298 | | 2021-07-04 02:33:11 | 2021-07-04 02:33:41 | 2021-07-04 03:01:16 | 0:27:35 | 0:13:51 | 0:13:44 | smithi | master | centos | 8.3 | fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_misc} | 2 |
pass | 6251299 | | 2021-07-04 02:33:12 | 2021-07-04 02:33:43 | 2021-07-04 03:03:53 | 0:30:10 | 0:19:46 | 0:10:24 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{no}} | 3 |
pass | 6251300 | | 2021-07-04 02:33:13 | 2021-07-04 02:33:43 | 2021-07-04 03:19:29 | 0:45:46 | 0:30:40 | 0:15:06 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{no}} | 3 |
pass | 6251301 | | 2021-07-04 02:33:14 | 2021-07-04 02:33:43 | 2021-07-04 03:57:53 | 1:24:10 | 1:17:21 | 0:06:49 | smithi | master | rhel | 8.4 | fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} | 2 |
fail | 6251302 | | 2021-07-04 02:33:15 | 2021-07-04 02:33:45 | 2021-07-04 03:29:13 | 0:55:28 | 0:42:57 | 0:12:31 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} | 3 |
Failure Reason: Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi079 with status 23: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d0c859d990c131d38934cc45467367c8d5eab1ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/multiple_rsync.sh'
pass | 6251303 | | 2021-07-04 02:33:16 | 2021-07-04 02:33:45 | 2021-07-04 04:06:28 | 1:32:43 | 1:23:37 | 0:09:06 | smithi | master | rhel | 8.4 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{yes}} | 3 |
pass | 6251304 | | 2021-07-04 02:33:17 | 2021-07-04 02:33:45 | 2021-07-04 03:53:38 | 1:19:53 | 1:06:50 | 0:13:03 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{no}} | 3 |
pass | 6251305 | | 2021-07-04 02:33:18 | 2021-07-04 02:33:45 | 2021-07-04 03:09:58 | 0:36:13 | 0:22:30 | 0:13:43 | smithi | master | centos | 8.3 | fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/fragment} | 2 |
pass | 6251306 | | 2021-07-04 02:33:18 | 2021-07-04 02:33:46 | 2021-07-04 02:59:14 | 0:25:28 | 0:15:31 | 0:09:57 | smithi | master | centos | 8.3 | fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} | 2 |
pass | 6251307 | | 2021-07-04 02:33:19 | 2021-07-04 02:33:46 | 2021-07-04 03:05:01 | 0:31:15 | 0:17:41 | 0:13:34 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 replication/default scrub/no standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{yes}} | 3 |
pass | 6251308 | | 2021-07-04 02:33:20 | 2021-07-04 02:33:47 | 2021-07-04 03:07:13 | 0:33:26 | 0:18:05 | 0:15:21 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{yes}} | 3 |
pass | 6251309 | | 2021-07-04 02:33:21 | 2021-07-04 02:33:48 | 2021-07-04 03:03:01 | 0:29:13 | 0:19:04 | 0:10:09 | smithi | master | ubuntu | 20.04 | fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/iozone}} | 2 |
pass | 6251310 | | 2021-07-04 02:33:22 | 2021-07-04 02:33:48 | 2021-07-04 03:05:13 | 0:31:25 | 0:17:03 | 0:14:22 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 replication/always scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{yes}} | 3 |
pass | 6251311 | | 2021-07-04 02:33:23 | 2021-07-04 02:33:48 | 2021-07-04 03:07:59 | 0:34:11 | 0:20:40 | 0:13:31 | smithi | master | ubuntu | 20.04 | fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/journal-repair} | 2 |