User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
vshankar | 2022-04-03 07:59:05 | 2022-04-03 08:00:40 | 2022-04-04 04:59:20 | 20:58:40 | fs | wip-55110-for-testing | smithi | cc9ed50 | 8 | 55 | 14 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6775197 | 2022-04-03 07:59:12 | 2022-04-03 08:00:19 | 2022-04-03 08:23:46 | 0:23:27 | 0:13:16 | 0:10:11 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{no}} | 3 | |
Failure Reason:
Command failed on smithi052 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 16a34984-b327-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.52 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
fail | 6775198 | 2022-04-03 07:59:13 | 2022-04-03 08:00:40 | 2022-04-03 08:59:50 | 0:59:10 | 0:49:09 | 0:10:01 | smithi | master | ubuntu | 20.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/snap-schedule} | 2 | |
Failure Reason:
"2022-04-03T08:20:26.417708+0000 mon.a (mon.0) 345 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log |
pass | 6775199 | 2022-04-03 07:59:13 | 2022-04-03 08:00:40 | 2022-04-03 08:59:43 | 0:59:03 | 0:48:42 | 0:10:21 | smithi | master | centos | 8.stream | fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host} | 2 | |
fail | 6775200 | 2022-04-03 07:59:14 | 2022-04-03 08:00:41 | 2022-04-03 08:30:36 | 0:29:55 | 0:14:00 | 0:15:55 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{no}} | 3 | |
Failure Reason:
Command failed on smithi059 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid dca85016-b327-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.59 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
fail | 6775201 | 2022-04-03 07:59:15 | 2022-04-03 08:05:42 | 2022-04-03 08:42:20 | 0:36:38 | 0:25:12 | 0:11:26 | smithi | master | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} | 2 | |
Failure Reason:
Command failed (workunit test fs/snaps/snaptest-realm-split.sh) on smithi027 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cc9ed509562d767c1a5d8fc115593bc1453dfea1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-realm-split.sh' |
fail | 6775202 | 2022-04-03 07:59:16 | 2022-04-03 08:06:02 | 2022-04-03 08:29:44 | 0:23:42 | 0:12:46 | 0:10:56 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{yes}} | 3 | |
Failure Reason:
Command failed on smithi141 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid d9f4734a-b327-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.141 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
dead | 6775203 | 2022-04-03 07:59:17 | 2022-04-03 08:06:43 | 2022-04-03 14:45:49 | 6:39:06 | | | smithi | master | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
hit max job timeout |
fail | 6775204 | 2022-04-03 07:59:17 | 2022-04-03 08:07:54 | 2022-04-03 08:34:28 | 0:26:34 | 0:13:31 | 0:13:03 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{no}} | 3 | |
Failure Reason:
Command failed on smithi018 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 5ea3cd2a-b328-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.18 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
fail | 6775205 | 2022-04-03 07:59:18 | 2022-04-03 08:08:24 | 2022-04-03 08:44:01 | 0:35:37 | 0:26:18 | 0:09:19 | smithi | master | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} | 2 | |
Failure Reason:
Command failed (workunit test fs/snaps/snaptest-realm-split.sh) on smithi022 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cc9ed509562d767c1a5d8fc115593bc1453dfea1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-realm-split.sh' |
fail | 6775206 | 2022-04-03 07:59:19 | 2022-04-03 08:08:35 | 2022-04-03 08:32:06 | 0:23:31 | 0:13:08 | 0:10:23 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{yes}} | 3 | |
Failure Reason:
Command failed on smithi037 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 2d353e5e-b328-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.37 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
fail | 6775207 | 2022-04-03 07:59:20 | 2022-04-03 08:09:15 | 2022-04-03 08:36:59 | 0:27:44 | 0:13:35 | 0:14:09 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{no}} | 3 | |
Failure Reason:
Command failed on smithi029 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid cca62822-b328-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.29 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
fail | 6775208 | 2022-04-03 07:59:21 | 2022-04-03 08:11:56 | 2022-04-03 08:46:34 | 0:34:38 | 0:23:01 | 0:11:37 | smithi | master | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} | 2 | |
Failure Reason:
Command failed (workunit test fs/snaps/snaptest-realm-split.sh) on smithi106 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cc9ed509562d767c1a5d8fc115593bc1453dfea1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-realm-split.sh' |
fail | 6775209 | 2022-04-03 07:59:22 | 2022-04-03 08:13:07 | 2022-04-03 08:39:06 | 0:25:59 | 0:13:11 | 0:12:48 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{yes}} | 3 | |
Failure Reason:
Command failed on smithi066 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 3b01cc04-b329-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.66 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
fail | 6775210 | 2022-04-03 07:59:22 | 2022-04-03 08:15:17 | 2022-04-03 08:43:50 | 0:28:33 | 0:15:26 | 0:13:07 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{no}} | 3 | |
Failure Reason:
Command failed on smithi005 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 7d16ed72-b329-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.5 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
fail | 6775211 | 2022-04-03 07:59:23 | 2022-04-03 08:15:58 | 2022-04-03 08:50:39 | 0:34:41 | 0:22:44 | 0:11:57 | smithi | master | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 | |
Failure Reason:
Command failed (workunit test fs/snaps/snaptest-realm-split.sh) on smithi007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cc9ed509562d767c1a5d8fc115593bc1453dfea1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-realm-split.sh' |
fail | 6775212 | 2022-04-03 07:59:24 | 2022-04-03 08:17:18 | 2022-04-03 08:40:29 | 0:23:11 | 0:13:39 | 0:09:32 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{no}} | 3 | |
Failure Reason:
Command failed on smithi017 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 73e03cc2-b329-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.17 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
dead | 6775213 | 2022-04-03 07:59:25 | 2022-04-03 08:17:29 | 2022-04-03 15:01:15 | 6:43:46 | | | smithi | master | centos | 8.stream | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} | 2 | |
Failure Reason:
hit max job timeout |
fail | 6775214 | 2022-04-03 07:59:26 | 2022-04-03 08:18:39 | 2022-04-03 08:41:56 | 0:23:17 | 0:13:21 | 0:09:56 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{no}} | 3 | |
Failure Reason:
Command failed on smithi045 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid b46cc1f2-b329-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.45 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
fail | 6775215 | 2022-04-03 07:59:27 | 2022-04-03 08:18:50 | 2022-04-03 08:44:50 | 0:26:00 | 0:13:09 | 0:12:51 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{no}} | 3 | |
Failure Reason:
Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid f52d0bb6-b329-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.38 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
dead | 6775216 | 2022-04-03 07:59:27 | 2022-04-03 08:21:50 | 2022-04-04 04:58:52 | 20:37:02 | | | smithi | master | centos | 8.stream | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} | 2 | |
Failure Reason:
hit max job timeout |
fail | 6775217 | 2022-04-03 07:59:28 | 2022-04-03 08:22:31 | 2022-04-03 08:48:24 | 0:25:53 | 0:15:01 | 0:10:52 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{yes}} | 3 | |
Failure Reason:
Command failed on smithi025 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 5924e8aa-b32a-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.25 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
dead | 6775218 | 2022-04-03 07:59:29 | 2022-04-03 08:22:42 | 2022-04-03 15:01:38 | 6:38:56 | | | smithi | master | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
hit max job timeout |
fail | 6775219 | 2022-04-03 07:59:30 | 2022-04-03 08:23:22 | 2022-04-03 09:09:11 | 0:45:49 | 0:34:56 | 0:10:53 | smithi | master | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 | |
Failure Reason:
Command failed (workunit test fs/snaps/snaptest-realm-split.sh) on smithi114 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cc9ed509562d767c1a5d8fc115593bc1453dfea1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-realm-split.sh' |
pass | 6775220 | 2022-04-03 07:59:31 | 2022-04-03 08:23:52 | 2022-04-03 09:42:34 | 1:18:42 | 1:08:09 | 0:10:33 | smithi | master | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} | 2 | |
fail | 6775221 | 2022-04-03 07:59:32 | 2022-04-03 08:24:43 | 2022-04-03 08:48:55 | 0:24:12 | 0:13:36 | 0:10:36 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{yes}} | 3 | |
Failure Reason:
Command failed on smithi008 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 9b87f156-b32a-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.8 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
fail | 6775222 | 2022-04-03 07:59:32 | 2022-04-03 08:25:53 | 2022-04-03 08:49:21 | 0:23:28 | 0:13:21 | 0:10:07 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{yes}} | 3 | |
Failure Reason:
Command failed on smithi016 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid b4ad9726-b32a-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.16 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
fail | 6775223 | 2022-04-03 07:59:33 | 2022-04-03 08:26:14 | 2022-04-03 08:50:08 | 0:23:54 | 0:13:12 | 0:10:42 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{yes}} | 3 | |
Failure Reason:
Command failed on smithi055 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid c97ffc52-b32a-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.55 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
fail | 6775224 | 2022-04-03 07:59:34 | 2022-04-03 08:27:14 | 2022-04-03 08:54:53 | 0:27:39 | 0:13:49 | 0:13:50 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{legacy} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{yes}} | 3 | |
Failure Reason:
Command failed on smithi141 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 530ba80e-b32b-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.141 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
dead | 6775225 | 2022-04-03 07:59:35 | 2022-04-03 08:29:55 | 2022-04-03 15:13:43 | 6:43:48 | | | smithi | master | rhel | 8.4 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} | 2 | |
Failure Reason:
hit max job timeout |
fail | 6775226 | 2022-04-03 07:59:36 | 2022-04-03 08:29:55 | 2022-04-03 08:55:47 | 0:25:52 | 0:13:50 | 0:12:02 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{yes}} | 3 | |
Failure Reason:
Command failed on smithi059 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 58b5c780-b32b-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.59 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring' |
fail | 6775227 | 2022-04-03 07:59:37 | 2022-04-03 08:30:46 | 2022-04-03 08:57:04 | 0:26:18 | 0:13:20 | 0:12:58 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{legacy} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{no}} | 3 | |
Failure Reason: Command failed on smithi037 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 9b2dde5e-b32b-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.37 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
dead | 6775228 | 2022-04-03 07:59:37 | 2022-04-03 08:32:16 | 2022-04-03 15:09:51 | 6:37:35 | | | smithi | master | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: hit max job timeout
dead | 6775229 | 2022-04-03 07:59:38 | 2022-04-03 08:32:17 | 2022-04-04 04:59:19 | 20:27:02 | | | smithi | master | rhel | 8.4 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} | 2 | |
Failure Reason: hit max job timeout
fail | 6775230 | 2022-04-03 07:59:39 | 2022-04-03 08:34:37 | 2022-04-03 08:59:43 | 0:25:06 | 0:13:13 | 0:11:53 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{no}} | 3 | |
Failure Reason: Command failed on smithi122 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 18059dea-b32c-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.122 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6775231 | 2022-04-03 07:59:40 | 2022-04-03 08:36:48 | 2022-04-03 09:02:03 | 0:25:15 | 0:14:04 | 0:11:11 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{yes}} | 3 | |
Failure Reason: Command failed on smithi029 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 5790ed3e-b32c-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.29 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
pass | 6775232 | 2022-04-03 07:59:41 | 2022-04-03 08:37:08 | 2022-04-03 08:58:19 | 0:21:11 | 0:14:16 | 0:06:55 | smithi | master | rhel | 8.4 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/pjd}} | 2 | |
fail | 6775233 | 2022-04-03 07:59:42 | 2022-04-03 08:37:19 | 2022-04-03 09:04:54 | 0:27:35 | 0:13:23 | 0:14:12 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{no}} | 3 | |
Failure Reason: Command failed on smithi066 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid ba90d0b6-b32c-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.66 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6775234 | 2022-04-03 07:59:42 | 2022-04-03 08:39:09 | 2022-04-03 09:05:03 | 0:25:54 | 0:13:53 | 0:12:01 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{yes}} | 3 | |
Failure Reason: Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid d0f42bd2-b32c-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.6 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
pass | 6775235 | 2022-04-03 07:59:43 | 2022-04-03 08:40:10 | 2022-04-03 09:29:14 | 0:49:04 | 0:38:45 | 0:10:19 | smithi | master | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
dead | 6775236 | 2022-04-03 07:59:44 | 2022-04-03 08:40:30 | 2022-04-04 04:59:20 | 20:18:50 | | | smithi | master | rhel | 8.4 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 | |
Failure Reason: hit max job timeout
fail | 6775237 | 2022-04-03 07:59:45 | 2022-04-03 08:42:01 | 2022-04-03 09:06:01 | 0:24:00 | 0:13:24 | 0:10:36 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{no}} | 3 | |
Failure Reason: Command failed on smithi027 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 0c4d8264-b32d-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.27 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6775238 | 2022-04-03 07:59:46 | 2022-04-03 08:42:21 | 2022-04-03 09:19:56 | 0:37:35 | 0:25:04 | 0:12:31 | smithi | master | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} | 2 | |
Failure Reason: Command failed (workunit test fs/snaps/snaptest-realm-split.sh) on smithi049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cc9ed509562d767c1a5d8fc115593bc1453dfea1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-realm-split.sh'
fail | 6775239 | 2022-04-03 07:59:47 | 2022-04-03 08:43:52 | 2022-04-03 09:08:50 | 0:24:58 | 0:14:25 | 0:10:33 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} | 3 | |
Failure Reason: Command failed on smithi005 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 3b23c788-b32d-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.5 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6775240 | 2022-04-03 07:59:48 | 2022-04-03 08:44:12 | 2022-04-03 09:07:46 | 0:23:34 | 0:13:53 | 0:09:41 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{no}} | 3 | |
Failure Reason: Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 33a97796-b32d-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.38 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
dead | 6775241 | 2022-04-03 07:59:48 | 2022-04-03 08:44:53 | 2022-04-04 04:59:19 | 20:14:26 | | | smithi | master | centos | 8.stream | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} | 2 | |
Failure Reason: hit max job timeout
fail | 6775242 | 2022-04-03 07:59:49 | 2022-04-03 08:46:43 | 2022-04-03 09:15:59 | 0:29:16 | 0:20:39 | 0:08:37 | smithi | master | rhel | 8.4 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/metrics} | 2 | |
Failure Reason: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
fail | 6775243 | 2022-04-03 07:59:50 | 2022-04-03 08:48:34 | 2022-04-03 09:13:59 | 0:25:25 | 0:13:32 | 0:11:53 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} | 3 | |
Failure Reason: Command failed on smithi008 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid ea2db89c-b32d-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.8 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
pass | 6775244 | 2022-04-03 07:59:51 | 2022-04-03 08:49:04 | 2022-04-03 09:17:58 | 0:28:54 | 0:21:48 | 0:07:06 | smithi | master | rhel | 8.4 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} | 2 | |
dead | 6775245 | 2022-04-03 07:59:52 | 2022-04-03 08:49:05 | 2022-04-03 15:27:02 | 6:37:57 | | | smithi | master | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: hit max job timeout
fail | 6775246 | 2022-04-03 07:59:53 | 2022-04-03 08:49:25 | 2022-04-03 09:12:45 | 0:23:20 | 0:13:20 | 0:10:00 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} | 3 | |
Failure Reason: Command failed on smithi052 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid f6215730-b32d-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.52 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6775247 | 2022-04-03 07:59:54 | 2022-04-03 08:49:45 | 2022-04-03 09:15:14 | 0:25:29 | 0:13:58 | 0:11:31 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{yes}} | 3 | |
Failure Reason: Command failed on smithi055 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 3a822d28-b32e-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.55 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
pass | 6775248 | 2022-04-03 07:59:55 | 2022-04-03 08:50:16 | 2022-04-03 09:11:33 | 0:21:17 | 0:14:52 | 0:06:25 | smithi | master | rhel | 8.4 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/pjd}} | 2 | |
dead | 6775249 | 2022-04-03 07:59:55 | 2022-04-03 08:50:46 | 2022-04-03 15:36:36 | 6:45:50 | | | smithi | master | rhel | 8.4 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} | 2 | |
Failure Reason: hit max job timeout
fail | 6775250 | 2022-04-03 07:59:56 | 2022-04-03 08:54:57 | 2022-04-03 09:18:44 | 0:23:47 | 0:13:22 | 0:10:25 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{no}} | 3 | |
Failure Reason: Command failed on smithi059 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid c5635d0e-b32e-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.59 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6775251 | 2022-04-03 07:59:57 | 2022-04-03 08:55:58 | 2022-04-03 09:21:56 | 0:25:58 | 0:13:55 | 0:12:03 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{yes}} | 3 | |
Failure Reason: Command failed on smithi037 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 030bc3c6-b32f-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.37 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6775252 | 2022-04-03 07:59:58 | 2022-04-03 08:57:08 | 2022-04-03 09:21:35 | 0:24:27 | 0:12:38 | 0:11:49 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{yes}} | 3 | |
Failure Reason: Command failed on smithi125 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 11670b7e-b32f-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.125 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6775253 | 2022-04-03 07:59:59 | 2022-04-03 08:58:29 | 2022-04-03 09:24:41 | 0:26:12 | 0:13:29 | 0:12:43 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{yes}} | 3 | |
Failure Reason: Command failed on smithi002 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 75004542-b32f-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.2 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
dead | 6775254 | 2022-04-03 08:00:00 | 2022-04-03 08:59:49 | 2022-04-03 15:42:57 | 6:43:08 | | | smithi | master | rhel | 8.4 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} | 2 | |
Failure Reason: hit max job timeout
fail | 6775255 | 2022-04-03 08:00:01 | 2022-04-03 08:59:50 | 2022-04-03 09:23:39 | 0:23:49 | 0:13:23 | 0:10:26 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} | 3 | |
Failure Reason: Command failed on smithi026 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 63fcb640-b32f-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.26 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6775256 | 2022-04-03 08:00:01 | 2022-04-03 09:00:00 | 2022-04-03 10:01:27 | 1:01:27 | 0:49:14 | 0:12:13 | smithi | master | ubuntu | 20.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/snap-schedule} | 2 | |
Failure Reason: "2022-04-03T09:22:43.548575+0000 mon.a (mon.0) 376 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
dead | 6775257 | 2022-04-03 08:00:02 | 2022-04-03 09:02:11 | 2022-04-03 15:41:09 | 6:38:58 | | | smithi | master | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: hit max job timeout
fail | 6775258 | 2022-04-03 08:00:03 | 2022-04-03 09:02:11 | 2022-04-03 09:30:43 | 0:28:32 | 0:13:42 | 0:14:50 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{no}} | 3 | |
Failure Reason: Command failed on smithi066 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 3d902f54-b330-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.66 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6775259 | 2022-04-03 08:00:04 | 2022-04-03 09:05:02 | 2022-04-03 09:38:22 | 0:33:20 | 0:23:47 | 0:09:33 | smithi | master | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} | 2 | |
Failure Reason: Command failed (workunit test fs/snaps/snaptest-realm-split.sh) on smithi006 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cc9ed509562d767c1a5d8fc115593bc1453dfea1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-realm-split.sh'
fail | 6775260 | 2022-04-03 08:00:05 | 2022-04-03 09:05:12 | 2022-04-03 09:29:45 | 0:24:33 | 0:13:32 | 0:11:01 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{yes}} | 3 | |
Failure Reason: Command failed on smithi027 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 4f45afe4-b330-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.27 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6775261 | 2022-04-03 08:00:06 | 2022-04-03 09:06:02 | 2022-04-03 09:32:45 | 0:26:43 | 0:13:25 | 0:13:18 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{no}} | 3 | |
Failure Reason:
Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 86c8d5f4-b330-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.38 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
pass | 6775262 | 2022-04-03 08:00:07 | 2022-04-03 09:07:53 | 2022-04-03 09:57:46 | 0:49:53 | 0:37:20 | 0:12:33 | smithi | master | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6775263 | 2022-04-03 08:00:07 | 2022-04-03 09:08:53 | 2022-04-03 09:32:06 | 0:23:13 | 0:13:14 | 0:09:59 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{no}} | 3 | |
Failure Reason:
Command failed on smithi005 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid b6af8470-b330-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.5 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6775264 | 2022-04-03 08:00:08 | 2022-04-03 09:09:14 | 2022-04-03 09:44:55 | 0:35:41 | 0:23:00 | 0:12:41 | smithi | master | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} | 2 | |
Failure Reason:
Command failed (workunit test fs/snaps/snaptest-realm-split.sh) on smithi007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cc9ed509562d767c1a5d8fc115593bc1453dfea1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-realm-split.sh'
fail | 6775265 | 2022-04-03 08:00:09 | 2022-04-03 09:11:34 | 2022-04-03 09:38:05 | 0:26:31 | 0:13:19 | 0:13:12 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{yes}} | 3 | |
Failure Reason:
Command failed on smithi043 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 4a8078f8-b331-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.43 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6775266 | 2022-04-03 08:00:10 | 2022-04-03 09:12:35 | 2022-04-03 09:35:47 | 0:23:12 | 0:13:53 | 0:09:19 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{yes}} | 3 | |
Failure Reason:
Command failed on smithi052 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 202bc814-b331-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.52 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6775267 | 2022-04-03 08:00:11 | 2022-04-03 09:12:55 | 2022-04-03 09:32:57 | 0:20:02 | 0:13:21 | 0:06:41 | smithi | master | rhel | 8.4 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/acls} | 2 | |
Failure Reason:
Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
fail | 6775268 | 2022-04-03 08:00:12 | 2022-04-03 09:14:06 | 2022-04-03 09:40:14 | 0:26:08 | 0:13:26 | 0:12:42 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{yes}} | 3 | |
Failure Reason:
Command failed on smithi055 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 9e1aa2b8-b331-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.55 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
pass | 6775269 | 2022-04-03 08:00:13 | 2022-04-03 09:15:16 | 2022-04-03 09:40:13 | 0:24:57 | 0:15:23 | 0:09:34 | smithi | master | rhel | 8.4 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} | 2 | |
dead | 6775270 | 2022-04-03 08:00:14 | 2022-04-03 09:15:17 | 2022-04-03 15:54:14 | 6:38:57 | | | smithi | master | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason:
hit max job timeout
fail | 6775271 | 2022-04-03 08:00:14 | 2022-04-03 09:16:07 | 2022-04-03 10:15:35 | 0:59:28 | 0:48:04 | 0:11:24 | smithi | master | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 | |
Failure Reason:
Command failed (workunit test fs/snaps/snaptest-realm-split.sh) on smithi100 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cc9ed509562d767c1a5d8fc115593bc1453dfea1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-realm-split.sh'
fail | 6775272 | 2022-04-03 08:00:15 | 2022-04-03 09:18:08 | 2022-04-03 09:41:39 | 0:23:31 | 0:13:14 | 0:10:17 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{no}} | 3 | |
Failure Reason:
Command failed on smithi059 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid f656bcdc-b331-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.59 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6775273 | 2022-04-03 08:00:16 | 2022-04-03 09:18:48 | 2022-04-03 09:46:46 | 0:27:58 | 0:13:02 | 0:14:56 | smithi | master | rhel | 8.4 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{yes}} | 3 | |
Failure Reason:
Command failed on smithi125 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cc9ed509562d767c1a5d8fc115593bc1453dfea1 -v bootstrap --fsid 8badf85e-b332-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.125 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'