User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2021-10-28 23:09:03 | 2021-10-29 06:16:20 | 2021-10-30 04:13:59 | 21:57:39 | rados | wip-yuri2-testing-2021-10-28-1343 | smithi | 9466ff3 | 263 | 124 | 26 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6465934 | 2021-10-28 23:10:56 | 2021-10-29 06:16:20 | 2021-10-29 06:36:16 | 0:19:56 | 0:09:04 | 0:10:52 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_read} | 1 | |
pass | 6465935 | 2021-10-28 23:10:57 | 2021-10-29 06:16:20 | 2021-10-29 06:46:52 | 0:30:32 | 0:10:35 | 0:19:57 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6465936 | 2021-10-28 23:10:58 | 2021-10-29 06:22:12 | 2021-10-29 06:56:32 | 0:34:20 | 0:22:13 | 0:12:07 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason: Command failed on smithi090 with status 5: 'sudo systemctl stop ceph-25fa05d6-3883-11ec-8c28-001a4aab830c@mon.b'
pass | 6465937 | 2021-10-28 23:10:58 | 2021-10-29 06:22:52 | 2021-10-29 06:53:10 | 0:30:18 | 0:16:29 | 0:13:49 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/dedup-io-snaps} | 2 | |
pass | 6465938 | 2021-10-28 23:10:59 | 2021-10-29 06:25:53 | 2021-10-29 06:49:23 | 0:23:30 | 0:13:41 | 0:09:49 | smithi | master | centos | 8.stream | rados/singleton/{all/mon-config mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465939 | 2021-10-28 23:11:00 | 2021-10-29 06:26:23 | 2021-10-29 07:02:46 | 0:36:23 | 0:19:28 | 0:16:55 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi158 with status 5: 'sudo systemctl stop ceph-2b83a218-3884-11ec-8c28-001a4aab830c@mon.b'
pass | 6465940 | 2021-10-28 23:11:01 | 2021-10-29 06:31:04 | 2021-10-29 06:53:29 | 0:22:25 | 0:09:14 | 0:13:11 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 6465941 | 2021-10-28 23:11:02 | 2021-10-29 06:31:04 | 2021-10-29 06:58:04 | 0:27:00 | 0:15:46 | 0:11:14 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6465942 | 2021-10-28 23:11:03 | 2021-10-29 06:31:25 | 2021-10-29 07:02:17 | 0:30:52 | 0:17:14 | 0:13:38 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/libcephsqlite} | 2 | |
fail | 6465943 | 2021-10-28 23:11:04 | 2021-10-29 06:35:06 | 2021-10-29 07:08:09 | 0:33:03 | 0:23:03 | 0:10:00 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465944 | 2021-10-28 23:11:05 | 2021-10-29 06:38:46 | 2021-10-29 07:00:13 | 0:21:27 | 0:12:27 | 0:09:00 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi204 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 6465945 | 2021-10-28 23:11:06 | 2021-10-29 06:38:47 | 2021-10-29 07:18:19 | 0:39:32 | 0:33:24 | 0:06:08 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects} | 2 | |
dead | 6465946 | 2021-10-28 23:11:07 | 2021-10-29 06:39:07 | 2021-10-29 18:50:41 | 12:11:34 | | | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6465947 | 2021-10-28 23:11:08 | 2021-10-29 06:39:17 | 2021-10-29 07:26:55 | 0:47:38 | 0:36:52 | 0:10:46 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
pass | 6465948 | 2021-10-28 23:11:09 | 2021-10-29 06:43:38 | 2021-10-29 07:54:28 | 1:10:50 | 0:58:03 | 0:12:47 | smithi | master | centos | 8.stream | rados/singleton/{all/osd-backfill mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6465949 | 2021-10-28 23:11:10 | 2021-10-29 06:46:59 | 2021-10-29 07:12:28 | 0:25:29 | 0:14:40 | 0:10:49 | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-zlib supported-random-distro$/{centos_8} tasks/insights} | 2 | |
fail | 6465950 | 2021-10-28 23:11:11 | 2021-10-29 06:46:59 | 2021-10-29 07:21:34 | 0:34:35 | 0:22:00 | 0:12:35 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi099 with status 5: 'sudo systemctl stop ceph-b666eaf0-3886-11ec-8c28-001a4aab830c@mon.b'
pass | 6465951 | 2021-10-28 23:11:12 | 2021-10-29 06:48:20 | 2021-10-29 07:29:56 | 0:41:36 | 0:28:32 | 0:13:04 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 6465952 | 2021-10-28 23:11:13 | 2021-10-29 06:49:30 | 2021-10-29 07:25:43 | 0:36:13 | 0:21:49 | 0:14:24 | smithi | master | centos | 8.2 | rados/cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi197 with status 5: 'sudo systemctl stop ceph-42a72c8c-3887-11ec-8c28-001a4aab830c@mon.b'
pass | 6465953 | 2021-10-28 23:11:14 | 2021-10-29 06:52:31 | 2021-10-29 07:32:38 | 0:40:07 | 0:27:35 | 0:12:32 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 6465954 | 2021-10-28 23:11:15 | 2021-10-29 06:53:11 | 2021-10-29 07:24:57 | 0:31:46 | 0:24:03 | 0:07:43 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465955 | 2021-10-28 23:11:16 | 2021-10-29 06:53:32 | 2021-10-29 07:40:08 | 0:46:36 | 0:37:19 | 0:09:17 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/rados_api_tests} | 2 | |
fail | 6465956 | 2021-10-28 23:11:16 | 2021-10-29 06:56:43 | 2021-10-29 07:25:38 | 0:28:55 | 0:23:11 | 0:05:44 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465957 | 2021-10-28 23:11:17 | 2021-10-29 06:56:43 | 2021-10-29 08:03:40 | 1:06:57 | 0:56:32 | 0:10:25 | smithi | master | centos | 8.stream | rados/singleton/{all/osd-recovery-incomplete mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465958 | 2021-10-28 23:11:18 | 2021-10-29 06:58:13 | 2021-10-29 07:31:50 | 0:33:37 | 0:23:04 | 0:10:33 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi139 with status 5: 'sudo systemctl stop ceph-4f7d4b66-3888-11ec-8c28-001a4aab830c@mon.b'
pass | 6465959 | 2021-10-28 23:11:19 | 2021-10-29 07:02:24 | 2021-10-29 07:31:43 | 0:29:19 | 0:20:16 | 0:09:03 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6465960 | 2021-10-28 23:11:20 | 2021-10-29 07:02:25 | 2021-10-29 09:50:08 | 2:47:43 | 2:38:11 | 0:09:32 | smithi | master | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/scrub} | 1 | |
pass | 6465961 | 2021-10-28 23:11:21 | 2021-10-29 07:02:55 | 2021-10-29 07:39:48 | 0:36:53 | 0:24:56 | 0:11:57 | smithi | master | rhel | 8.4 | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/mon_recovery} | 3 | |
pass | 6465962 | 2021-10-28 23:11:22 | 2021-10-29 07:08:16 | 2021-10-29 07:33:32 | 0:25:16 | 0:12:30 | 0:12:46 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 | |
pass | 6465963 | 2021-10-28 23:11:23 | 2021-10-29 07:08:26 | 2021-10-29 07:28:02 | 0:19:36 | 0:06:54 | 0:12:42 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_cephadm_repos} | 1 | |
fail | 6465964 | 2021-10-28 23:11:24 | 2021-10-29 07:12:37 | 2021-10-29 07:54:53 | 0:42:16 | 0:23:47 | 0:18:29 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465965 | 2021-10-28 23:11:25 | 2021-10-29 07:18:28 | 2021-10-29 07:56:53 | 0:38:25 | 0:24:10 | 0:14:15 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
pass | 6465966 | 2021-10-28 23:11:26 | 2021-10-29 07:21:39 | 2021-10-29 07:59:17 | 0:37:38 | 0:31:25 | 0:06:13 | smithi | master | rhel | 8.4 | rados/singleton/{all/osd-recovery mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6465967 | 2021-10-28 23:11:27 | 2021-10-29 07:21:39 | 2021-10-29 08:12:30 | 0:50:51 | 0:41:11 | 0:09:40 | smithi | master | rhel | 8.4 | rados/objectstore/{backends/objectstore-filestore-memstore supported-random-distro$/{rhel_8}} | 1 | |
dead | 6465968 | 2021-10-28 23:11:28 | 2021-10-29 07:25:00 | 2021-10-29 19:38:08 | 12:13:08 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
pass | 6465969 | 2021-10-28 23:11:29 | 2021-10-29 07:25:40 | 2021-10-29 08:16:52 | 0:51:12 | 0:41:12 | 0:10:00 | smithi | master | centos | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/sync workloads/rados_mon_osdmap_prune} | 2 | |
pass | 6465970 | 2021-10-28 23:11:30 | 2021-10-29 07:25:51 | 2021-10-29 08:03:44 | 0:37:53 | 0:30:27 | 0:07:26 | smithi | master | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} | 2 | |
pass | 6465971 | 2021-10-28 23:11:31 | 2021-10-29 07:26:31 | 2021-10-29 08:20:19 | 0:53:48 | 0:42:38 | 0:11:10 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/mon_recovery validater/valgrind} | 2 | |
pass | 6465972 | 2021-10-28 23:11:32 | 2021-10-29 07:27:02 | 2021-10-29 08:04:27 | 0:37:25 | 0:22:46 | 0:14:39 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
pass | 6465973 | 2021-10-28 23:11:32 | 2021-10-29 07:30:02 | 2021-10-29 07:49:10 | 0:19:08 | 0:09:50 | 0:09:18 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
dead | 6465974 | 2021-10-28 23:11:34 | 2021-10-29 07:30:02 | 2021-10-29 19:43:32 | 12:13:30 | | | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} | 2 |
Failure Reason: hit max job timeout
pass | 6465975 | 2021-10-28 23:11:35 | 2021-10-29 07:31:53 | 2021-10-29 07:59:59 | 0:28:06 | 0:10:52 | 0:17:14 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6465976 | 2021-10-28 23:11:36 | 2021-10-29 07:33:34 | 2021-10-29 08:11:35 | 0:38:01 | 0:23:50 | 0:14:11 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465977 | 2021-10-28 23:11:37 | 2021-10-29 07:39:55 | 2021-10-29 08:50:28 | 1:10:33 | 1:02:29 | 0:08:04 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/radosbench} | 2 | |
fail | 6465978 | 2021-10-28 23:11:38 | 2021-10-29 07:40:15 | 2021-10-29 08:20:42 | 0:40:27 | 0:23:27 | 0:17:00 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi154 with status 5: 'sudo systemctl stop ceph-e359c6c4-388e-11ec-8c28-001a4aab830c@mon.b'
pass | 6465979 | 2021-10-28 23:11:39 | 2021-10-29 07:49:17 | 2021-10-29 08:30:47 | 0:41:30 | 0:24:19 | 0:17:11 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6465980 | 2021-10-28 23:11:40 | 2021-10-29 07:54:58 | 2021-10-29 08:21:52 | 0:26:54 | 0:19:30 | 0:07:24 | smithi | master | rhel | 8.4 | rados/singleton/{all/peer mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6465981 | 2021-10-28 23:11:41 | 2021-10-29 07:56:58 | 2021-10-29 08:18:10 | 0:21:12 | 0:10:57 | 0:10:15 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_nfs} | 1 | |
Failure Reason: Command failed on smithi063 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1e10bdd0-3890-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6465982 | 2021-10-28 23:11:42 | 2021-10-29 07:56:59 | 2021-10-29 08:59:03 | 1:02:04 | 0:53:36 | 0:08:28 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6465983 | 2021-10-28 23:11:43 | 2021-10-29 07:59:19 | 2021-10-29 08:29:36 | 0:30:17 | 0:19:22 | 0:10:55 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465984 | 2021-10-28 23:11:44 | 2021-10-29 08:00:10 | 2021-10-29 08:45:53 | 0:45:43 | 0:34:44 | 0:10:59 | smithi | master | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
fail | 6465985 | 2021-10-28 23:11:45 | 2021-10-29 08:00:10 | 2021-10-29 08:31:17 | 0:31:07 | 0:17:09 | 0:13:58 | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-zstd supported-random-distro$/{centos_8} tasks/module_selftest} | 2 | |
Failure Reason: Test failure: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest)
fail | 6465986 | 2021-10-28 23:11:46 | 2021-10-29 08:03:51 | 2021-10-29 08:37:55 | 0:34:04 | 0:22:23 | 0:11:41 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi162 with status 5: 'sudo systemctl stop ceph-63bbbbcc-3891-11ec-8c28-001a4aab830c@mon.b'
pass | 6465987 | 2021-10-28 23:11:47 | 2021-10-29 08:04:31 | 2021-10-29 08:36:45 | 0:32:14 | 0:11:42 | 0:20:32 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/redirect} | 2 | |
fail | 6465988 | 2021-10-28 23:11:48 | 2021-10-29 08:11:42 | 2021-10-29 08:47:40 | 0:35:58 | 0:23:48 | 0:12:10 | smithi | master | centos | 8.3 | rados/cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi178 with status 5: 'sudo systemctl stop ceph-d80544fc-3892-11ec-8c28-001a4aab830c@mon.b'
pass | 6465989 | 2021-10-28 23:11:49 | 2021-10-29 08:12:13 | 2021-10-29 08:38:15 | 0:26:02 | 0:12:04 | 0:13:58 | smithi | master | centos | 8.stream | rados/singleton/{all/pg-autoscaler-progress-off mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream}} | 2 | |
pass | 6465990 | 2021-10-28 23:11:50 | 2021-10-29 08:16:54 | 2021-10-29 08:36:50 | 0:19:56 | 0:09:49 | 0:10:07 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} | 1 | |
fail | 6465991 | 2021-10-28 23:11:51 | 2021-10-29 08:16:54 | 2021-10-29 08:51:54 | 0:35:00 | 0:21:29 | 0:13:31 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465992 | 2021-10-28 23:11:52 | 2021-10-29 08:18:55 | 2021-10-29 08:55:29 | 0:36:34 | 0:23:12 | 0:13:22 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason: Command failed on smithi072 with status 5: 'sudo systemctl stop ceph-b1a1965c-3893-11ec-8c28-001a4aab830c@mon.b'
pass | 6465993 | 2021-10-28 23:11:53 | 2021-10-29 08:19:55 | 2021-10-29 08:55:21 | 0:35:26 | 0:23:48 | 0:11:38 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6465994 | 2021-10-28 23:11:53 | 2021-10-29 08:20:05 | 2021-10-29 08:43:56 | 0:23:51 | 0:14:10 | 0:09:41 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465995 | 2021-10-28 23:11:54 | 2021-10-29 08:20:26 | 2021-10-29 09:44:41 | 1:24:15 | 1:13:23 | 0:10:52 | smithi | master | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465996 | 2021-10-28 23:11:55 | 2021-10-29 08:20:26 | 2021-10-29 08:59:46 | 0:39:20 | 0:29:27 | 0:09:53 | smithi | master | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6465997 | 2021-10-28 23:11:56 | 2021-10-29 08:20:26 | 2021-10-29 08:55:05 | 0:34:39 | 0:22:13 | 0:12:26 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi154 with status 5: 'sudo systemctl stop ceph-12196c7c-3893-11ec-8c28-001a4aab830c@mon.b'
pass | 6465998 | 2021-10-28 23:11:57 | 2021-10-29 08:20:47 | 2021-10-29 08:49:11 | 0:28:24 | 0:16:54 | 0:11:30 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/redirect_promote_tests} | 2 | |
pass | 6465999 | 2021-10-28 23:11:58 | 2021-10-29 08:20:47 | 2021-10-29 08:44:31 | 0:23:44 | 0:12:12 | 0:11:32 | smithi | master | centos | 8.stream | rados/singleton/{all/pg-autoscaler mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream}} | 2 | |
pass | 6466000 | 2021-10-28 23:11:59 | 2021-10-29 08:21:27 | 2021-10-29 08:48:47 | 0:27:20 | 0:16:44 | 0:10:36 | smithi | master | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} tasks/rados_cls_all} | 2 | |
fail | 6466001 | 2021-10-28 23:12:00 | 2021-10-29 08:21:28 | 2021-10-29 08:41:09 | 0:19:41 | 0:10:41 | 0:09:00 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi099 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 90820092-3893-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6466002 | 2021-10-28 23:12:01 | 2021-10-29 08:21:58 | 2021-10-29 09:00:15 | 0:38:17 | 0:25:07 | 0:13:10 | smithi | master | centos | 8.stream | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream} tasks/mon_recovery} | 3 | |
fail | 6466003 | 2021-10-28 23:12:02 | 2021-10-29 08:22:59 | 2021-10-29 09:00:05 | 0:37:06 | 0:23:45 | 0:13:21 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6466004 | 2021-10-28 23:12:03 | 2021-10-29 08:29:10 | 2021-10-29 09:02:55 | 0:33:45 | 0:23:25 | 0:10:20 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/dashboard} | 2 | |
Failure Reason: Test failure: test_ganesha (unittest.loader._FailedTest)
pass | 6466005 | 2021-10-28 23:12:04 | 2021-10-29 08:29:40 | 2021-10-29 08:50:16 | 0:20:36 | 0:09:42 | 0:10:54 | smithi | master | centos | 8.3 | rados/objectstore/{backends/alloc-hint supported-random-distro$/{centos_8}} | 1 | |
pass | 6466006 | 2021-10-28 23:12:05 | 2021-10-29 08:30:50 | 2021-10-29 09:01:22 | 0:30:32 | 0:23:14 | 0:07:18 | smithi | master | rhel | 8.4 | rados/rest/{mgr-restful supported-random-distro$/{rhel_8}} | 1 | |
fail | 6466007 | 2021-10-28 23:12:06 | 2021-10-29 08:30:51 | 2021-10-29 09:07:17 | 0:36:26 | 0:25:54 | 0:10:32 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.7.0} | 1 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
pass | 6466008 | 2021-10-28 23:12:07 | 2021-10-29 08:30:51 | 2021-10-29 08:58:24 | 0:27:33 | 0:18:05 | 0:09:28 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6466009 | 2021-10-28 23:12:08 | 2021-10-29 08:31:21 | 2021-10-29 08:56:36 | 0:25:15 | 0:15:09 | 0:10:06 | smithi | master | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/c2c} | 1 | |
fail | 6466010 | 2021-10-28 23:12:09 | 2021-10-29 08:31:22 | 2021-10-29 12:18:49 | 3:47:27 | 3:30:52 | 0:16:35 | smithi | master | centos | 8.3 | rados/upgrade/parallel/{0-distro$/{centos_8.3_container_tools_3.0} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi080 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'
pass | 6466011 | 2021-10-28 23:12:10 | 2021-10-29 08:36:53 | 2021-10-29 09:12:50 | 0:35:57 | 0:25:58 | 0:09:59 | smithi | master | centos | 8.3 | rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} | 1 | |
dead | 6466012 | 2021-10-28 23:12:11 | 2021-10-29 08:36:53 | 2021-10-29 20:50:11 | 12:13:18 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason:
hit max job timeout
pass | 6466013 | 2021-10-28 23:12:12 | 2021-10-29 08:38:04 | 2021-10-29 09:11:36 | 0:33:32 | 0:26:17 | 0:07:15 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/redirect_set_object} | 2 | |
dead | 6466014 | 2021-10-28 23:12:13 | 2021-10-29 08:38:24 | 2021-10-29 20:52:36 | 12:14:12 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason:
hit max job timeout
pass | 6466015 | 2021-10-28 23:12:14 | 2021-10-29 08:41:15 | 2021-10-29 09:20:17 | 0:39:02 | 0:25:14 | 0:13:48 | smithi | master | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 | |
pass | 6466016 | 2021-10-28 23:12:15 | 2021-10-29 08:44:35 | 2021-10-29 09:21:35 | 0:37:00 | 0:24:56 | 0:12:04 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/rados_api_tests validater/lockdep} | 2 | |
pass | 6466017 | 2021-10-28 23:12:16 | 2021-10-29 08:45:56 | 2021-10-29 09:07:52 | 0:21:56 | 0:10:52 | 0:11:04 | smithi | master | centos | 8.3 | rados/singleton/{all/pg-removal-interruption mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6466018 | 2021-10-28 23:12:17 | 2021-10-29 08:45:56 | 2021-10-29 09:19:13 | 0:33:17 | 0:25:21 | 0:07:56 | smithi | master | rhel | 8.4 | rados/cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.4_container_tools_rhel8} 2-node-mgr orchestrator_cli} | 2 | |
pass | 6466019 | 2021-10-28 23:12:18 | 2021-10-29 08:47:47 | 2021-10-29 09:18:44 | 0:30:57 | 0:22:39 | 0:08:18 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6466020 | 2021-10-28 23:12:19 | 2021-10-29 08:49:17 | 2021-10-29 09:20:16 | 0:30:59 | 0:19:46 | 0:11:13 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6466021 | 2021-10-28 23:12:20 | 2021-10-29 08:50:38 | 2021-10-29 09:08:33 | 0:17:55 | 0:08:02 | 0:09:53 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6466022 | 2021-10-28 23:12:21 | 2021-10-29 08:50:38 | 2021-10-29 09:18:06 | 0:27:28 | 0:09:53 | 0:17:35 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
fail | 6466023 | 2021-10-28 23:12:22 | 2021-10-29 08:54:09 | 2021-10-29 09:23:52 | 0:29:43 | 0:19:14 | 0:10:29 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi122 with status 5: 'sudo systemctl stop ceph-f87dcdbc-3897-11ec-8c28-001a4aab830c@mon.b'
pass | 6466024 | 2021-10-28 23:12:23 | 2021-10-29 08:54:29 | 2021-10-29 09:15:22 | 0:20:53 | 0:10:28 | 0:10:25 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} | 1 | |
pass | 6466025 | 2021-10-28 23:12:24 | 2021-10-29 08:55:10 | 2021-10-29 09:22:04 | 0:26:54 | 0:13:03 | 0:13:51 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/set-chunks-read} | 2 | |
pass | 6466026 | 2021-10-28 23:12:25 | 2021-10-29 08:55:30 | 2021-10-29 09:26:27 | 0:30:57 | 0:22:12 | 0:08:45 | smithi | master | centos | 8.3 | rados/singleton/{all/radostool mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6466027 | 2021-10-28 23:12:26 | 2021-10-29 08:55:30 | 2021-10-29 09:14:56 | 0:19:26 | 0:13:00 | 0:06:26 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_rhel8} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason:
Command failed on smithi064 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2ab423f8-3898-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6466028 | 2021-10-28 23:12:27 | 2021-10-29 08:55:31 | 2021-10-29 09:28:43 | 0:33:12 | 0:21:48 | 0:11:24 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
Command failed on smithi102 with status 5: 'sudo systemctl stop ceph-7697d4ae-3898-11ec-8c28-001a4aab830c@mon.b'
pass | 6466029 | 2021-10-28 23:12:28 | 2021-10-29 08:55:31 | 2021-10-29 09:28:50 | 0:33:19 | 0:19:30 | 0:13:49 | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} tasks/progress} | 2 | |
pass | 6466030 | 2021-10-28 23:12:29 | 2021-10-29 08:57:49 | 2021-10-29 09:37:29 | 0:39:40 | 0:28:41 | 0:10:59 | smithi | master | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} | 2 | |
dead | 6466031 | 2021-10-28 23:12:30 | 2021-10-29 08:58:09 | 2021-10-29 21:12:04 | 12:13:55 | smithi | master | centos | 8.3 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 | |||
Failure Reason:
hit max job timeout
pass | 6466032 | 2021-10-28 23:12:31 | 2021-10-29 08:59:10 | 2021-10-29 09:37:44 | 0:38:34 | 0:25:20 | 0:13:14 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
pass | 6466033 | 2021-10-28 23:12:32 | 2021-10-29 09:00:10 | 2021-10-29 09:23:07 | 0:22:57 | 0:13:57 | 0:09:00 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6466034 | 2021-10-28 23:12:33 | 2021-10-29 09:00:11 | 2021-10-29 09:37:47 | 0:37:36 | 0:29:58 | 0:07:38 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/small-objects-balanced} | 2 | |
fail | 6466035 | 2021-10-28 23:12:34 | 2021-10-29 09:00:21 | 2021-10-29 09:40:04 | 0:39:43 | 0:33:10 | 0:06:33 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi133 with status 5: 'sudo systemctl stop ceph-4b46800a-389a-11ec-8c28-001a4aab830c@mon.b'
pass | 6466036 | 2021-10-28 23:12:35 | 2021-10-29 09:00:31 | 2021-10-29 09:25:43 | 0:25:12 | 0:13:39 | 0:11:33 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} | 2 | |
pass | 6466037 | 2021-10-28 23:12:35 | 2021-10-29 09:01:32 | 2021-10-29 09:36:18 | 0:34:46 | 0:23:09 | 0:11:37 | smithi | master | centos | 8.3 | rados/singleton/{all/random-eio mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 2 | |
pass | 6466038 | 2021-10-28 23:12:36 | 2021-10-29 09:03:02 | 2021-10-29 09:26:40 | 0:23:38 | 0:10:05 | 0:13:33 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_adoption} | 1 | |
pass | 6466039 | 2021-10-28 23:12:37 | 2021-10-29 09:07:23 | 2021-10-29 09:44:11 | 0:36:48 | 0:25:19 | 0:11:29 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 6466040 | 2021-10-28 23:12:38 | 2021-10-29 09:08:44 | 2021-10-29 09:43:09 | 0:34:25 | 0:23:51 | 0:10:34 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
fail | 6466041 | 2021-10-28 23:12:39 | 2021-10-29 09:11:44 | 2021-10-29 09:44:31 | 0:32:47 | 0:19:50 | 0:12:57 | smithi | master | centos | 8.3 | rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6466042 | 2021-10-28 23:12:40 | 2021-10-29 09:15:05 | 2021-10-29 09:34:48 | 0:19:43 | 0:10:04 | 0:09:39 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6466043 | 2021-10-28 23:12:41 | 2021-10-29 09:15:25 | 2021-10-29 09:49:32 | 0:34:07 | 0:21:41 | 0:12:26 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/small-objects-localized} | 2 | |
fail | 6466044 | 2021-10-28 23:12:42 | 2021-10-29 09:18:16 | 2021-10-29 09:48:13 | 0:29:57 | 0:19:25 | 0:10:32 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi156 with status 5: 'sudo systemctl stop ceph-5b31c1ae-389b-11ec-8c28-001a4aab830c@mon.b'
pass | 6466045 | 2021-10-28 23:12:43 | 2021-10-29 09:18:46 | 2021-10-29 09:42:02 | 0:23:16 | 0:14:45 | 0:08:31 | smithi | master | centos | 8.stream | rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6466046 | 2021-10-28 23:12:44 | 2021-10-29 09:18:47 | 2021-10-29 09:36:17 | 0:17:30 | 0:06:36 | 0:10:54 | smithi | master | ubuntu | 20.04 | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} | 2 | |
pass | 6466047 | 2021-10-28 23:12:45 | 2021-10-29 09:18:47 | 2021-10-29 09:44:43 | 0:25:56 | 0:16:19 | 0:09:37 | smithi | master | centos | 8.3 | rados/singleton/{all/rebuild-mondb mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6466048 | 2021-10-28 23:12:46 | 2021-10-29 09:19:17 | 2021-10-29 09:40:28 | 0:21:11 | 0:12:40 | 0:08:31 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi105 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 6466049 | 2021-10-28 23:12:47 | 2021-10-29 09:19:18 | 2021-10-29 09:40:24 | 0:21:06 | 0:10:21 | 0:10:45 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} | 1 | |
fail | 6466050 | 2021-10-28 23:12:48 | 2021-10-29 09:20:18 | 2021-10-29 10:01:27 | 0:41:09 | 0:27:08 | 0:14:01 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
fail | 6466051 | 2021-10-28 23:12:49 | 2021-10-29 09:20:18 | 2021-10-29 09:55:18 | 0:35:00 | 0:21:47 | 0:13:13 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason:
Command failed on smithi157 with status 5: 'sudo systemctl stop ceph-341d4952-389c-11ec-8c28-001a4aab830c@mon.b'
pass | 6466052 | 2021-10-28 23:12:50 | 2021-10-29 09:21:39 | 2021-10-29 10:00:56 | 0:39:17 | 0:27:29 | 0:11:48 | smithi | master | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/many workloads/rados_mon_workunits} | 2 | |
pass | 6466053 | 2021-10-28 23:12:51 | 2021-10-29 09:22:09 | 2021-10-29 09:41:23 | 0:19:14 | 0:09:35 | 0:09:39 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6466054 | 2021-10-28 23:12:52 | 2021-10-29 09:22:09 | 2021-10-29 09:54:06 | 0:31:57 | 0:21:59 | 0:09:58 | smithi | master | centos | 8.3 | rados/standalone/{supported-random-distro$/{centos_8} workloads/crush} | 1 | |
pass | 6466055 | 2021-10-28 23:12:53 | 2021-10-29 09:23:10 | 2021-10-29 10:01:13 | 0:38:03 | 0:31:27 | 0:06:36 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/small-objects} | 2 | |
fail | 6466056 | 2021-10-28 23:12:54 | 2021-10-29 09:24:00 | 2021-10-29 09:59:50 | 0:35:50 | 0:22:13 | 0:13:37 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason:
Command failed on smithi111 with status 5: 'sudo systemctl stop ceph-c9b9c6b6-389c-11ec-8c28-001a4aab830c@mon.b'
pass | 6466057 | 2021-10-28 23:12:55 | 2021-10-29 09:26:31 | 2021-10-29 10:54:07 | 1:27:36 | 1:13:50 | 0:13:46 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 6466058 | 2021-10-28 23:12:56 | 2021-10-29 09:28:52 | 2021-10-29 10:00:17 | 0:31:25 | 0:12:34 | 0:18:51 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6466059 | 2021-10-28 23:12:57 | 2021-10-29 09:34:53 | 2021-10-29 10:59:24 | 1:24:31 | 1:15:49 | 0:08:42 | smithi | master | rhel | 8.4 | rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6466060 | 2021-10-28 23:12:58 | 2021-10-29 09:36:23 | 2021-10-29 10:17:48 | 0:41:25 | 0:33:51 | 0:07:34 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi168 with status 5: 'sudo systemctl stop ceph-508b5388-389f-11ec-8c28-001a4aab830c@mon.b'
fail | 6466061 | 2021-10-28 23:12:59 | 2021-10-29 09:36:24 | 2021-10-29 10:07:05 | 0:30:41 | 0:19:31 | 0:11:10 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6466062 | 2021-10-28 23:13:00 | 2021-10-29 09:37:34 | 2021-10-29 10:19:12 | 0:41:38 | 0:30:48 | 0:10:50 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
fail | 6466063 | 2021-10-28 23:13:01 | 2021-10-29 09:37:55 | 2021-10-29 10:09:25 | 0:31:30 | 0:23:38 | 0:07:52 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6466064 | 2021-10-28 23:13:02 | 2021-10-29 09:37:55 | 2021-10-29 10:25:40 | 0:47:45 | 0:33:30 | 0:14:15 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
pass | 6466065 | 2021-10-28 23:13:03 | 2021-10-29 09:40:06 | 2021-10-29 09:58:21 | 0:18:15 | 0:07:16 | 0:10:59 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/crushdiff mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6466066 | 2021-10-28 23:13:04 | 2021-10-29 09:40:26 | 2021-10-29 10:10:36 | 0:30:10 | 0:22:20 | 0:07:50 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi105 with status 5: 'sudo systemctl stop ceph-7d019ff4-389e-11ec-8c28-001a4aab830c@mon.b'
pass | 6466067 | 2021-10-28 23:13:05 | 2021-10-29 09:41:26 | 2021-10-29 10:08:21 | 0:26:55 | 0:15:36 | 0:11:19 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} | 2 | |
pass | 6466068 | 2021-10-28 23:13:06 | 2021-10-29 09:43:17 | 2021-10-29 10:08:20 | 0:25:03 | 0:14:27 | 0:10:36 | smithi | master | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-low-osd-mem-target supported-random-distro$/{centos_8.stream} tasks/prometheus} | 2 | |
pass | 6466069 | 2021-10-28 23:13:07 | 2021-10-29 09:43:27 | 2021-10-29 10:03:17 | 0:19:50 | 0:09:24 | 0:10:26 | smithi | master | centos | 8.3 | rados/singleton/{all/resolve_stuck_peering mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 2 | |
pass | 6466070 | 2021-10-28 23:13:08 | 2021-10-29 09:43:48 | 2021-10-29 09:58:41 | 0:14:53 | 0:06:53 | 0:08:00 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 6466071 | 2021-10-28 23:13:09 | 2021-10-29 09:43:48 | 2021-10-29 10:27:13 | 0:43:25 | 0:33:39 | 0:09:46 | smithi | master | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
dead | 6466072 | 2021-10-28 23:13:10 | 2021-10-29 09:44:08 | 2021-10-29 21:56:37 | 12:12:29 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason:
hit max job timeout
dead | 6466073 | 2021-10-28 23:13:11 | 2021-10-29 09:44:19 | 2021-10-29 21:56:03 | 12:11:44 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |||
Failure Reason:
hit max job timeout
pass | 6466074 | 2021-10-28 23:13:12 | 2021-10-29 09:44:39 | 2021-10-29 10:23:30 | 0:38:51 | 0:26:42 | 0:12:09 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
pass | 6466075 | 2021-10-28 23:13:13 | 2021-10-29 09:44:49 | 2021-10-29 10:09:30 | 0:24:41 | 0:18:51 | 0:05:50 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6466076 | 2021-10-28 23:13:14 | 2021-10-29 09:44:50 | 2021-10-29 10:17:34 | 0:32:44 | 0:19:36 | 0:13:08 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6466077 | 2021-10-28 23:13:15 | 2021-10-29 09:48:20 | 2021-10-29 10:08:55 | 0:20:35 | 0:10:51 | 0:09:44 | smithi | master | centos | 8.3 | rados/singleton/{all/test-crash mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6466078 | 2021-10-28 23:13:15 | 2021-10-29 09:49:41 | 2021-10-29 10:09:06 | 0:19:25 | 0:10:39 | 0:08:46 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} | 1 | |
pass | 6466079 | 2021-10-28 23:13:16 | 2021-10-29 09:49:41 | 2021-10-29 10:35:24 | 0:45:43 | 0:30:25 | 0:15:18 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 6466080 | 2021-10-28 23:13:17 | 2021-10-29 09:54:12 | 2021-10-29 10:24:55 | 0:30:43 | 0:23:13 | 0:07:30 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6466081 | 2021-10-28 23:13:18 | 2021-10-29 09:55:23 | 2021-10-29 10:14:03 | 0:18:40 | 0:06:19 | 0:12:21 | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/filejournal supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6466082 | 2021-10-28 23:13:19 | 2021-10-29 09:58:23 | 2021-10-29 10:31:07 | 0:32:44 | 0:24:04 | 0:08:40 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi176 with status 5: 'sudo systemctl stop ceph-2c62b83c-38a1-11ec-8c28-001a4aab830c@mon.b'
pass | 6466083 | 2021-10-28 23:13:20 | 2021-10-29 09:59:54 | 2021-10-29 10:41:20 | 0:41:26 | 0:29:34 | 0:11:52 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} | 2 | |
fail | 6466084 | 2021-10-28 23:13:21 | 2021-10-29 09:59:54 | 2021-10-29 10:19:27 | 0:19:33 | 0:10:32 | 0:09:01 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_nfs} | 1 | |
Failure Reason:
Command failed on smithi159 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 51ed3910-38a1-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6466085 | 2021-10-28 23:13:22 | 2021-10-29 10:00:24 | 2021-10-29 10:21:51 | 0:21:27 | 0:09:39 | 0:11:48 | smithi | master | centos | 8.3 | rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 | |
pass | 6466086 | 2021-10-28 23:13:23 | 2021-10-29 10:00:25 | 2021-10-29 10:27:36 | 0:27:11 | 0:21:04 | 0:06:07 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6466087 | 2021-10-28 23:13:24 | 2021-10-29 10:00:25 | 2021-10-29 10:35:48 | 0:35:23 | 0:22:13 | 0:13:10 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
pass | 6466088 | 2021-10-28 23:13:25 | 2021-10-29 10:01:05 | 2021-10-29 10:30:27 | 0:29:22 | 0:19:37 | 0:09:45 | smithi | master | centos | 8.stream | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6466089 | 2021-10-28 23:13:26 | 2021-10-29 10:01:16 | 2021-10-29 10:33:06 | 0:31:50 | 0:25:23 | 0:06:27 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
dead | 6466090 | 2021-10-28 23:13:27 | 2021-10-29 10:01:36 | 2021-10-29 22:14:00 | 12:12:24 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason: hit max job timeout
pass | 6466091 | 2021-10-28 23:13:28 | 2021-10-29 10:02:06 | 2021-10-29 10:34:34 | 0:32:28 | 0:20:19 | 0:12:09 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} | 2 | |
pass | 6466092 | 2021-10-28 23:13:29 | 2021-10-29 10:03:27 | 2021-10-29 10:54:27 | 0:51:00 | 0:37:52 | 0:13:08 | smithi | master | centos | 8.3 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/one workloads/snaps-few-objects} | 2 | |
fail | 6466093 | 2021-10-28 23:13:30 | 2021-10-29 10:07:08 | 2021-10-29 10:41:39 | 0:34:31 | 0:23:29 | 0:11:02 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi197 with status 5: 'sudo systemctl stop ceph-c87eccaa-38a2-11ec-8c28-001a4aab830c@mon.b'
pass | 6466094 | 2021-10-28 23:13:31 | 2021-10-29 10:08:28 | 2021-10-29 10:34:38 | 0:26:10 | 0:14:38 | 0:11:32 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
pass | 6466095 | 2021-10-28 23:13:32 | 2021-10-29 10:08:29 | 2021-10-29 10:36:41 | 0:28:12 | 0:13:55 | 0:14:17 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6466096 | 2021-10-28 23:13:33 | 2021-10-29 10:09:29 | 2021-10-29 10:31:06 | 0:21:37 | 0:10:56 | 0:10:41 | smithi | master | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream} tasks/rados_striper} | 2 | |
fail | 6466097 | 2021-10-28 23:13:34 | 2021-10-29 10:09:39 | 2021-10-29 10:50:12 | 0:40:33 | 0:27:36 | 0:12:57 | smithi | master | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi105 with status 5: 'sudo systemctl stop ceph-f1c0f49e-38a2-11ec-8c28-001a4aab830c@mon.b'
pass | 6466098 | 2021-10-28 23:13:35 | 2021-10-29 10:10:40 | 2021-10-29 10:54:55 | 0:44:15 | 0:31:06 | 0:13:09 | smithi | master | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6466099 | 2021-10-28 23:13:36 | 2021-10-29 10:13:11 | 2021-10-29 10:34:13 | 0:21:02 | 0:11:39 | 0:09:23 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6466100 | 2021-10-28 23:13:37 | 2021-10-29 10:13:11 | 2021-10-29 11:53:40 | 1:40:29 | 1:29:43 | 0:10:46 | smithi | master | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/erasure-code} | 1 | |
pass | 6466101 | 2021-10-28 23:13:38 | 2021-10-29 10:14:11 | 2021-10-29 11:51:05 | 1:36:54 | 1:25:57 | 0:10:57 | smithi | master | rhel | 8.4 | rados/singleton/{all/thrash-backfill-full mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 2 | |
fail | 6466102 | 2021-10-28 23:13:39 | 2021-10-29 10:17:42 | 2021-10-29 10:54:10 | 0:36:28 | 0:23:50 | 0:12:38 | smithi | master | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466103 | 2021-10-28 23:13:40 | 2021-10-29 10:17:52 | 2021-10-29 10:44:39 | 0:26:47 | 0:12:31 | 0:14:16 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
fail | 6466104 | 2021-10-28 23:13:41 | 2021-10-29 10:19:13 | 2021-10-29 10:53:12 | 0:33:59 | 0:23:08 | 0:10:51 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi052 with status 5: 'sudo systemctl stop ceph-8e7688d0-38a3-11ec-8c28-001a4aab830c@mon.b'
pass | 6466105 | 2021-10-28 23:13:41 | 2021-10-29 10:19:53 | 2021-10-29 10:48:57 | 0:29:04 | 0:20:11 | 0:08:53 | smithi | master | rhel | 8.4 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-stupid supported-random-distro$/{rhel_8} tasks/workunits} | 2 | |
pass | 6466106 | 2021-10-28 23:13:42 | 2021-10-29 10:21:54 | 2021-10-29 11:04:58 | 0:43:04 | 0:32:49 | 0:10:15 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
pass | 6466107 | 2021-10-28 23:13:43 | 2021-10-29 10:21:54 | 2021-10-29 10:42:06 | 0:20:12 | 0:10:19 | 0:09:53 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} | 1 | |
fail | 6466108 | 2021-10-28 23:13:44 | 2021-10-29 10:21:54 | 2021-10-29 10:55:00 | 0:33:06 | 0:23:39 | 0:09:27 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6466109 | 2021-10-28 23:13:45 | 2021-10-29 10:23:35 | 2021-10-29 11:36:38 | 1:13:03 | 1:00:19 | 0:12:44 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi157 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e8952cc6-38a4-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6466110 | 2021-10-28 23:13:46 | 2021-10-29 10:25:05 | 2021-10-29 11:11:29 | 0:46:24 | 0:35:21 | 0:11:03 | smithi | master | centos | 8.3 | rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 2 | |
fail | 6466111 | 2021-10-28 23:13:47 | 2021-10-29 10:25:46 | 2021-10-29 10:49:29 | 0:23:43 | 0:11:40 | 0:12:03 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} | 2 | |
Failure Reason: Command failed on smithi026 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2e4b03da-38a5-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6466112 | 2021-10-28 23:13:48 | 2021-10-29 10:26:26 | 2021-10-29 11:07:06 | 0:40:40 | 0:26:56 | 0:13:44 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/flannel rook/master} | 3 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
pass | 6466113 | 2021-10-28 23:13:49 | 2021-10-29 10:27:17 | 2021-10-29 10:45:14 | 0:17:57 | 0:07:19 | 0:10:38 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6466114 | 2021-10-28 23:13:50 | 2021-10-29 10:27:37 | 2021-10-29 11:48:35 | 1:20:58 | 1:09:43 | 0:11:15 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-radosbench} | 2 | |
fail | 6466115 | 2021-10-28 23:13:51 | 2021-10-29 10:31:08 | 2021-10-29 11:08:20 | 0:37:12 | 0:24:07 | 0:13:05 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: Command failed on smithi122 with status 5: 'sudo systemctl stop ceph-3b8b3870-38a6-11ec-8c28-001a4aab830c@mon.b'
fail | 6466116 | 2021-10-28 23:13:52 | 2021-10-29 10:31:08 | 2021-10-29 11:14:17 | 0:43:09 | 0:27:06 | 0:16:03 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 6466117 | 2021-10-28 23:13:53 | 2021-10-29 10:33:09 | 2021-10-29 22:46:08 | 12:12:59 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: hit max job timeout
pass | 6466118 | 2021-10-28 23:13:54 | 2021-10-29 10:34:39 | 2021-10-29 11:08:08 | 0:33:29 | 0:22:11 | 0:11:18 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/cache-agent-big} | 2 | |
fail | 6466119 | 2021-10-28 23:13:55 | 2021-10-29 10:34:40 | 2021-10-29 11:04:42 | 0:30:02 | 0:19:42 | 0:10:20 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466120 | 2021-10-28 23:13:56 | 2021-10-29 10:35:30 | 2021-10-29 11:17:33 | 0:42:03 | 0:30:48 | 0:11:15 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6466121 | 2021-10-28 23:13:57 | 2021-10-29 10:35:50 | 2021-10-29 13:12:01 | 2:36:11 | 2:25:30 | 0:10:41 | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6466122 | 2021-10-28 23:13:58 | 2021-10-29 10:35:51 | 2021-10-29 11:10:03 | 0:34:12 | 0:21:17 | 0:12:55 | smithi | master | centos | 8.3 | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} | 2 | |
fail | 6466123 | 2021-10-28 23:13:59 | 2021-10-29 10:36:51 | 2021-10-29 11:06:03 | 0:29:12 | 0:19:40 | 0:09:32 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi036 with status 5: 'sudo systemctl stop ceph-287ec3fa-38a6-11ec-8c28-001a4aab830c@mon.b'
pass | 6466124 | 2021-10-28 23:14:00 | 2021-10-29 10:36:52 | 2021-10-29 11:04:36 | 0:27:44 | 0:20:34 | 0:07:10 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6466125 | 2021-10-28 23:14:01 | 2021-10-29 10:37:22 | 2021-10-29 10:58:45 | 0:21:23 | 0:08:28 | 0:12:55 | smithi | master | centos | 8.2 | rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.2_container_tools_3.0} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason: Command failed on smithi111 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ada83106-38a6-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6466126 | 2021-10-28 23:14:02 | 2021-10-29 10:41:23 | 2021-10-29 11:09:10 | 0:27:47 | 0:16:39 | 0:11:08 | smithi | master | centos | 8.stream | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8.stream} tasks/mon_recovery} | 3 | |
fail | 6466127 | 2021-10-28 23:14:03 | 2021-10-29 10:41:43 | 2021-10-29 12:07:59 | 1:26:16 | 1:12:22 | 0:13:54 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi188 with status 5: 'sudo systemctl stop ceph-9bed891a-38a7-11ec-8c28-001a4aab830c@mon.b'
pass | 6466128 | 2021-10-28 23:14:04 | 2021-10-29 10:44:44 | 2021-10-29 11:07:12 | 0:22:28 | 0:11:15 | 0:11:13 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/cache-agent-small} | 2 | |
pass | 6466129 | 2021-10-28 23:14:05 | 2021-10-29 10:44:44 | 2021-10-29 11:44:30 | 0:59:46 | 0:44:52 | 0:14:54 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_big} | 2 | |
dead | 6466130 | 2021-10-28 23:14:06 | 2021-10-29 10:49:05 | 2021-10-29 23:01:01 | 12:11:56 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 | |||
Failure Reason: hit max job timeout
pass | 6466131 | 2021-10-28 23:14:07 | 2021-10-29 10:49:35 | 2021-10-29 11:23:31 | 0:33:56 | 0:22:50 | 0:11:06 | smithi | master | centos | 8.stream | rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream}} | 2 | |
fail | 6466132 | 2021-10-28 23:14:08 | 2021-10-29 10:50:16 | 2021-10-29 11:26:05 | 0:35:49 | 0:22:54 | 0:12:55 | smithi | master | centos | 8.2 | rados/cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi052 with status 5: 'sudo systemctl stop ceph-cc7a6412-38a8-11ec-8c28-001a4aab830c@mon.b'
pass | 6466133 | 2021-10-28 23:14:09 | 2021-10-29 10:53:17 | 2021-10-29 11:26:28 | 0:33:11 | 0:24:38 | 0:08:33 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6466134 | 2021-10-28 23:14:10 | 2021-10-29 10:53:17 | 2021-10-29 12:11:16 | 1:17:59 | 0:59:25 | 0:18:34 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
pass | 6466135 | 2021-10-28 23:14:11 | 2021-10-29 10:54:17 | 2021-10-29 11:11:58 | 0:17:41 | 0:08:22 | 0:09:19 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} | 1 | |
pass | 6466136 | 2021-10-28 23:14:12 | 2021-10-29 10:54:18 | 2021-10-29 11:27:53 | 0:33:35 | 0:26:46 | 0:06:49 | smithi | master | rhel | 8.4 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/sync-many workloads/pool-create-delete} | 2 | |
pass | 6466137 | 2021-10-28 23:14:13 | 2021-10-29 10:54:28 | 2021-10-29 11:13:54 | 0:19:26 | 0:09:58 | 0:09:28 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_adoption} | 1 | |
pass | 6466138 | 2021-10-28 23:14:14 | 2021-10-29 10:54:28 | 2021-10-29 11:28:07 | 0:33:39 | 0:22:20 | 0:11:19 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6466139 | 2021-10-28 23:14:15 | 2021-10-29 10:58:49 | 2021-10-29 12:09:54 | 1:11:05 | 0:42:42 | 0:28:23 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps-readproxy} | 2 | |
fail | 6466140 | 2021-10-28 23:14:16 | 2021-10-29 11:04:40 | 2021-10-29 11:33:54 | 0:29:14 | 0:19:46 | 0:09:28 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6466141 | 2021-10-28 23:14:16 | 2021-10-29 11:04:51 | 2021-10-29 11:52:13 | 0:47:22 | 0:26:11 | 0:21:11 | smithi | master | centos | 8.3 | rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466142 | 2021-10-28 23:14:17 | 2021-10-29 11:05:01 | 2021-10-29 11:32:59 | 0:27:58 | 0:20:05 | 0:07:53 | smithi | master | rhel | 8.4 | rados/singleton/{all/watch-notify-same-primary mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6466143 | 2021-10-28 23:14:18 | 2021-10-29 11:06:11 | 2021-10-29 12:00:30 | 0:54:19 | 0:46:17 | 0:08:02 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
dead | 6466144 | 2021-10-28 23:14:19 | 2021-10-29 11:07:12 | 2021-10-29 23:15:58 | 12:08:46 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |||
Failure Reason: hit max job timeout
pass | 6466145 | 2021-10-28 23:14:20 | 2021-10-29 11:07:22 | 2021-10-29 11:30:37 | 0:23:15 | 0:09:30 | 0:13:45 | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/filestore-xfs supported-random-distro$/{ubuntu_latest} tasks/crash} | 2 | |
pass | 6466146 | 2021-10-28 23:14:21 | 2021-10-29 11:08:13 | 2021-10-29 11:33:12 | 0:24:59 | 0:17:09 | 0:07:50 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6466147 | 2021-10-28 23:14:22 | 2021-10-29 11:08:13 | 2021-10-29 11:35:09 | 0:26:56 | 0:17:17 | 0:09:39 | smithi | master | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/mgr} | 1 | |
pass | 6466148 | 2021-10-28 23:14:23 | 2021-10-29 11:08:23 | 2021-10-29 12:03:29 | 0:55:06 | 0:38:02 | 0:17:04 | smithi | master | centos | 8.3 | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
pass | 6466149 | 2021-10-28 23:14:24 | 2021-10-29 11:08:24 | 2021-10-29 11:59:08 | 0:50:44 | 0:38:00 | 0:12:44 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
fail | 6466150 | 2021-10-28 23:14:25 | 2021-10-29 11:09:14 | 2021-10-29 11:30:40 | 0:21:26 | 0:12:08 | 0:09:18 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi071 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 6466151 | 2021-10-28 23:14:26 | 2021-10-29 11:09:14 | 2021-10-29 12:15:03 | 1:05:49 | 0:38:26 | 0:27:23 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/cache-pool-snaps} | 2 | |
fail | 6466152 | 2021-10-28 23:14:27 | 2021-10-29 12:01:22 | 1637 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | ||||
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466153 | 2021-10-28 23:14:28 | 2021-10-29 11:11:35 | 2021-10-29 11:41:47 | 0:30:12 | 0:18:54 | 0:11:18 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-balanced} | 2 | |
pass | 6466154 | 2021-10-28 23:14:29 | 2021-10-29 11:12:06 | 2021-10-29 11:33:04 | 0:20:58 | 0:10:16 | 0:10:42 | smithi | master | centos | 8.stream | rados/singleton/{all/admin-socket mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream}} | 1 | |
dead | 6466155 | 2021-10-28 23:14:30 | 2021-10-29 11:13:56 | 2021-10-29 23:26:24 | 12:12:28 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason: hit max job timeout
pass | 6466156 | 2021-10-28 23:14:31 | 2021-10-29 11:14:26 | 2021-10-29 11:57:18 | 0:42:52 | 0:32:40 | 0:10:12 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 2 | |
fail | 6466157 | 2021-10-28 23:14:32 | 2021-10-29 11:17:37 | 2021-10-29 11:54:50 | 0:37:13 | 0:21:45 | 0:15:28 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi105 with status 5: 'sudo systemctl stop ceph-0d6006fe-38ad-11ec-8c28-001a4aab830c@mon.b'
pass | 6466158 | 2021-10-28 23:14:33 | 2021-10-29 11:23:38 | 2021-10-29 14:15:07 | 2:51:29 | 2:33:03 | 0:18:26 | smithi | master | centos | 8.3 | rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{centos_8}} | 1 | |
pass | 6466159 | 2021-10-28 23:14:34 | 2021-10-29 11:26:09 | 2021-10-29 11:56:11 | 0:30:02 | 0:18:19 | 0:11:43 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-snaps-balanced} | 2 | |
fail | 6466160 | 2021-10-28 23:14:35 | 2021-10-29 11:26:29 | 2021-10-29 12:25:32 | 0:59:03 | 0:36:00 | 0:23:03 | smithi | master | centos | 8.3 | rados/cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi065 with status 5: 'sudo systemctl stop ceph-2e55a75c-38b1-11ec-8c28-001a4aab830c@mon.b'
pass | 6466161 | 2021-10-28 23:14:36 | 2021-10-29 11:28:00 | 2021-10-29 12:09:49 | 0:41:49 | 0:30:30 | 0:11:19 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6466162 | 2021-10-28 23:14:37 | 2021-10-29 11:28:10 | 2021-10-29 11:59:37 | 0:31:27 | 0:21:35 | 0:09:52 | smithi | master | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream} tasks/rados_workunit_loadgen_mix} | 2 | |
fail | 6466163 | 2021-10-28 23:14:38 | 2021-10-29 11:28:10 | 2021-10-29 11:57:13 | 0:29:03 | 0:22:41 | 0:06:22 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466164 | 2021-10-28 23:14:39 | 2021-10-29 11:28:11 | 2021-10-29 12:02:25 | 0:34:14 | 0:22:08 | 0:12:06 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} | 1 | |
pass | 6466165 | 2021-10-28 23:14:40 | 2021-10-29 11:30:41 | 2021-10-29 12:00:53 | 0:30:12 | 0:20:52 | 0:09:20 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/backfill-toofull mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6466166 | 2021-10-28 23:14:41 | 2021-10-29 11:30:42 | 2021-10-29 11:54:08 | 0:23:26 | 0:13:57 | 0:09:29 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: Command failed on smithi110 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 321fb63c-38ae-11ec-8c28-001a4aab830c -- ceph mon dump -f json'
pass | 6466167 | 2021-10-28 23:14:42 | 2021-10-29 11:33:02 | 2021-10-29 12:02:09 | 0:29:07 | 0:20:43 | 0:08:24 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6466168 | 2021-10-28 23:14:43 | 2021-10-29 11:33:12 | 2021-10-29 12:21:30 | 0:48:18 | 0:22:50 | 0:25:28 | smithi | master | centos | 8.3 | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} | 3 | |
fail | 6466169 | 2021-10-28 23:14:44 | 2021-10-29 11:34:03 | 2021-10-29 12:06:14 | 0:32:11 | 0:23:04 | 0:09:07 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi157 with status 5: 'sudo systemctl stop ceph-b2b4ae06-38ae-11ec-8c28-001a4aab830c@mon.b'
pass | 6466170 | 2021-10-28 23:14:45 | 2021-10-29 11:36:44 | 2021-10-29 12:37:34 | 1:00:50 | 0:35:05 | 0:25:45 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/cache-snaps} | 2 | |
fail | 6466171 | 2021-10-28 23:14:46 | 2021-10-29 11:38:14 | 2021-10-29 12:42:48 | 1:04:34 | 0:37:28 | 0:27:06 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason: Command failed on smithi096 with status 5: 'sudo systemctl stop ceph-782e0dcc-38b3-11ec-8c28-001a4aab830c@mon.b'
pass | 6466172 | 2021-10-28 23:14:47 | 2021-10-29 11:39:45 | 2021-10-29 11:54:47 | 0:15:02 | 0:06:55 | 0:08:07 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_cephadm_repos} | 1 | |
dead | 6466173 | 2021-10-28 23:14:48 | 2021-10-29 11:39:45 | 2021-10-29 23:54:16 | 12:14:31 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: hit max job timeout
pass | 6466174 | 2021-10-28 23:14:48 | 2021-10-29 11:41:55 | 2021-10-29 12:05:58 | 0:24:03 | 0:11:07 | 0:12:56 | smithi | master | centos | 8.stream | rados/singleton/{all/deduptool mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6466175 | 2021-10-28 23:14:49 | 2021-10-29 11:44:36 | 2021-10-29 12:47:52 | 1:03:16 | 0:38:20 | 0:24:56 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} | 2 | |
pass | 6466176 | 2021-10-28 23:14:50 | 2021-10-29 11:48:37 | 2021-10-29 12:16:14 | 0:27:37 | 0:10:20 | 0:17:17 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6466177 | 2021-10-28 23:14:51 | 2021-10-29 12:15:55 | 559 | smithi | master | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_5925} | 2 | ||||
dead | 6466178 | 2021-10-28 23:14:52 | 2021-10-29 11:53:48 | 2021-10-30 00:06:40 | 12:12:52 | smithi | master | centos | 8.3 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |||
Failure Reason: hit max job timeout
fail | 6466179 | 2021-10-28 23:14:53 | 2021-10-29 11:54:19 | 2021-10-29 12:10:58 | 0:16:39 | 0:05:15 | 0:11:24 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/pool-access mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: No module named 'tasks'
fail | 6466180 | 2021-10-28 23:14:54 | 2021-10-29 11:54:49 | 2021-10-29 12:26:03 | 0:31:14 | 0:23:41 | 0:07:33 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466181 | 2021-10-28 23:14:55 | 2021-10-29 11:55:00 | 2021-10-29 12:24:06 | 0:29:06 | 0:16:49 | 0:12:17 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache} | 2 | |
fail | 6466182 | 2021-10-28 23:14:56 | 2021-10-29 11:56:20 | 2021-10-29 12:29:01 | 0:32:41 | 0:23:55 | 0:08:46 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466183 | 2021-10-28 23:14:57 | 2021-10-29 11:57:20 | 2021-10-29 12:20:54 | 0:23:34 | 0:11:39 | 0:11:55 | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/failover} | 2 | |
pass | 6466184 | 2021-10-28 23:14:58 | 2021-10-29 11:57:21 | 2021-10-29 12:24:02 | 0:26:41 | 0:13:02 | 0:13:39 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 6466185 | 2021-10-28 23:14:59 | 2021-10-29 11:59:41 | 2021-10-29 12:20:52 | 0:21:11 | 0:12:04 | 0:09:07 | smithi | master | centos | 8.stream | rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6466186 | 2021-10-28 23:15:00 | 2021-10-29 11:59:42 | 2021-10-29 12:30:02 | 0:30:20 | 0:23:19 | 0:07:01 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi194 with status 5: 'sudo systemctl stop ceph-ff3e0af8-38b1-11ec-8c28-001a4aab830c@mon.b'
fail | 6466187 | 2021-10-28 23:15:01 | 2021-10-29 12:01:11 | 2021-10-29 13:10:37 | 1:09:26 | 1:00:08 | 0:09:18 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_nfs} | 1 | |
Failure Reason: Command failed on smithi171 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 324ae006-38b2-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6466188 | 2021-10-28 23:15:02 | 2021-10-29 12:01:11 | 2021-10-29 12:48:24 | 0:47:13 | 0:40:23 | 0:06:50 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6466189 | 2021-10-28 23:15:03 | 2021-10-29 12:01:11 | 2021-10-29 13:12:38 | 1:11:27 | 1:04:46 | 0:06:41 | smithi | master | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/misc} | 1 | |
pass | 6466190 | 2021-10-28 23:15:04 | 2021-10-29 12:01:32 | 2021-10-29 12:31:55 | 0:30:23 | 0:23:10 | 0:07:13 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/dedup-io-mixed} | 2 | |
pass | 6466191 | 2021-10-28 23:15:05 | 2021-10-29 12:02:12 | 2021-10-29 12:43:05 | 0:40:53 | 0:29:34 | 0:11:19 | smithi | master | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream} tasks/rados_workunit_loadgen_mostlyread} | 2 | |
pass | 6466192 | 2021-10-28 23:15:06 | 2021-10-29 12:03:33 | 2021-10-29 12:28:27 | 0:24:54 | 0:12:17 | 0:12:37 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} | 1 | |
fail | 6466193 | 2021-10-28 23:15:07 | 2021-10-29 12:06:03 | 2021-10-29 12:37:57 | 0:31:54 | 0:21:38 | 0:10:16 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi157 with status 5: 'sudo systemctl stop ceph-16006a32-38b3-11ec-8c28-001a4aab830c@mon.b'
pass | 6466194 | 2021-10-28 23:15:08 | 2021-10-29 12:06:24 | 2021-10-29 12:47:08 | 0:40:44 | 0:33:16 | 0:07:28 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read} | 2 | |
pass | 6466195 | 2021-10-28 23:15:09 | 2021-10-29 12:08:04 | 2021-10-29 12:35:21 | 0:27:17 | 0:18:22 | 0:08:55 | smithi | master | rhel | 8.4 | rados/objectstore/{backends/fusestore supported-random-distro$/{rhel_8}} | 1 | |
fail | 6466196 | 2021-10-28 23:15:10 | 2021-10-29 12:09:55 | 2021-10-29 12:49:23 | 0:39:28 | 0:32:54 | 0:06:34 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi184 with status 5: 'sudo systemctl stop ceph-9ad27a4c-38b4-11ec-8c28-001a4aab830c@mon.b'
pass | 6466197 | 2021-10-28 23:15:11 | 2021-10-29 12:09:55 | 2021-10-29 12:55:24 | 0:45:29 | 0:24:27 | 0:21:02 | smithi | master | centos | 8.3 | rados/singleton/{all/divergent_priors2 mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6466198 | 2021-10-28 23:15:12 | 2021-10-29 12:09:55 | 2021-10-29 12:48:40 | 0:38:45 | 0:28:49 | 0:09:56 | smithi | master | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6466199 | 2021-10-28 23:15:13 | 2021-10-29 12:11:06 | 2021-10-29 12:47:52 | 0:36:46 | 0:24:24 | 0:12:22 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466200 | 2021-10-28 23:15:14 | 2021-10-29 12:11:26 | 2021-10-29 12:58:22 | 0:46:56 | 0:37:23 | 0:09:33 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 6466201 | 2021-10-28 23:15:15 | 2021-10-29 12:15:07 | 2021-10-29 12:49:44 | 0:34:37 | 0:22:20 | 0:12:17 | smithi | master | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466202 | 2021-10-28 23:15:16 | 2021-10-29 12:15:58 | 2021-10-29 13:09:46 | 0:53:48 | 0:30:18 | 0:23:30 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/dedup-io-snaps} | 2 | |
pass | 6466203 | 2021-10-28 23:15:17 | 2021-10-29 12:16:18 | 2021-10-29 12:36:14 | 0:19:56 | 0:09:25 | 0:10:31 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6466204 | 2021-10-28 23:15:18 | 2021-10-29 12:16:18 | 2021-10-29 13:00:31 | 0:44:13 | 0:34:18 | 0:09:55 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
fail | 6466205 | 2021-10-28 23:15:19 | 2021-10-29 12:18:59 | 2021-10-29 12:57:19 | 0:38:20 | 0:23:01 | 0:15:19 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi192 with status 5: 'sudo systemctl stop ceph-ef755e20-38b4-11ec-8c28-001a4aab830c@mon.b'
fail | 6466206 | 2021-10-28 23:15:20 | 2021-10-29 12:21:00 | 2021-10-29 12:39:47 | 0:18:47 | 0:10:19 | 0:08:28 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi134 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid da626adc-38b4-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6466207 | 2021-10-28 23:15:21 | 2021-10-29 12:21:00 | 2021-10-29 12:42:02 | 0:21:02 | 0:11:47 | 0:09:15 | smithi | master | centos | 8.stream | rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6466208 | 2021-10-28 23:15:21 | 2021-10-29 12:21:00 | 2021-10-29 12:40:15 | 0:19:15 | 0:09:40 | 0:09:35 | smithi | master | centos | 8.stream | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream} tasks/mon_clock_with_skews} | 2 | |
fail | 6466209 | 2021-10-28 23:15:23 | 2021-10-29 12:21:00 | 2021-10-29 12:50:42 | 0:29:42 | 0:19:07 | 0:10:35 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466210 | 2021-10-28 23:15:24 | 2021-10-29 12:21:31 | 2021-10-29 13:35:30 | 1:13:59 | 0:47:42 | 0:26:17 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
fail | 6466211 | 2021-10-28 23:15:25 | 2021-10-29 12:24:01 | 2021-10-29 13:15:24 | 0:51:23 | 0:27:36 | 0:23:47 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6466212 | 2021-10-28 23:15:25 | 2021-10-29 12:24:12 | 2021-10-29 12:57:24 | 0:33:12 | 0:23:47 | 0:09:25 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/dashboard} | 2 | |
Failure Reason: Test failure: test_ganesha (unittest.loader._FailedTest)
fail | 6466213 | 2021-10-28 23:15:26 | 2021-10-29 12:24:12 | 2021-10-29 12:59:54 | 0:35:42 | 0:25:32 | 0:10:10 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/1.7.0} | 1 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
pass | 6466214 | 2021-10-28 23:15:27 | 2021-10-29 12:24:12 | 2021-10-29 12:53:24 | 0:29:12 | 0:20:03 | 0:09:09 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} | 1 | |
dead | 6466215 | 2021-10-28 23:15:28 | 2021-10-29 12:24:13 | 2021-10-30 00:36:02 | 12:11:49 | smithi | master | centos | 8.3 | rados/upgrade/parallel/{0-distro$/{centos_8.3_container_tools_3.0} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |||
Failure Reason: hit max job timeout
dead | 6466216 | 2021-10-28 23:15:29 | 2021-10-29 12:25:33 | 2021-10-30 00:38:10 | 12:12:37 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: hit max job timeout
pass | 6466217 | 2021-10-28 23:15:30 | 2021-10-29 12:26:14 | 2021-10-29 15:32:49 | 3:06:35 | 2:38:07 | 0:28:28 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
pass | 6466218 | 2021-10-28 23:15:31 | 2021-10-29 12:29:04 | 2021-10-29 12:56:22 | 0:27:18 | 0:10:50 | 0:16:28 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6466219 | 2021-10-28 23:15:32 | 2021-10-29 12:32:05 | 2021-10-29 13:48:37 | 1:16:32 | 0:45:03 | 0:31:29 | smithi | master | centos | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_api_tests} | 2 | |
pass | 6466220 | 2021-10-28 23:15:33 | 2021-10-29 12:35:26 | 2021-10-29 13:43:22 | 1:07:56 | 0:57:40 | 0:10:16 | smithi | master | centos | 8.stream | rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream}} | 1 | |
dead | 6466221 | 2021-10-28 23:15:34 | 2021-10-29 12:36:16 | 2021-10-30 00:49:06 | 12:12:50 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason: hit max job timeout
pass | 6466222 | 2021-10-28 23:15:35 | 2021-10-29 12:37:37 | 2021-10-29 12:59:11 | 0:21:34 | 0:09:47 | 0:11:47 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_radosbench} | 1 | |
pass | 6466223 | 2021-10-28 23:15:36 | 2021-10-29 12:38:07 | 2021-10-29 13:13:11 | 0:35:04 | 0:26:10 | 0:08:54 | smithi | master | rhel | 8.4 | rados/cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.4_container_tools_3.0} 2-node-mgr orchestrator_cli} | 2 | |
pass | 6466224 | 2021-10-28 23:15:37 | 2021-10-29 12:39:58 | 2021-10-29 13:33:31 | 0:53:33 | 0:45:59 | 0:07:34 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/rados_api_tests} | 2 | |
pass | 6466225 | 2021-10-28 23:15:38 | 2021-10-29 12:40:18 | 2021-10-29 13:36:04 | 0:55:46 | 0:28:23 | 0:27:23 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/readwrite} | 2 | |
pass | 6466226 | 2021-10-28 23:15:39 | 2021-10-29 12:42:49 | 2021-10-29 13:10:07 | 0:27:18 | 0:16:09 | 0:11:09 | smithi | master | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-lz4 supported-random-distro$/{centos_8.stream} tasks/insights} | 2 | |
fail | 6466227 | 2021-10-28 23:15:40 | 2021-10-29 12:42:49 | 2021-10-29 13:14:21 | 0:31:32 | 0:19:58 | 0:11:34 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466228 | 2021-10-28 23:15:41 | 2021-10-29 12:43:09 | 2021-10-29 13:20:29 | 0:37:20 | 0:21:32 | 0:15:48 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6466229 | 2021-10-28 23:15:42 | 2021-10-29 12:43:30 | 2021-10-29 13:26:55 | 0:43:25 | 0:31:56 | 0:11:29 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
fail | 6466230 | 2021-10-28 23:15:43 | 2021-10-29 12:43:50 | 2021-10-29 13:16:23 | 0:32:33 | 0:19:27 | 0:13:06 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi188 with status 5: 'sudo systemctl stop ceph-79aa5d5e-38b8-11ec-8c28-001a4aab830c@mon.b'
pass | 6466231 | 2021-10-28 23:15:44 | 2021-10-29 12:47:11 | 2021-10-29 14:07:20 | 1:20:09 | 1:13:13 | 0:06:56 | smithi | master | rhel | 8.4 | rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6466232 | 2021-10-28 23:15:45 | 2021-10-29 12:48:01 | 2021-10-29 13:53:31 | 1:05:30 | 0:39:03 | 0:26:27 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason: Command failed on smithi073 with status 5: 'sudo systemctl stop ceph-4515e3ec-38bd-11ec-8c28-001a4aab830c@mon.b'
fail | 6466233 | 2021-10-28 23:15:46 | 2021-10-29 12:48:02 | 2021-10-29 13:07:43 | 0:19:41 | 0:12:58 | 0:06:43 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_3.0} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: Command failed on smithi017 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid abcb8b3c-38b8-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6466234 | 2021-10-28 23:15:47 | 2021-10-29 12:48:32 | 2021-10-29 13:33:59 | 0:45:27 | 0:28:43 | 0:16:44 | smithi | master | centos | 8.3 | rados/objectstore/{backends/keyvaluedb supported-random-distro$/{centos_8}} | 1 | |
pass | 6466235 | 2021-10-28 23:15:48 | 2021-10-29 12:48:42 | 2021-10-29 13:39:01 | 0:50:19 | 0:43:26 | 0:06:53 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
fail | 6466236 | 2021-10-28 23:15:49 | 2021-10-29 12:49:33 | 2021-10-29 13:21:08 | 0:31:35 | 0:21:08 | 0:10:27 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi195 with status 5: 'sudo systemctl stop ceph-07c9b31e-38b9-11ec-8c28-001a4aab830c@mon.b'
pass | 6466237 | 2021-10-28 23:15:50 | 2021-10-29 12:49:53 | 2021-10-29 13:24:33 | 0:34:40 | 0:22:37 | 0:12:03 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-many-deletes} | 2 | |
dead | 6466238 | 2021-10-28 23:15:51 | 2021-10-29 12:50:44 | 2021-10-30 01:07:25 | 12:16:41 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 | |||
Failure Reason: hit max job timeout
pass | 6466239 | 2021-10-28 23:15:52 | 2021-10-29 12:55:34 | 2021-10-29 13:21:51 | 0:26:17 | 0:15:27 | 0:10:50 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6466240 | 2021-10-28 23:15:53 | 2021-10-29 12:56:25 | 2021-10-29 14:08:29 | 1:12:04 | 1:02:03 | 0:10:01 | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mon} | 1 | |
pass | 6466241 | 2021-10-28 23:15:54 | 2021-10-29 12:56:25 | 2021-10-29 13:15:25 | 0:19:00 | 0:05:52 | 0:13:08 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6466242 | 2021-10-28 23:15:55 | 2021-10-29 12:56:25 | 2021-10-29 13:38:33 | 0:42:08 | 0:34:19 | 0:07:49 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi146 with status 5: 'sudo systemctl stop ceph-5ca73b84-38bb-11ec-8c28-001a4aab830c@mon.b'
pass | 6466243 | 2021-10-28 23:15:56 | 2021-10-29 12:57:16 | 2021-10-29 14:02:32 | 1:05:16 | 0:41:58 | 0:23:18 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6466244 | 2021-10-28 23:15:57 | 2021-10-29 12:57:26 | 2021-10-29 13:17:25 | 0:19:59 | 0:10:23 | 0:09:36 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_adoption} | 1 | |
pass | 6466245 | 2021-10-28 23:15:58 | 2021-10-29 12:57:26 | 2021-10-29 14:34:41 | 1:37:15 | 1:26:27 | 0:10:48 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/radosbench} | 2 | |
fail | 6466246 | 2021-10-28 23:15:59 | 2021-10-29 12:57:27 | 2021-10-29 13:29:38 | 0:32:11 | 0:23:39 | 0:08:32 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466247 | 2021-10-28 23:16:00 | 2021-10-29 12:58:27 | 2021-10-29 13:20:58 | 0:22:31 | 0:10:43 | 0:11:48 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6466248 | 2021-10-28 23:16:01 | 2021-10-29 12:59:18 | 2021-10-29 13:48:21 | 0:49:03 | 0:26:15 | 0:22:48 | smithi | master | centos | 8.3 | rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466249 | 2021-10-28 23:16:02 | 2021-10-29 13:00:38 | 2021-10-29 13:22:32 | 0:21:54 | 0:12:44 | 0:09:10 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} | 1 | |
pass | 6466250 | 2021-10-28 23:16:03 | 2021-10-29 13:00:38 | 2021-10-29 14:01:44 | 1:01:06 | 0:30:22 | 0:30:44 | smithi | master | centos | 8.3 | rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/mon_recovery} | 2 | |
pass | 6466251 | 2021-10-28 23:16:04 | 2021-10-29 13:08:40 | 2021-10-29 14:37:48 | 1:29:08 | 1:21:22 | 0:07:46 | smithi | master | centos | 8.stream | rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6466252 | 2021-10-28 23:16:05 | 2021-10-29 13:08:40 | 2021-10-29 14:05:19 | 0:56:39 | 0:25:47 | 0:30:52 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi196 with status 5: 'sudo systemctl stop ceph-3e68ef1a-38bf-11ec-8c28-001a4aab830c@mon.b'
fail | 6466253 | 2021-10-28 23:16:06 | 2021-10-29 13:09:51 | 2021-10-29 13:31:22 | 0:21:31 | 0:12:23 | 0:09:08 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi124 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 6466254 | 2021-10-28 23:16:07 | 2021-10-29 13:10:11 | 2021-10-29 13:38:07 | 0:27:56 | 0:20:12 | 0:07:44 | smithi | master | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} tasks/repair_test} | 2 | |
pass | 6466255 | 2021-10-28 23:16:08 | 2021-10-29 13:10:41 | 2021-10-29 14:03:55 | 0:53:14 | 0:30:55 | 0:22:19 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/redirect} | 2 | |
pass | 6466256 | 2021-10-28 23:16:09 | 2021-10-29 13:12:42 | 2021-10-29 13:43:46 | 0:31:04 | 0:23:20 | 0:07:44 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6466257 | 2021-10-28 23:16:10 | 2021-10-29 13:14:23 | 2021-10-29 13:58:33 | 0:44:10 | 0:18:53 | 0:25:17 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep} | 2 | |
fail | 6466258 | 2021-10-28 23:16:11 | 2021-10-29 13:15:33 | 2021-10-29 13:45:27 | 0:29:54 | 0:23:19 | 0:06:35 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466259 | 2021-10-28 23:16:12 | 2021-10-29 13:16:24 | 2021-10-29 14:33:46 | 1:17:22 | 1:07:46 | 0:09:36 | smithi | master | rhel | 8.4 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_osdmap_prune} | 2 | |
pass | 6466260 | 2021-10-28 23:16:13 | 2021-10-29 13:17:34 | 2021-10-29 13:45:45 | 0:28:11 | 0:18:20 | 0:09:51 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6466261 | 2021-10-28 23:16:14 | 2021-10-29 13:20:35 | 2021-10-29 13:52:29 | 0:31:54 | 0:21:18 | 0:10:36 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi195 with status 5: 'sudo systemctl stop ceph-7781303e-38bd-11ec-8c28-001a4aab830c@mon.b'
pass | 6466262 | 2021-10-28 23:16:15 | 2021-10-29 13:21:15 | 2021-10-29 15:25:04 | 2:03:49 | 1:43:56 | 0:19:53 | smithi | master | centos | 8.3 | rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6466263 | 2021-10-28 23:16:16 | 2021-10-29 13:21:15 | 2021-10-29 14:13:47 | 0:52:32 | 0:29:53 | 0:22:39 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
pass | 6466264 | 2021-10-28 23:16:17 | 2021-10-29 13:22:36 | 2021-10-29 14:18:04 | 0:55:28 | 0:31:38 | 0:23:50 | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-snappy supported-random-distro$/{centos_8} tasks/module_selftest} | 2 | |
fail | 6466265 | 2021-10-28 23:16:18 | 2021-10-29 13:24:37 | 2021-10-29 14:01:00 | 0:36:23 | 0:22:41 | 0:13:42 | smithi | master | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi132 with status 5: 'sudo systemctl stop ceph-167c4dfe-38be-11ec-8c28-001a4aab830c@mon.b'
pass | 6466266 | 2021-10-28 23:16:19 | 2021-10-29 13:26:58 | 2021-10-29 14:12:47 | 0:45:49 | 0:17:40 | 0:28:09 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/redirect_promote_tests} | 2 | |
fail | 6466267 | 2021-10-28 23:16:20 | 2021-10-29 13:29:48 | 2021-10-29 14:02:39 | 0:32:51 | 0:25:17 | 0:07:34 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466268 | 2021-10-28 23:16:21 | 2021-10-29 13:31:29 | 2021-10-29 14:03:01 | 0:31:32 | 0:22:12 | 0:09:20 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
fail | 6466269 | 2021-10-28 23:16:22 | 2021-10-29 13:33:39 | 2021-10-29 14:03:08 | 0:29:29 | 0:22:56 | 0:06:33 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi121 with status 5: 'sudo systemctl stop ceph-f955f80a-38be-11ec-8c28-001a4aab830c@mon.b'
pass | 6466270 | 2021-10-28 23:16:23 | 2021-10-29 13:33:50 | 2021-10-29 14:14:54 | 0:41:04 | 0:34:51 | 0:06:13 | smithi | master | rhel | 8.4 | rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{rhel_8}} | 1 | |
pass | 6466271 | 2021-10-28 23:16:23 | 2021-10-29 13:33:50 | 2021-10-29 13:53:22 | 0:19:32 | 0:10:56 | 0:08:36 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/crushdiff mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6466272 | 2021-10-28 23:16:24 | 2021-10-29 13:34:00 | 2021-10-29 13:59:21 | 0:25:21 | 0:19:18 | 0:06:03 | smithi | master | rhel | 8.4 | rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6466273 | 2021-10-28 23:16:25 | 2021-10-29 13:34:21 | 2021-10-29 14:11:57 | 0:37:36 | 0:23:02 | 0:14:34 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466274 | 2021-10-28 23:16:26 | 2021-10-29 13:35:31 | 2021-10-29 13:50:44 | 0:15:13 | 0:06:57 | 0:08:16 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 6466275 | 2021-10-28 23:16:27 | 2021-10-29 13:35:31 | 2021-10-29 14:13:43 | 0:38:12 | 0:26:38 | 0:11:34 | smithi | master | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/ec-small-objects} | 2 | |
pass | 6466276 | 2021-10-28 23:16:28 | 2021-10-29 13:36:12 | 2021-10-29 14:11:35 | 0:35:23 | 0:14:48 | 0:20:35 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/redirect_set_object} | 2 | |
pass | 6466277 | 2021-10-28 23:16:29 | 2021-10-29 13:38:12 | 2021-10-29 13:59:53 | 0:21:41 | 0:13:00 | 0:08:41 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} | 1 | |
dead | 6466278 | 2021-10-28 23:16:30 | 2021-10-29 13:38:43 | 2021-10-30 01:50:41 | 12:11:58 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: hit max job timeout
dead | 6466279 | 2021-10-28 23:16:31 | 2021-10-29 13:39:03 | 2021-10-30 01:54:33 | 12:15:30 | | | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |
Failure Reason: hit max job timeout
pass | 6466280 | 2021-10-28 23:16:32 | 2021-10-29 13:43:24 | 2021-10-29 14:09:05 | 0:25:41 | 0:19:12 | 0:06:29 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6466281 | 2021-10-28 23:16:33 | 2021-10-29 13:43:55 | 2021-10-29 18:15:02 | 4:31:07 | 4:22:06 | 0:09:01 | smithi | master | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/osd-backfill} | 1 | |
pass | 6466282 | 2021-10-28 23:16:34 | 2021-10-29 13:43:55 | 2021-10-29 14:18:44 | 0:34:49 | 0:24:59 | 0:09:50 | smithi | master | centos | 8.3 | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 | |
pass | 6466283 | 2021-10-28 23:16:35 | 2021-10-29 13:43:55 | 2021-10-29 14:07:17 | 0:23:22 | 0:14:38 | 0:08:44 | smithi | master | centos | 8.3 | rados/singleton/{all/max-pg-per-osd.from-primary mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6466284 | 2021-10-28 23:16:36 | 2021-10-29 13:43:56 | 2021-10-29 14:18:40 | 0:34:44 | 0:23:31 | 0:11:13 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 6466285 | 2021-10-28 23:16:37 | 2021-10-29 13:45:36 | 2021-10-29 14:20:00 | 0:34:24 | 0:20:11 | 0:14:13 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466286 | 2021-10-28 23:16:38 | 2021-10-29 13:50:46 | 2021-10-29 14:14:35 | 0:23:49 | 0:10:32 | 0:13:17 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/scrub_test} | 2 | |
pass | 6466287 | 2021-10-28 23:16:39 | 2021-10-29 13:52:37 | 2021-10-29 14:17:21 | 0:24:44 | 0:12:02 | 0:12:42 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/set-chunks-read} | 2 | |
fail | 6466288 | 2021-10-28 23:16:40 | 2021-10-29 13:53:37 | 2021-10-29 14:24:48 | 0:31:11 | 0:24:58 | 0:06:13 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6466289 | 2021-10-28 23:16:41 | 2021-10-29 13:53:38 | 2021-10-29 14:34:40 | 0:41:02 | 0:22:52 | 0:18:10 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason: Command failed on smithi049 with status 5: 'sudo systemctl stop ceph-fd3640de-38c2-11ec-8c28-001a4aab830c@mon.b'
fail | 6466290 | 2021-10-28 23:16:42 | 2021-10-29 13:59:29 | 2021-10-29 14:32:13 | 0:32:44 | 0:23:35 | 0:09:09 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi132 with status 5: 'sudo systemctl stop ceph-cb590cae-38c2-11ec-8c28-001a4aab830c@mon.b'
pass | 6466291 | 2021-10-28 23:16:43 | 2021-10-29 14:01:09 | 2021-10-29 14:23:13 | 0:22:04 | 0:09:02 | 0:13:02 | smithi | master | centos | 8.3 | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} | 3 | |
fail | 6466292 | 2021-10-28 23:16:44 | 2021-10-29 14:01:50 | 2021-10-29 14:21:17 | 0:19:27 | 0:10:42 | 0:08:45 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_nfs} | 1 | |
Failure Reason: Command failed on smithi037 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1302c392-38c3-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6466293 | 2021-10-28 23:16:45 | 2021-10-29 14:02:30 | 2021-10-29 14:25:30 | 0:23:00 | 0:14:23 | 0:08:37 | smithi | master | centos | 8.stream | rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6466294 | 2021-10-28 23:16:46 | 2021-10-29 14:02:30 | 2021-10-29 14:20:09 | 0:17:39 | 0:07:10 | 0:10:29 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 6466295 | 2021-10-28 23:16:47 | 2021-10-29 14:02:41 | 2021-10-30 02:14:18 | 12:11:37 | | | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: hit max job timeout
pass | 6466296 | 2021-10-28 23:16:48 | 2021-10-29 14:02:41 | 2021-10-29 14:32:27 | 0:29:46 | 0:22:25 | 0:07:21 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6466297 | 2021-10-28 23:16:49 | 2021-10-29 14:03:12 | 2021-10-29 14:42:39 | 0:39:27 | 0:29:23 | 0:10:04 | smithi | master | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6466298 | 2021-10-28 23:16:50 | 2021-10-29 14:03:12 | 2021-10-29 14:39:19 | 0:36:07 | 0:24:22 | 0:11:45 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/small-objects-balanced} | 2 | |
fail | 6466299 | 2021-10-28 23:16:51 | 2021-10-29 14:04:03 | 2021-10-29 20:49:10 | 6:45:07 | 6:33:23 | 0:11:44 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi156 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 6466300 | 2021-10-28 23:16:52 | 2021-10-29 14:05:24 | 2021-10-29 14:40:51 | 0:35:27 | 0:22:40 | 0:12:47 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi168 with status 5: 'sudo systemctl stop ceph-15a0de58-38c4-11ec-8c28-001a4aab830c@mon.b'
pass | 6466301 | 2021-10-28 23:16:53 | 2021-10-29 14:07:24 | 2021-10-29 14:47:36 | 0:40:12 | 0:28:52 | 0:11:20 | smithi | master | centos | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/one workloads/rados_mon_workunits} | 2 | |
fail | 6466302 | 2021-10-28 23:16:54 | 2021-10-29 14:08:35 | 2021-10-29 14:44:40 | 0:36:05 | 0:21:54 | 0:14:11 | smithi | master | centos | 8.2 | rados/cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi171 with status 5: 'sudo systemctl stop ceph-97f978c4-38c4-11ec-8c28-001a4aab830c@mon.b'
pass | 6466303 | 2021-10-28 23:16:55 | 2021-10-29 14:11:36 | 2021-10-29 14:51:20 | 0:39:44 | 0:31:46 | 0:07:58 | smithi | master | rhel | 8.4 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-zlib supported-random-distro$/{rhel_8} tasks/progress} | 2 | |
pass | 6466304 | 2021-10-28 23:16:56 | 2021-10-29 14:12:06 | 2021-10-29 14:29:47 | 0:17:41 | 0:07:43 | 0:09:58 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/mon-auth-caps mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6466305 | 2021-10-28 23:16:57 | 2021-10-29 14:12:06 | 2021-10-29 14:44:13 | 0:32:07 | 0:19:59 | 0:12:08 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466306 | 2021-10-28 23:16:58 | 2021-10-29 14:12:57 | 2021-10-29 14:33:02 | 0:20:05 | 0:10:38 | 0:09:27 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6466307 | 2021-10-28 23:16:59 | 2021-10-29 14:13:47 | 2021-10-29 14:33:57 | 0:20:10 | 0:09:33 | 0:10:37 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_read} | 1 | |
pass | 6466308 | 2021-10-28 23:17:00 | 2021-10-29 14:13:48 | 2021-10-29 16:51:07 | 2:37:19 | 2:14:41 | 0:22:38 | smithi | master | centos | 8.3 | rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} | 1 | |
fail | 6466309 | 2021-10-28 23:17:01 | 2021-10-29 14:13:48 | 2021-10-29 14:50:32 | 0:36:44 | 0:24:18 | 0:12:26 | smithi | master | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 6466310 | 2021-10-28 23:17:02 | 2021-10-29 14:14:38 | 2021-10-30 02:31:33 | 12:16:55 | | | smithi | master | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
Failure Reason: hit max job timeout
pass | 6466311 | 2021-10-28 23:17:03 | 2021-10-29 14:17:29 | 2021-10-29 14:47:55 | 0:30:26 | 0:19:08 | 0:11:18 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/small-objects-localized} | 2 | |
fail | 6466312 | 2021-10-28 23:17:04 | 2021-10-29 14:18:09 | 2021-10-29 14:52:44 | 0:34:35 | 0:22:47 | 0:11:48 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi188 with status 5: 'sudo systemctl stop ceph-0ccc764c-38c5-11ec-8c28-001a4aab830c@mon.b'
fail | 6466313 | 2021-10-28 23:17:05 | 2021-10-29 14:18:50 | 2021-10-29 14:38:00 | 0:19:10 | 0:10:15 | 0:08:55 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi170 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5c4eb838-38c5-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6466314 | 2021-10-28 23:17:06 | 2021-10-29 14:18:50 | 2021-10-29 14:39:51 | 0:21:01 | 0:09:22 | 0:11:39 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/mon-config-key-caps mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6466315 | 2021-10-28 23:17:07 | 2021-10-29 14:20:10 | 2021-10-29 14:49:43 | 0:29:33 | 0:22:46 | 0:06:47 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466316 | 2021-10-28 23:17:08 | 2021-10-29 14:20:11 | 2021-10-29 15:06:06 | 0:45:55 | 0:34:14 | 0:11:41 | smithi | master | centos | 8.3 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
fail | 6466317 | 2021-10-28 23:17:09 | 2021-10-29 14:20:51 | 2021-10-29 14:44:45 | 0:23:54 | 0:11:13 | 0:12:41 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} | 2 | |
Failure Reason: Command failed on smithi148 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 295cdc4c-38c6-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6466318 | 2021-10-28 23:17:10 | 2021-10-29 14:23:22 | 2021-10-29 15:06:38 | 0:43:16 | 0:28:22 | 0:14:54 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
pass | 6466319 | 2021-10-28 23:17:11 | 2021-10-29 14:24:52 | 2021-10-29 14:45:36 | 0:20:44 | 0:11:13 | 0:09:31 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6466320 | 2021-10-28 23:17:12 | 2021-10-29 14:24:53 | 2021-10-29 14:55:20 | 0:30:27 | 0:16:06 | 0:14:21 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/libcephsqlite} | 2 | |
pass | 6466321 | 2021-10-28 23:17:13 | 2021-10-29 14:32:32 | 2021-10-29 15:13:45 | 0:41:13 | 0:35:26 | 0:05:47 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/small-objects} | 2 | |
fail | 6466322 | 2021-10-28 23:17:14 | 2021-10-29 14:32:32 | 2021-10-29 15:04:36 | 0:32:04 | 0:24:49 | 0:07:15 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6466323 | 2021-10-28 23:17:15 | 2021-10-29 14:33:33 | 2021-10-29 15:12:06 | 0:38:33 | 0:27:35 | 0:10:58 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8.stream} thrashers/minsize_recovery thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |
dead | 6466324 | 2021-10-28 23:17:16 | 2021-10-29 14:33:53 | 2021-10-30 02:45:27 | 12:11:34 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: hit max job timeout
pass | 6466325 | 2021-10-28 23:17:17 | 2021-10-29 14:34:04 | 2021-10-29 15:12:32 | 0:38:28 | 0:26:05 | 0:12:23 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 6466326 | 2021-10-28 23:17:18 | 2021-10-29 14:34:14 | 2021-10-29 15:05:55 | 0:31:41 | 0:20:15 | 0:11:26 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
||||||||||||||
pass | 6466327 | 2021-10-28 23:17:19 | 2021-10-29 14:34:44 | 2021-10-29 14:56:14 | 0:21:30 | 0:12:20 | 0:09:10 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6466328 | 2021-10-28 23:17:20 | 2021-10-29 14:34:45 | 2021-10-29 15:05:49 | 0:31:04 | 0:19:51 | 0:11:13 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi160 with status 5: 'sudo systemctl stop ceph-82a69ff8-38c7-11ec-8c28-001a4aab830c@mon.b' |
||||||||||||||
pass | 6466329 | 2021-10-28 23:17:20 | 2021-10-29 14:34:45 | 2021-10-29 15:05:07 | 0:30:22 | 0:21:38 | 0:08:44 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6466330 | 2021-10-28 23:17:21 | 2021-10-29 14:37:56 | 2021-10-29 17:59:52 | 3:21:56 | 3:13:00 | 0:08:56 | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} | 1 | |
pass | 6466331 | 2021-10-28 23:17:22 | 2021-10-29 14:38:06 | 2021-10-29 15:14:39 | 0:36:33 | 0:25:19 | 0:11:14 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
fail | 6466332 | 2021-10-28 23:17:23 | 2021-10-29 16:04:14 | 2021-10-29 16:23:46 | 0:19:32 | 0:12:48 | 0:06:44 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_rhel8} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason:
Command failed on smithi171 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9466ff3c1b9d2f6b2d5c2fa1e5cfc2396b9701ee shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 08ccbb74-38d4-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
||||||||||||||
pass | 6466333 | 2021-10-28 23:17:24 | 2021-10-29 16:04:15 | 2021-10-29 16:27:36 | 0:23:21 | 0:11:22 | 0:11:59 | smithi | master | centos | 8.stream | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} tasks/mon_clock_with_skews} | 3 | |
fail | 6466334 | 2021-10-28 23:17:25 | 2021-10-29 16:04:15 | 2021-10-29 16:35:36 | 0:31:21 | 0:21:55 | 0:09:26 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
Command failed on smithi174 with status 5: 'sudo systemctl stop ceph-50f99746-38d4-11ec-8c28-001a4aab830c@mon.b' |
||||||||||||||
pass | 6466335 | 2021-10-28 23:17:26 | 2021-10-29 16:04:15 | 2021-10-29 16:25:43 | 0:21:28 | 0:09:27 | 0:12:01 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 | |
pass | 6466336 | 2021-10-28 23:17:27 | 2021-10-29 16:04:16 | 2021-10-29 16:23:21 | 0:19:05 | 0:10:33 | 0:08:32 | smithi | master | centos | 8.3 | rados/singleton/{all/mon-config mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
dead | 6466337 | 2021-10-28 23:17:28 | 2021-10-29 16:04:16 | 2021-10-29 16:05:27 | 0:01:11 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |||
Failure Reason:
Error reimaging machines: Failed to power on smithi027 |
||||||||||||||
dead | 6466338 | 2021-10-28 23:17:29 | 2021-10-29 16:04:16 | 2021-10-30 04:13:59 | 12:09:43 | smithi | master | centos | 8.3 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 | |||
Failure Reason:
hit max job timeout |
||||||||||||||
pass | 6466339 | 2021-10-28 23:17:30 | 2021-10-29 16:04:17 | 2021-10-29 16:32:57 | 0:28:40 | 0:18:08 | 0:10:32 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} | 2 | |
pass | 6466340 | 2021-10-28 23:17:31 | 2021-10-29 16:04:17 | 2021-10-29 16:26:16 | 0:21:59 | 0:14:01 | 0:07:58 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6466341 | 2021-10-28 23:17:32 | 2021-10-29 16:04:17 | 2021-10-29 16:47:38 | 0:43:21 | 0:35:45 | 0:07:36 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
fail | 6466342 | 2021-10-28 23:17:33 | 2021-10-29 16:04:18 | 2021-10-29 16:39:37 | 0:35:19 | 0:22:54 | 0:12:25 | smithi | master | centos | 8.3 | rados/cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi148 with status 5: 'sudo systemctl stop ceph-9a4f0e08-38d4-11ec-8c28-001a4aab830c@mon.b' |
||||||||||||||
pass | 6466343 | 2021-10-28 23:17:34 | 2021-10-29 16:04:18 | 2021-10-29 16:45:56 | 0:41:38 | 0:30:34 | 0:11:04 | smithi | master | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/snaps-few-objects} | 2 | |
pass | 6466344 | 2021-10-28 23:17:35 | 2021-10-29 16:04:18 | 2021-10-29 16:26:34 | 0:22:16 | 0:10:29 | 0:11:47 | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-zstd supported-random-distro$/{ubuntu_latest} tasks/prometheus} | 2 | |
pass | 6466345 | 2021-10-28 23:17:36 | 2021-10-29 16:04:19 | 2021-10-29 16:24:10 | 0:19:51 | 0:10:40 | 0:09:11 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_adoption} | 1 | |
pass | 6466346 | 2021-10-28 23:17:37 | 2021-10-29 16:04:19 | 2021-10-29 18:57:47 | 2:53:28 | 2:23:11 | 0:30:17 | smithi | master | centos | 8.3 | rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8}} | 1 |