User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
nojha | 2021-10-28 21:32:34 | 2021-10-28 21:34:04 | 2021-10-29 13:10:34 | 15:36:30 | rados | wip-43687 | smithi | 5ad5661 | 142 | 95 | 18 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6465387 | 2021-10-28 21:33:18 | 2021-10-28 21:34:04 | 2021-10-28 21:58:15 | 0:24:11 | 0:13:55 | 0:10:16 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} | 2 | |
pass | 6465388 | 2021-10-28 21:33:19 | 2021-10-28 21:34:05 | 2021-10-28 22:06:06 | 0:32:01 | 0:18:35 | 0:13:26 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-pool-snaps-readproxy} | 2 | |
pass | 6465389 | 2021-10-28 21:33:20 | 2021-10-28 21:34:05 | 2021-10-28 22:05:41 | 0:31:36 | 0:20:46 | 0:10:50 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/backfill-toofull mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6465390 | 2021-10-28 21:33:20 | 2021-10-28 21:34:05 | 2021-10-28 21:52:08 | 0:18:03 | 0:06:35 | 0:11:28 | smithi | master | ubuntu | 20.04 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} | 2 | |
pass | 6465391 | 2021-10-28 21:33:21 | 2021-10-28 21:34:06 | 2021-10-28 22:19:27 | 0:45:21 | 0:32:12 | 0:13:09 | smithi | master | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
pass | 6465392 | 2021-10-28 21:33:22 | 2021-10-28 21:34:06 | 2021-10-28 22:17:41 | 0:43:35 | 0:28:20 | 0:15:15 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6465393 | 2021-10-28 21:33:23 | 2021-10-28 21:34:06 | 2021-10-28 22:10:19 | 0:36:13 | 0:26:08 | 0:10:05 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 6465394 | 2021-10-28 21:33:23 | 2021-10-28 21:34:07 | 2021-10-28 22:04:39 | 0:30:32 | 0:23:26 | 0:07:06 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465395 | 2021-10-28 21:33:24 | 2021-10-28 21:34:07 | 2021-10-28 22:11:30 | 0:37:23 | 0:24:00 | 0:13:23 | smithi | master | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 | |
fail | 6465396 | 2021-10-28 21:33:25 | 2021-10-28 21:34:07 | 2021-10-28 22:10:43 | 0:36:36 | 0:24:37 | 0:11:59 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/1-node k8s/1.21 net/flannel rook/1.7.0} | 1 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
fail | 6465397 | 2021-10-28 21:33:25 | 2021-10-28 21:34:07 | 2021-10-28 22:07:32 | 0:33:25 | 0:23:42 | 0:09:43 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/dashboard} | 2 | |
Failure Reason: Test failure: test_ganesha (unittest.loader._FailedTest)
pass | 6465398 | 2021-10-28 21:33:26 | 2021-10-28 21:34:08 | 2021-10-28 21:53:29 | 0:19:21 | 0:07:06 | 0:12:15 | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/alloc-hint supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6465399 | 2021-10-28 21:33:27 | 2021-10-28 21:34:08 | 2021-10-28 21:59:09 | 0:25:01 | 0:13:43 | 0:11:18 | smithi | master | centos | 8.3 | rados/rest/{mgr-restful supported-random-distro$/{centos_8}} | 1 | |
pass | 6465400 | 2021-10-28 21:33:27 | 2021-10-28 21:34:08 | 2021-10-28 22:04:30 | 0:30:22 | 0:19:28 | 0:10:54 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6465401 | 2021-10-28 21:33:28 | 2021-10-28 21:34:09 | 2021-10-28 21:56:26 | 0:22:17 | 0:11:14 | 0:11:03 | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/c2c} | 1 | |
dead | 6465402 | 2021-10-28 21:33:29 | 2021-10-28 21:34:09 | 2021-10-29 09:46:03 | 12:11:54 | | | smithi | master | ubuntu | 20.04 | rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 |
Failure Reason: hit max job timeout
pass | 6465403 | 2021-10-28 21:33:30 | 2021-10-28 21:34:09 | 2021-10-28 22:09:31 | 0:35:22 | 0:26:14 | 0:09:08 | smithi | master | centos | 8.3 | rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} | 1 | |
dead | 6465404 | 2021-10-28 21:33:30 | 2021-10-28 21:34:09 | 2021-10-29 09:45:47 | 12:11:38 | | | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 |
Failure Reason: hit max job timeout
fail | 6465405 | 2021-10-28 21:33:31 | 2021-10-28 21:34:10 | 2021-10-28 22:02:43 | 0:28:33 | 0:15:52 | 0:12:41 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: Command failed on smithi016 with status 1: "sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a12136de-3839-11ec-8c28-001a4aab830c -- bash -c 'ceph fs set cephfs max_mds 1'"
dead | 6465406 | 2021-10-28 21:33:32 | 2021-10-28 21:34:10 | 2021-10-29 09:45:25 | 12:11:15 | | | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6465407 | 2021-10-28 21:33:32 | 2021-10-28 21:34:10 | 2021-10-28 22:05:57 | 0:31:47 | 0:25:37 | 0:06:10 | smithi | master | rhel | 8.4 | rados/cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.4_container_tools_3.0} 2-node-mgr orchestrator_cli} | 2 | |
fail | 6465408 | 2021-10-28 21:33:33 | 2021-10-28 21:34:11 | 2021-10-28 22:06:44 | 0:32:33 | 0:20:12 | 0:12:21 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465409 | 2021-10-28 21:33:34 | 2021-10-28 21:34:11 | 2021-10-28 22:07:04 | 0:32:53 | 0:19:41 | 0:13:12 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi121 with status 5: 'sudo systemctl stop ceph-3047af42-3839-11ec-8c28-001a4aab830c@mon.b'
pass | 6465410 | 2021-10-28 21:33:35 | 2021-10-28 21:34:11 | 2021-10-28 21:51:47 | 0:17:36 | 0:07:47 | 0:09:49 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6465411 | 2021-10-28 21:33:35 | 2021-10-28 21:34:12 | 2021-10-28 21:53:13 | 0:19:01 | 0:09:22 | 0:09:39 | smithi | master | centos | 8.3 | rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.3_container_tools_3.0} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: Command failed on smithi057 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5ad5661f3e361e8c573b395b18740c607fdfcced shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d373e178-3838-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6465412 | 2021-10-28 21:33:36 | 2021-10-28 21:34:12 | 2021-10-28 22:08:34 | 0:34:22 | 0:22:22 | 0:12:00 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi077 with status 5: 'sudo systemctl stop ceph-7ca39bd0-3839-11ec-8c28-001a4aab830c@mon.b'
pass | 6465413 | 2021-10-28 21:33:37 | 2021-10-28 21:34:12 | 2021-10-28 21:57:45 | 0:23:33 | 0:10:52 | 0:12:41 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6465414 | 2021-10-28 21:33:38 | 2021-10-28 21:34:13 | 2021-10-28 22:14:33 | 0:40:20 | 0:33:06 | 0:07:14 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi132 with status 5: 'sudo systemctl stop ceph-6e7449dc-383a-11ec-8c28-001a4aab830c@mon.b'
pass | 6465415 | 2021-10-28 21:33:38 | 2021-10-28 21:34:13 | 2021-10-28 22:12:47 | 0:38:34 | 0:25:29 | 0:13:05 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache-snaps-balanced} | 2 | |
pass | 6465416 | 2021-10-28 21:33:39 | 2021-10-28 21:34:13 | 2021-10-28 21:54:27 | 0:20:14 | 0:10:40 | 0:09:34 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_adoption} | 1 | |
pass | 6465417 | 2021-10-28 21:33:40 | 2021-10-28 21:34:13 | 2021-10-28 22:00:21 | 0:26:08 | 0:19:31 | 0:06:37 | smithi | master | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} tasks/rados_cls_all} | 2 | |
fail | 6465418 | 2021-10-28 21:33:41 | 2021-10-28 21:34:14 | 2021-10-28 22:06:20 | 0:32:06 | 0:24:58 | 0:07:08 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465419 | 2021-10-28 21:33:41 | 2021-10-28 21:34:14 | 2021-10-28 22:04:47 | 0:30:33 | 0:19:39 | 0:10:54 | smithi | master | centos | 8.3 | rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465420 | 2021-10-28 21:33:42 | 2021-10-28 21:34:14 | 2021-10-28 22:00:36 | 0:26:22 | 0:15:26 | 0:10:56 | smithi | master | centos | 8.stream | rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6465421 | 2021-10-28 21:33:43 | 2021-10-28 21:34:15 | 2021-10-28 21:54:22 | 0:20:07 | 0:12:13 | 0:07:54 | smithi | master | centos | 8.stream | rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465422 | 2021-10-28 21:33:43 | 2021-10-28 21:34:15 | 2021-10-28 22:11:49 | 0:37:34 | 0:23:43 | 0:13:51 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465423 | 2021-10-28 21:33:44 | 2021-10-28 21:34:15 | 2021-10-28 21:55:18 | 0:21:03 | 0:09:42 | 0:11:21 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 | |
fail | 6465424 | 2021-10-28 21:33:45 | 2021-10-28 21:34:15 | 2021-10-28 22:08:38 | 0:34:23 | 0:21:50 | 0:12:33 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi124 with status 5: 'sudo systemctl stop ceph-7b4b8c70-3839-11ec-8c28-001a4aab830c@mon.b'
pass | 6465425 | 2021-10-28 21:33:46 | 2021-10-28 21:34:16 | 2021-10-28 21:56:44 | 0:22:28 | 0:05:59 | 0:16:29 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6465426 | 2021-10-28 21:33:46 | 2021-10-28 21:41:27 | 2021-10-28 22:12:12 | 0:30:45 | 0:21:48 | 0:08:57 | smithi | master | centos | 8.3 | rados/standalone/{supported-random-distro$/{centos_8} workloads/crush} | 1 | |
pass | 6465427 | 2021-10-28 21:33:47 | 2021-10-28 21:41:27 | 2021-10-28 22:22:02 | 0:40:35 | 0:29:24 | 0:11:11 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
fail | 6465428 | 2021-10-28 21:33:48 | 2021-10-28 21:42:28 | 2021-10-28 22:13:37 | 0:31:09 | 0:19:42 | 0:11:27 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465429 | 2021-10-28 21:33:49 | 2021-10-28 21:44:19 | 2021-10-28 22:09:02 | 0:24:43 | 0:12:40 | 0:12:03 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache} | 2 | |
fail | 6465430 | 2021-10-28 21:33:49 | 2021-10-28 21:45:59 | 2021-10-28 22:19:59 | 0:34:00 | 0:22:10 | 0:11:50 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason: Command failed on smithi178 with status 5: 'sudo systemctl stop ceph-094b70d4-383b-11ec-8c28-001a4aab830c@mon.b'
fail | 6465431 | 2021-10-28 21:33:50 | 2021-10-28 21:46:40 | 2021-10-28 22:17:52 | 0:31:12 | 0:23:02 | 0:08:10 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465432 | 2021-10-28 21:33:51 | 2021-10-28 21:48:50 | 2021-10-28 22:18:14 | 0:29:24 | 0:22:51 | 0:06:33 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi115 with status 5: 'sudo systemctl stop ceph-083d1b0c-383b-11ec-8c28-001a4aab830c@mon.b'
pass | 6465433 | 2021-10-28 21:33:51 | 2021-10-28 21:49:01 | 2021-10-28 22:12:39 | 0:23:38 | 0:13:20 | 0:10:18 | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-zlib supported-random-distro$/{centos_8} tasks/prometheus} | 2 | |
pass | 6465434 | 2021-10-28 21:33:52 | 2021-10-28 21:49:21 | 2021-10-28 22:06:08 | 0:16:47 | 0:07:12 | 0:09:35 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 6465435 | 2021-10-28 21:33:53 | 2021-10-28 21:49:21 | 2021-10-28 22:52:14 | 1:02:53 | 0:51:50 | 0:11:03 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/none msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 6465436 | 2021-10-28 21:33:54 | 2021-10-28 21:50:12 | 2021-10-29 10:03:59 | 12:13:47 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
pass | 6465437 | 2021-10-28 21:33:54 | 2021-10-28 21:51:52 | 2021-10-28 22:18:07 | 0:26:15 | 0:12:20 | 0:13:55 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6465438 | 2021-10-28 21:33:55 | 2021-10-28 21:52:53 | 2021-10-28 22:08:51 | 0:15:58 | 0:06:31 | 0:09:27 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6465439 | 2021-10-28 21:33:56 | 2021-10-28 21:53:03 | 2021-10-28 22:24:25 | 0:31:22 | 0:20:12 | 0:11:10 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465440 | 2021-10-28 21:33:57 | 2021-10-28 21:53:14 | 2021-10-28 22:25:27 | 0:32:13 | 0:24:07 | 0:08:06 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465441 | 2021-10-28 21:33:57 | 2021-10-28 21:53:24 | 2021-10-28 22:13:00 | 0:19:36 | 0:09:09 | 0:10:27 | smithi | master | centos | 8.3 | rados/objectstore/{backends/filejournal supported-random-distro$/{centos_8}} | 1 | |
pass | 6465442 | 2021-10-28 21:33:58 | 2021-10-28 21:53:34 | 2021-10-28 22:23:39 | 0:30:05 | 0:19:01 | 0:11:04 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados tasks/rados_cls_all validater/lockdep} | 2 | |
pass | 6465443 | 2021-10-28 21:33:59 | 2021-10-28 21:54:35 | 2021-10-28 22:17:10 | 0:22:35 | 0:06:59 | 0:15:36 | smithi | master | ubuntu | 20.04 | rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} | 3 | |
pass | 6465444 | 2021-10-28 21:33:59 | 2021-10-28 21:56:45 | 2021-10-28 22:38:10 | 0:41:25 | 0:35:09 | 0:06:16 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-small-objects-balanced} | 2 | |
pass | 6465445 | 2021-10-28 21:34:00 | 2021-10-28 21:57:06 | 2021-10-28 22:39:29 | 0:42:23 | 0:29:38 | 0:12:45 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6465446 | 2021-10-28 21:34:01 | 2021-10-28 21:58:16 | 2021-10-28 22:38:24 | 0:40:08 | 0:25:49 | 0:14:19 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 6465447 | 2021-10-28 21:34:02 | 2021-10-28 22:00:27 | 2021-10-28 23:20:20 | 1:19:53 | 1:12:51 | 0:07:02 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465448 | 2021-10-28 21:34:02 | 2021-10-28 22:00:37 | 2021-10-28 22:31:35 | 0:30:58 | 0:18:29 | 0:12:29 | smithi | master | centos | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/one workloads/pool-create-delete} | 2 | |
pass | 6465449 | 2021-10-28 21:34:03 | 2021-10-28 22:02:48 | 2021-10-28 22:26:21 | 0:23:33 | 0:10:16 | 0:13:17 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/dedup-io-snaps} | 2 | |
pass | 6465450 | 2021-10-28 21:34:04 | 2021-10-28 22:04:48 | 2021-10-28 23:30:53 | 1:26:05 | 1:15:43 | 0:10:22 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 6465451 | 2021-10-28 21:34:05 | 2021-10-28 22:04:49 | 2021-10-29 10:15:02 | 12:10:13 | | | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
fail | 6465452 | 2021-10-28 21:34:05 | | 2021-10-28 22:39:12 | 1304 | | | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 |
Failure Reason: Command failed on smithi174 with status 5: 'sudo systemctl stop ceph-bc18d966-383d-11ec-8c28-001a4aab830c@mon.b'
fail | 6465453 | 2021-10-28 21:34:06 | 2021-10-28 22:05:59 | 2021-10-28 22:44:08 | 0:38:09 | 0:26:13 | 0:11:56 | smithi | master | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi172 with status 5: 'sudo systemctl stop ceph-8ed79564-383d-11ec-8c28-001a4aab830c@mon.b'
pass | 6465454 | 2021-10-28 21:34:07 | | 2021-10-28 22:33:26 | 968 | | | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} | 2 |
pass | 6465455 | 2021-10-28 21:34:08 | 2021-10-28 22:06:10 | 2021-10-28 22:33:30 | 0:27:20 | 0:20:46 | 0:06:34 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6465456 | 2021-10-28 21:34:09 | 2021-10-28 22:06:30 | 2021-10-28 23:41:55 | 1:35:25 | 1:27:18 | 0:08:07 | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/erasure-code} | 1 | |
fail | 6465457 | 2021-10-28 21:34:09 | 2021-10-28 22:06:31 | 2021-10-28 22:42:40 | 0:36:09 | 0:23:59 | 0:12:10 | smithi | master | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465458 | 2021-10-28 21:34:10 | 2021-10-28 22:06:51 | 2021-10-28 22:41:10 | 0:34:19 | 0:22:23 | 0:11:56 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi121 with status 5: 'sudo systemctl stop ceph-50b4f772-383d-11ec-8c28-001a4aab830c@mon.b'
fail | 6465459 | 2021-10-28 21:34:11 | 2021-10-28 22:07:11 | 2021-10-28 22:39:01 | 0:31:50 | 0:23:40 | 0:08:10 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465460 | 2021-10-28 21:34:12 | 2021-10-28 22:07:42 | 2021-10-28 22:30:43 | 0:23:01 | 0:11:06 | 0:11:55 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} | 1 | |
fail | 6465461 | 2021-10-28 21:34:13 | 2021-10-28 22:08:42 | 2021-10-28 22:27:41 | 0:18:59 | 0:10:53 | 0:08:06 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi124 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5ad5661f3e361e8c573b395b18740c607fdfcced shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid dac25b44-383d-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6465462 | 2021-10-28 21:34:13 | | 2021-10-28 22:50:56 | 1737 | | | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/host rook/master} | 3 |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
fail | 6465463 | 2021-10-28 21:34:14 | | 2021-10-28 23:32:59 | 4384 | | | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465464 | 2021-10-28 21:34:15 | 2021-10-28 22:09:03 | 2021-10-28 22:25:29 | 0:16:26 | 0:06:21 | 0:10:05 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 6465465 | 2021-10-28 21:34:16 | 2021-10-28 22:09:34 | 2021-10-29 10:21:53 | 12:12:19 | | | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 |
Failure Reason: hit max job timeout
dead | 6465466 | 2021-10-28 21:34:17 | 2021-10-28 22:10:24 | 2021-10-29 10:23:47 | 12:13:23 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
pass | 6465467 | 2021-10-28 21:34:17 | 2021-10-28 22:11:35 | 2021-10-29 00:51:33 | 2:39:58 | 2:30:43 | 0:09:15 | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6465468 | 2021-10-28 21:34:18 | 2021-10-28 22:11:35 | 2021-10-28 22:49:58 | 0:38:23 | 0:27:43 | 0:10:40 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} | 2 | |
fail | 6465469 | 2021-10-28 21:34:19 | 2021-10-28 22:11:55 | 2021-10-28 22:41:57 | 0:30:02 | 0:19:43 | 0:10:19 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465470 | 2021-10-28 21:34:20 | 2021-10-28 22:12:46 | 2021-10-28 22:31:42 | 0:18:56 | 0:10:58 | 0:07:58 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6465471 | 2021-10-28 21:34:20 | 2021-10-28 22:12:46 | 2021-10-28 22:28:52 | 0:16:06 | 0:06:44 | 0:09:22 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason: Command failed on smithi167 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5ad5661f3e361e8c573b395b18740c607fdfcced shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f09c51e0-383d-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6465472 | 2021-10-28 21:34:21 | 2021-10-28 22:12:56 | 2021-10-28 22:46:34 | 0:33:38 | 0:22:12 | 0:11:26 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi169 with status 5: 'sudo systemctl stop ceph-a57aa12a-383e-11ec-8c28-001a4aab830c@mon.b'
pass | 6465473 | 2021-10-28 21:34:22 | 2021-10-28 22:13:07 | 2021-10-28 22:37:05 | 0:23:58 | 0:13:22 | 0:10:36 | smithi | master | centos | 8.3 | rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6465474 | 2021-10-28 21:34:22 | 2021-10-28 22:13:47 | 2021-10-28 22:43:51 | 0:30:04 | 0:19:13 | 0:10:51 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465475 | 2021-10-28 21:34:23 | 2021-10-28 22:14:37 | 2021-10-28 22:48:31 | 0:33:54 | 0:20:11 | 0:13:43 | smithi | master | centos | 8.3 | rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465476 | 2021-10-28 21:34:24 | 2021-10-28 22:17:18 | 2021-10-28 22:40:15 | 0:22:57 | 0:13:16 | 0:09:41 | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-hybrid supported-random-distro$/{centos_8} tasks/crash} | 2 | |
fail | 6465477 | 2021-10-28 21:34:24 | 2021-10-28 22:17:18 | 2021-10-28 22:46:33 | 0:29:15 | 0:19:12 | 0:10:03 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi143 with status 5: 'sudo systemctl stop ceph-efd36982-383e-11ec-8c28-001a4aab830c@mon.b'
pass | 6465478 | 2021-10-28 21:34:25 | 2021-10-28 22:17:29 | 2021-10-28 22:41:01 | 0:23:32 | 0:14:54 | 0:08:38 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6465479 | 2021-10-28 21:34:26 | 2021-10-28 22:17:49 | 2021-10-28 22:49:17 | 0:31:28 | 0:25:38 | 0:05:50 | smithi | master | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/mgr} | 1 | |
pass | 6465480 | 2021-10-28 21:34:27 | 2021-10-28 22:17:49 | 2021-10-28 22:55:03 | 0:37:14 | 0:27:28 | 0:09:46 | smithi | master | centos | 8.3 | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
fail | 6465481 | 2021-10-28 21:34:27 | 2021-10-28 22:17:50 | 2021-10-28 22:39:03 | 0:21:13 | 0:12:27 | 0:08:46 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi197 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5ad5661f3e361e8c573b395b18740c607fdfcced TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 6465482 | 2021-10-28 21:34:28 | 2021-10-28 22:18:00 | 2021-10-28 23:17:25 | 0:59:25 | 0:49:55 | 0:09:30 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/radosbench} | 2 | |
fail | 6465483 | 2021-10-28 21:34:29 | 2021-10-28 22:18:10 | 2021-10-28 22:50:35 | 0:32:25 | 0:20:02 | 0:12:23 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 6465484 | 2021-10-28 21:34:29 | 2021-10-28 22:18:11 | 2021-10-29 10:28:27 | 12:10:16 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465485 | 2021-10-28 21:34:30 | 2021-10-28 22:18:21 | 2021-10-28 22:45:03 | 0:26:42 | 0:21:30 | 0:05:12 | smithi | master | rhel | 8.4 | rados/singleton/{all/mon-config-key-caps mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6465486 | 2021-10-28 21:34:31 | 2021-10-28 22:18:21 | 2021-10-28 22:44:57 | 0:26:36 | 0:10:31 | 0:16:05 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6465487 | 2021-10-28 21:34:32 | 2021-10-28 22:20:02 | 2021-10-28 22:53:53 | 0:33:51 | 0:21:43 | 0:12:08 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi150 with status 5: 'sudo systemctl stop ceph-fccd51ce-383f-11ec-8c28-001a4aab830c@mon.b'
pass | 6465488 | 2021-10-28 21:34:33 | 2021-10-28 22:22:13 | 2021-10-29 00:47:48 | 2:25:35 | 2:15:48 | 0:09:47 | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6465489 | 2021-10-28 21:34:33 | 2021-10-28 22:22:13 | 2021-10-28 22:58:52 | 0:36:39 | 0:22:58 | 0:13:41 | smithi | master | centos | 8.3 | rados/cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi146 with status 5: 'sudo systemctl stop ceph-5f0bc230-3840-11ec-8c28-001a4aab830c@mon.b'
pass | 6465490 | 2021-10-28 21:34:34 | 2021-10-28 22:23:43 | 2021-10-28 23:05:51 | 0:42:08 | 0:34:30 | 0:07:38 | smithi | master | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_big} | 2 | |
fail | 6465491 | 2021-10-28 21:34:35 | 2021-10-28 22:24:34 | 2021-10-28 22:54:44 | 0:30:10 | 0:23:18 | 0:06:52 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465492 | 2021-10-28 21:34:36 | 2021-10-28 22:25:34 | 2021-10-28 23:03:41 | 0:38:07 | 0:26:25 | 0:11:42 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/lockdep} | 2 | |
fail | 6465493 | 2021-10-28 21:34:36 | 2021-10-28 22:26:25 | 2021-10-28 22:59:44 | 0:33:19 | 0:23:42 | 0:09:37 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465494 | 2021-10-28 21:34:37 | 2021-10-28 22:27:45 | 2021-10-28 22:48:18 | 0:20:33 | 0:10:02 | 0:10:31 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} | 1 | |
pass | 6465495 | 2021-10-28 21:34:38 | 2021-10-28 22:28:56 | 2021-10-28 23:00:13 | 0:31:17 | 0:20:20 | 0:10:57 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6465496 | 2021-10-28 21:34:38 | 2021-10-28 22:55:16 | 2021-10-28 23:22:44 | 0:27:28 | 0:11:34 | 0:15:54 | smithi | master | ubuntu | 20.04 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} | 2 | |
pass | 6465497 | 2021-10-28 21:34:39 | 2021-10-28 22:58:56 | 2021-10-28 23:39:50 | 0:40:54 | 0:31:59 | 0:08:55 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-many-deletes} | 2 | |
pass | 6465498 | 2021-10-28 21:34:40 | 2021-10-28 22:59:47 | 2021-10-28 23:43:46 | 0:43:59 | 0:27:20 | 0:16:39 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6465499 | 2021-10-28 21:34:40 | 2021-10-28 23:03:48 | 2021-10-28 23:45:26 | 0:41:38 | 0:28:10 | 0:13:28 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6465500 | 2021-10-28 21:34:41 | 2021-10-28 23:05:59 | 2021-10-28 23:47:54 | 0:41:55 | 0:30:03 | 0:11:52 | smithi | master | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream} thrashers/sync workloads/rados_api_tests} | 2 | |
pass | 6465501 | 2021-10-28 21:34:42 | 2021-10-28 23:08:39 | 2021-10-28 23:58:27 | 0:49:48 | 0:36:15 | 0:13:33 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
pass | 6465502 | 2021-10-28 21:34:43 | 2021-10-28 23:15:01 | 2021-10-28 23:34:08 | 0:19:07 | 0:07:44 | 0:11:23 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/mon-config mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 6465503 | 2021-10-28 21:34:43 | 2021-10-28 23:16:51 | 2021-10-29 11:30:06 | 12:13:15 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465504 | 2021-10-28 21:34:44 | 2021-10-28 23:17:32 | 2021-10-28 23:46:44 | 0:29:12 | 0:23:01 | 0:06:11 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/redirect_promote_tests} | 2 | |
fail | 6465505 | 2021-10-28 21:34:45 | 2021-10-28 23:17:32 | 2021-10-29 00:43:39 | 1:26:07 | 1:12:42 | 0:13:25 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: Command failed on smithi125 with status 5: 'sudo systemctl stop ceph-ee83f8b8-3847-11ec-8c28-001a4aab830c@mon.b'
fail | 6465506 | 2021-10-28 21:34:46 | 2021-10-28 23:17:53 | 2021-10-28 23:50:09 | 0:32:16 | 0:24:12 | 0:08:04 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465507 | 2021-10-28 21:34:46 | 2021-10-28 23:18:43 | 2021-10-28 23:49:26 | 0:30:43 | 0:23:26 | 0:07:17 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465508 | 2021-10-28 21:34:47 | 2021-10-28 23:20:14 | 2021-10-28 23:49:50 | 0:29:36 | 0:23:06 | 0:06:30 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi191 with status 5: 'sudo systemctl stop ceph-ceadea8a-3847-11ec-8c28-001a4aab830c@mon.b'
fail | 6465509 | 2021-10-28 21:34:48 | 2021-10-28 23:20:24 | 2021-10-28 23:40:04 | 0:19:40 | 0:10:44 | 0:08:56 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_nfs} | 1 | |
Failure Reason: Command failed on smithi090 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5ad5661f3e361e8c573b395b18740c607fdfcced shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fde15864-3847-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6465510 | 2021-10-28 21:34:49 | 2021-10-28 23:20:24 | 2021-10-28 23:57:59 | 0:37:35 | 0:28:51 | 0:08:44 | smithi | master | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6465511 | 2021-10-28 21:34:49 | 2021-10-28 23:20:25 | 2021-10-29 00:14:12 | 0:53:47 | 0:43:16 | 0:10:31 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6465512 | 2021-10-28 21:34:50 | 2021-10-28 23:20:55 | 2021-10-29 00:33:59 | 1:13:04 | 1:07:45 | 0:05:19 | smithi | master | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/misc} | 1 | |
fail | 6465513 | 2021-10-28 21:34:51 | 2021-10-28 23:20:55 | 2021-10-28 23:55:08 | 0:34:13 | 0:22:35 | 0:11:38 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi154 with status 5: 'sudo systemctl stop ceph-504e2082-3848-11ec-8c28-001a4aab830c@mon.b'
pass | 6465514 | 2021-10-28 21:34:51 | 2021-10-28 23:21:56 | 2021-10-28 23:40:12 | 0:18:16 | 0:08:33 | 0:09:43 | smithi | master | centos | 8.3 | rados/objectstore/{backends/fusestore supported-random-distro$/{centos_8}} | 1 | |
fail | 6465515 | 2021-10-28 21:34:52 | 2021-10-28 23:22:46 | 2021-10-28 23:59:07 | 0:36:21 | 0:24:10 | 0:12:11 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465516 | 2021-10-28 21:34:53 | 2021-10-28 23:22:56 | 2021-10-29 00:26:00 | 1:03:04 | 0:57:01 | 0:06:03 | smithi | master | rhel | 8.4 | rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6465517 | 2021-10-28 21:34:54 | 2021-10-28 23:22:57 | 2021-10-28 23:57:00 | 0:34:03 | 0:23:23 | 0:10:40 | smithi | master | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465518 | 2021-10-28 21:34:54 | 2021-10-28 23:23:10 | 2021-10-28 23:53:32 | 0:30:22 | 0:18:23 | 0:11:59 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/set-chunks-read} | 2 | |
fail | 6465519 | 2021-10-28 21:34:55 | 2021-10-28 23:24:01 | 2021-10-28 23:55:27 | 0:31:26 | 0:20:00 | 0:11:26 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465520 | 2021-10-28 21:34:56 | 2021-10-28 23:24:21 | 2021-10-28 23:53:59 | 0:29:38 | 0:19:41 | 0:09:57 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465521 | 2021-10-28 21:34:57 | 2021-10-28 23:24:32 | 2021-10-29 00:09:24 | 0:44:52 | 0:28:30 | 0:16:22 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} | 3 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
dead | 6465522 | 2021-10-28 21:34:57 | 2021-10-28 23:27:13 | 2021-10-29 11:40:10 | 12:12:57 | smithi | master | centos | 8.3 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 | |||
Failure Reason: hit max job timeout
dead | 6465523 | 2021-10-28 21:34:58 | 2021-10-28 23:28:03 | 2021-10-29 11:40:14 | 12:12:11 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465524 | 2021-10-28 21:34:59 | 2021-10-28 23:28:03 | 2021-10-28 23:55:15 | 0:27:12 | 0:20:40 | 0:06:32 | smithi | master | rhel | 8.4 | rados/singleton/{all/peer mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 | |
dead | 6465525 | 2021-10-28 21:34:59 | 2021-10-28 23:28:04 | 2021-10-29 11:41:35 | 12:13:31 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465526 | 2021-10-28 21:35:00 | 2021-10-28 23:29:54 | 2021-10-28 23:50:19 | 0:20:25 | 0:09:31 | 0:10:54 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_write} | 1 | |
pass | 6465527 | 2021-10-28 21:35:01 | 2021-10-28 23:29:55 | 2021-10-28 23:58:34 | 0:28:39 | 0:16:37 | 0:12:02 | smithi | master | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-stupid supported-random-distro$/{centos_8.stream} tasks/insights} | 2 | |
pass | 6465528 | 2021-10-28 21:35:02 | 2021-10-28 23:30:55 | 2021-10-29 00:12:19 | 0:41:24 | 0:32:29 | 0:08:55 | smithi | master | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mostlyread} | 2 | |
pass | 6465529 | 2021-10-28 21:35:02 | 2021-10-28 23:33:06 | 2021-10-29 00:03:11 | 0:30:05 | 0:14:32 | 0:15:33 | smithi | master | centos | 8.2 | rados/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.2_container_tools_3.0} 2-node-mgr orchestrator_cli} | 2 | |
pass | 6465530 | 2021-10-28 21:35:03 | 2021-10-28 23:37:57 | 2021-10-28 23:58:57 | 0:21:00 | 0:12:06 | 0:08:54 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465531 | 2021-10-28 21:35:04 | 2021-10-28 23:37:57 | 2021-10-29 00:08:16 | 0:30:19 | 0:19:17 | 0:11:02 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465532 | 2021-10-28 21:35:05 | 2021-10-28 23:38:57 | 2021-10-29 00:10:10 | 0:31:13 | 0:19:07 | 0:12:06 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi124 with status 5: 'sudo systemctl stop ceph-920375ac-384a-11ec-8c28-001a4aab830c@mon.b'
pass | 6465533 | 2021-10-28 21:35:05 | 2021-10-28 23:39:58 | 2021-10-29 00:11:39 | 0:31:41 | 0:25:05 | 0:06:36 | smithi | master | rhel | 8.4 | rados/objectstore/{backends/keyvaluedb supported-random-distro$/{rhel_8}} | 1 | |
fail | 6465534 | 2021-10-28 21:35:06 | 2021-10-28 23:40:08 | 2021-10-28 23:59:21 | 0:19:13 | 0:12:39 | 0:06:34 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_rhel8} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: Command failed on smithi146 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5ad5661f3e361e8c573b395b18740c607fdfcced shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7ffc9262-384a-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6465535 | 2021-10-28 21:35:07 | 2021-10-28 23:40:19 | 2021-10-29 00:11:16 | 0:30:57 | 0:14:25 | 0:16:32 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6465536 | 2021-10-28 21:35:07 | 2021-10-28 23:43:49 | 2021-10-29 00:18:57 | 0:35:08 | 0:23:22 | 0:11:46 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi140 with status 5: 'sudo systemctl stop ceph-c139566a-384b-11ec-8c28-001a4aab830c@mon.b'
pass | 6465537 | 2021-10-28 21:35:08 | 2021-10-28 23:45:30 | 2021-10-29 01:12:05 | 1:26:35 | 1:19:39 | 0:06:56 | smithi | master | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/mon} | 1 | |
pass | 6465538 | 2021-10-28 21:35:09 | 2021-10-28 23:46:50 | 2021-10-29 00:19:17 | 0:32:27 | 0:21:18 | 0:11:09 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/small-objects-localized} | 2 | |
fail | 6465539 | 2021-10-28 21:35:10 | 2021-10-28 23:48:01 | 2021-10-29 00:30:22 | 0:42:21 | 0:34:55 | 0:07:26 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi129 with status 5: 'sudo systemctl stop ceph-3ff8735e-384d-11ec-8c28-001a4aab830c@mon.b'
pass | 6465540 | 2021-10-28 21:35:10 | 2021-10-28 23:49:11 | 2021-10-29 00:16:49 | 0:27:38 | 0:21:10 | 0:06:28 | smithi | master | rhel | 8.4 | rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} | 2 | |
pass | 6465541 | 2021-10-28 21:35:11 | 2021-10-28 23:49:32 | 2021-10-29 00:08:47 | 0:19:15 | 0:10:50 | 0:08:25 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_adoption} | 1 | |
fail | 6465542 | 2021-10-28 21:35:12 | 2021-10-28 23:49:32 | 2021-10-29 00:19:10 | 0:29:38 | 0:23:10 | 0:06:28 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465543 | 2021-10-28 21:35:13 | 2021-10-28 23:49:53 | 2021-10-29 00:13:14 | 0:23:21 | 0:13:44 | 0:09:37 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} | 2 | |
pass | 6465544 | 2021-10-28 21:35:13 | 2021-10-28 23:50:13 | 2021-10-29 00:11:31 | 0:21:18 | 0:11:21 | 0:09:57 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465545 | 2021-10-28 21:35:14 | 2021-10-28 23:50:23 | 2021-10-29 00:23:07 | 0:32:44 | 0:19:25 | 0:13:19 | smithi | master | centos | 8.3 | rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465546 | 2021-10-28 21:35:15 | 2021-10-28 23:52:04 | 2021-10-29 00:14:01 | 0:21:57 | 0:08:30 | 0:13:27 | smithi | master | ubuntu | 20.04 | rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} | 3 | |
pass | 6465547 | 2021-10-28 21:35:15 | 2021-10-28 23:54:04 | 2021-10-29 00:34:34 | 0:40:30 | 0:29:47 | 0:10:43 | smithi | master | centos | 8.3 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
pass | 6465548 | 2021-10-28 21:35:16 | 2021-10-28 23:55:15 | 2021-10-29 00:36:40 | 0:41:25 | 0:30:12 | 0:11:13 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6465549 | 2021-10-28 21:35:17 | 2021-10-28 23:55:35 | 2021-10-29 00:34:10 | 0:38:35 | 0:27:01 | 0:11:34 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6465550 | 2021-10-28 21:35:18 | 2021-10-28 23:57:06 | 2021-10-29 00:37:33 | 0:40:27 | 0:29:18 | 0:11:09 | smithi | master | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream} thrashers/many workloads/rados_mon_workunits} | 2 | |
fail | 6465551 | 2021-10-28 21:35:19 | 2021-10-28 23:58:06 | 2021-10-29 00:32:15 | 0:34:09 | 0:25:56 | 0:08:13 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465552 | 2021-10-28 21:35:19 | 2021-10-28 23:58:36 | 2021-10-29 00:31:49 | 0:33:13 | 0:22:34 | 0:10:39 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi136 with status 5: 'sudo systemctl stop ceph-6ac8804c-384d-11ec-8c28-001a4aab830c@mon.b'
pass | 6465553 | 2021-10-28 21:35:20 | 2021-10-28 23:58:37 | 2021-10-29 00:21:57 | 0:23:20 | 0:13:39 | 0:09:41 | smithi | master | centos | 8.3 | rados/singleton/{all/radostool mon_election/classic msgr-failures/none msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6465554 | 2021-10-28 21:35:21 | 2021-10-28 23:58:57 | 2021-10-29 00:36:32 | 0:37:35 | 0:26:42 | 0:10:53 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream} thrashers/none thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
fail | 6465555 | 2021-10-28 21:35:21 | 2021-10-28 23:59:17 | 2021-10-29 00:32:30 | 0:33:13 | 0:22:57 | 0:10:16 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465556 | 2021-10-28 21:35:22 | 2021-10-29 00:03:18 | 2021-10-29 00:37:36 | 0:34:18 | 0:22:31 | 0:11:47 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi159 with status 5: 'sudo systemctl stop ceph-74bce8b2-384e-11ec-8c28-001a4aab830c@mon.b'
pass | 6465557 | 2021-10-28 21:35:23 | 2021-10-29 00:08:19 | 2021-10-29 00:46:09 | 0:37:50 | 0:28:40 | 0:09:10 | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6465558 | 2021-10-28 21:35:24 | 2021-10-29 00:08:20 | 2021-10-29 00:28:01 | 0:19:41 | 0:09:46 | 0:09:55 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/crushdiff mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6465559 | 2021-10-28 21:35:24 | 2021-10-29 00:08:50 | 2021-10-29 00:45:30 | 0:36:40 | 0:23:48 | 0:12:52 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465560 | 2021-10-28 21:35:25 | 2021-10-29 00:09:30 | 2021-10-29 00:24:45 | 0:15:15 | 0:07:02 | 0:08:13 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 6465561 | 2021-10-28 21:35:26 | 2021-10-29 00:09:31 | 2021-10-29 00:34:06 | 0:24:35 | 0:12:43 | 0:11:52 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_fio} | 1 | |
dead | 6465562 | 2021-10-28 21:35:27 | 2021-10-29 00:09:41 | 2021-10-29 12:22:57 | 12:13:16 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465563 | 2021-10-28 21:35:27 | 2021-10-29 00:10:11 | 2021-10-29 04:45:11 | 4:35:00 | 4:23:08 | 0:11:52 | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd-backfill} | 1 | |
pass | 6465564 | 2021-10-28 21:35:28 | 2021-10-29 00:10:12 | 2021-10-29 00:56:35 | 0:46:23 | 0:36:50 | 0:09:33 | smithi | master | centos | 8.3 | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 | |
fail | 6465565 | 2021-10-28 21:35:29 | 2021-10-29 00:11:22 | 2021-10-29 00:42:35 | 0:31:13 | 0:19:45 | 0:11:28 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465566 | 2021-10-28 21:35:30 | 2021-10-29 00:11:22 | 2021-10-29 00:44:20 | 0:32:58 | 0:26:11 | 0:06:47 | smithi | master | rhel | 8.4 | rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6465567 | 2021-10-28 21:35:30 | 2021-10-29 00:11:23 | 2021-10-29 00:43:21 | 0:31:58 | 0:23:35 | 0:08:23 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465568 | 2021-10-28 21:35:31 | 2021-10-29 00:11:43 | 2021-10-29 00:39:15 | 0:27:32 | 0:16:13 | 0:11:19 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/repair_test} | 2 | |
pass | 6465569 | 2021-10-28 21:35:32 | 2021-10-29 00:11:43 | 2021-10-29 00:50:35 | 0:38:52 | 0:25:01 | 0:13:51 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |
pass | 6465570 | 2021-10-28 21:35:33 | 2021-10-29 00:12:24 | 2021-10-29 00:31:55 | 0:19:31 | 0:11:04 | 0:08:27 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6465571 | 2021-10-28 21:35:33 | 2021-10-29 00:12:44 | 2021-10-29 00:51:41 | 0:38:57 | 0:27:38 | 0:11:19 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/snaps-few-objects} | 2 | |
fail | 6465572 | 2021-10-28 21:35:34 | 2021-10-29 00:13:15 | 2021-10-29 00:47:30 | 0:34:15 | 0:22:17 | 0:11:58 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason: Command failed on smithi186 with status 5: 'sudo systemctl stop ceph-a3802ae6-384f-11ec-8c28-001a4aab830c@mon.b'
dead | 6465573 | 2021-10-28 21:35:35 | 2021-10-29 00:14:05 | 2021-10-29 12:26:03 | 12:11:58 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason: hit max job timeout
fail | 6465574 | 2021-10-28 21:35:36 | 2021-10-29 00:14:15 | 2021-10-29 00:50:23 | 0:36:08 | 0:22:42 | 0:13:26 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi174 with status 5: 'sudo systemctl stop ceph-fbd6d708-384f-11ec-8c28-001a4aab830c@mon.b'
fail | 6465575 | 2021-10-28 21:35:36 | 2021-10-29 00:16:56 | 2021-10-29 00:51:59 | 0:35:03 | 0:22:12 | 0:12:51 | smithi | master | centos | 8.2 | rados/cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi114 with status 5: 'sudo systemctl stop ceph-2b8c761a-3850-11ec-8c28-001a4aab830c@mon.b'
pass | 6465576 | 2021-10-28 21:35:37 | 2021-10-29 00:18:46 | 2021-10-29 00:50:48 | 0:32:02 | 0:19:53 | 0:12:09 | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/progress} | 2 | |
fail | 6465577 | 2021-10-28 21:35:38 | 2021-10-29 00:19:07 | 2021-10-29 00:48:31 | 0:29:24 | 0:19:45 | 0:09:39 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465578 | 2021-10-28 21:35:39 | 2021-10-29 00:19:17 | 2021-10-29 00:38:43 | 0:19:26 | 0:09:47 | 0:09:39 | smithi | master | centos | 8.3 | rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 2 | |
pass | 6465579 | 2021-10-28 21:35:39 | 2021-10-29 00:19:18 | 2021-10-29 02:49:03 | 2:29:45 | 2:05:43 | 0:24:02 | smithi | master | centos | 8.3 | rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} | 1 | |
fail | 6465580 | 2021-10-28 21:35:40 | 2021-10-29 00:21:58 | 2021-10-29 01:51:01 | 1:29:03 | 1:16:15 | 0:12:48 | smithi | master | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465581 | 2021-10-28 21:35:41 | 2021-10-29 00:23:09 | 2021-10-29 01:02:06 | 0:38:57 | 0:23:16 | 0:15:41 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi101 with status 5: 'sudo systemctl stop ceph-d7b943d2-3850-11ec-8c28-001a4aab830c@mon.b'
fail | 6465582 | 2021-10-28 21:35:42 | 2021-10-29 00:26:09 | 2021-10-29 00:49:20 | 0:23:11 | 0:11:07 | 0:12:04 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi102 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5ad5661f3e361e8c573b395b18740c607fdfcced shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 955c8e76-3851-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6465583 | 2021-10-28 21:35:42 | 2021-10-29 00:28:10 | 2021-10-29 01:01:43 | 0:33:33 | 0:24:46 | 0:08:47 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465584 | 2021-10-28 21:35:43 | 2021-10-29 00:30:30 | 2021-10-29 01:03:09 | 0:32:39 | 0:23:29 | 0:09:10 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6465585 | 2021-10-28 21:35:44 | 2021-10-29 00:31:51 | 2021-10-29 01:05:21 | 0:33:30 | 0:24:59 | 0:08:31 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/flannel rook/master} | 1 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
fail | 6465586 | 2021-10-28 21:35:45 | 2021-10-29 00:31:51 | 2021-10-29 00:55:48 | 0:23:57 | 0:12:08 | 0:11:49 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} tasks/e2e} | 2 | |
Failure Reason: Command failed on smithi025 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5ad5661f3e361e8c573b395b18740c607fdfcced shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 363796a6-3852-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6465587 | 2021-10-28 21:35:45 | 2021-10-29 00:32:22 | 2021-10-29 00:50:07 | 0:17:45 | 0:08:13 | 0:09:32 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6465588 | 2021-10-28 21:35:46 | 2021-10-29 00:32:22 | 2021-10-29 01:05:36 | 0:33:14 | 0:25:54 | 0:07:20 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 6465589 | 2021-10-28 21:35:47 | 2021-10-29 00:32:32 | 2021-10-29 12:45:26 | 12:12:54 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465590 | 2021-10-28 21:35:47 | 2021-10-29 00:34:03 | 2021-10-29 01:05:35 | 0:31:32 | 0:20:32 | 0:11:00 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
dead | 6465591 | 2021-10-28 21:35:48 | 2021-10-29 00:34:03 | 2021-10-29 12:45:44 | 12:11:41 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465592 | 2021-10-28 21:35:49 | 2021-10-29 00:34:14 | 2021-10-29 01:12:08 | 0:37:54 | 0:27:01 | 0:10:53 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/lockdep} | 2 | |
pass | 6465593 | 2021-10-28 21:35:50 | 2021-10-29 00:34:44 | 2021-10-29 01:01:13 | 0:26:29 | 0:15:59 | 0:10:30 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6465594 | 2021-10-28 21:35:51 | 2021-10-29 00:34:44 | 2021-10-29 01:07:51 | 0:33:07 | 0:20:15 | 0:12:52 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465595 | 2021-10-28 21:35:51 | 2021-10-29 00:36:35 | 2021-10-29 04:02:13 | 3:25:38 | 3:15:44 | 0:09:54 | smithi | master | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/osd} | 1 | |
pass | 6465596 | 2021-10-28 21:35:52 | 2021-10-29 00:36:45 | 2021-10-29 01:03:42 | 0:26:57 | 0:20:12 | 0:06:45 | smithi | master | rhel | 8.4 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} tasks/mon_clock_no_skews} | 2 | |
pass | 6465597 | 2021-10-28 21:35:53 | 2021-10-29 00:36:45 | 2021-10-29 01:22:48 | 0:46:03 | 0:33:23 | 0:12:40 | smithi | master | centos | 8.3 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
pass | 6465598 | 2021-10-28 21:35:54 | 2021-10-29 00:37:36 | 2021-10-29 01:21:23 | 0:43:47 | 0:31:15 | 0:12:32 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6465599 | 2021-10-28 21:35:54 | 2021-10-29 00:38:07 | 2021-10-29 01:24:00 | 0:45:53 | 0:33:45 | 0:12:08 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 6465600 | 2021-10-28 21:35:55 | 2021-10-29 00:38:47 | 2021-10-29 00:58:20 | 0:19:33 | 0:12:49 | 0:06:44 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_rhel8} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason: Command failed on smithi148 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5ad5661f3e361e8c573b395b18740c607fdfcced shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bcfda64e-3852-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6465601 | 2021-10-28 21:35:56 | 2021-10-29 00:39:17 | 2021-10-29 01:17:34 | 0:38:17 | 0:27:09 | 0:11:08 | smithi | master | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/snaps-few-objects} | 2 | |
pass | 6465602 | 2021-10-28 21:35:57 | 2021-10-29 00:39:48 | 2021-10-29 01:01:31 | 0:21:43 | 0:12:28 | 0:09:15 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} | 1 | |
fail | 6465603 | 2021-10-28 21:35:57 | 2021-10-29 00:39:58 | 2021-10-29 01:15:08 | 0:35:10 | 0:22:37 | 0:12:33 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi168 with status 5: 'sudo systemctl stop ceph-7932409a-3853-11ec-8c28-001a4aab830c@mon.b'
pass | 6465604 | 2021-10-28 21:35:58 | 2021-10-29 00:41:49 | 2021-10-29 01:04:25 | 0:22:36 | 0:14:03 | 0:08:33 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6465605 | 2021-10-28 21:35:59 | 2021-10-29 00:41:49 | 2021-10-29 03:26:58 | 2:45:09 | 2:14:53 | 0:30:16 | smithi | master | centos | 8.3 | rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8}} | 1 | |
fail | 6465606 | 2021-10-28 21:36:00 | 2021-10-29 00:41:49 | 2021-10-29 01:18:41 | 0:36:52 | 0:24:09 | 0:12:43 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465607 | 2021-10-28 21:36:00 | 2021-10-29 00:42:40 | 2021-10-29 01:15:01 | 0:32:21 | 0:20:11 | 0:12:10 | smithi | master | centos | 8.3 | rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465608 | 2021-10-28 21:36:01 | 2021-10-29 00:43:30 | 2021-10-29 01:24:45 | 0:41:15 | 0:31:39 | 0:09:36 | smithi | master | centos | 8.stream | rados/singleton/{all/thrash-eio mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream}} | 2 | |
pass | 6465609 | 2021-10-28 21:36:02 | 2021-10-29 00:43:41 | 2021-10-29 01:11:20 | 0:27:39 | 0:20:44 | 0:06:55 | smithi | master | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} tasks/libcephsqlite} | 2 | |
fail | 6465610 | 2021-10-28 21:36:02 | 2021-10-29 00:44:21 | 2021-10-29 01:16:50 | 0:32:29 | 0:19:42 | 0:12:47 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi172 with status 5: 'sudo systemctl stop ceph-c37abdf8-3853-11ec-8c28-001a4aab830c@mon.b'
pass | 6465611 | 2021-10-28 21:36:03 | 2021-10-29 00:45:31 | 2021-10-29 01:08:45 | 0:23:14 | 0:12:35 | 0:10:39 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/cache-agent-small} | 2 | |
fail | 6465612 | 2021-10-28 21:36:04 | 2021-10-29 00:45:52 | 2021-10-29 01:09:25 | 0:23:33 | 0:12:49 | 0:10:44 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi157 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5ad5661f3e361e8c573b395b18740c607fdfcced TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
fail | 6465613 | 2021-10-28 21:36:05 | 2021-10-29 00:45:52 | 2021-10-29 01:16:45 | 0:30:53 | 0:19:39 | 0:11:14 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 6465614 | 2021-10-28 21:36:05 | 2021-10-29 00:47:33 | 2021-10-29 12:59:19 | 12:11:46 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465615 | 2021-10-28 21:36:06 | 2021-10-29 00:47:33 | 2021-10-29 01:19:48 | 0:32:15 | 0:21:31 | 0:10:44 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{centos_8}} | 2 | |
fail | 6465616 | 2021-10-28 21:36:07 | 2021-10-29 00:48:33 | 2021-10-29 01:22:27 | 0:33:54 | 0:23:09 | 0:10:45 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi102 with status 5: 'sudo systemctl stop ceph-ab4fadc8-3854-11ec-8c28-001a4aab830c@mon.b'
fail | 6465617 | 2021-10-28 21:36:08 | 2021-10-29 00:49:24 | 2021-10-29 01:29:38 | 0:40:14 | 0:32:36 | 0:07:38 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi174 with status 5: 'sudo systemctl stop ceph-ba9a5e94-3855-11ec-8c28-001a4aab830c@mon.b'
fail | 6465618 | 2021-10-28 21:36:08 | 2021-10-29 00:50:24 | 2021-10-29 01:19:33 | 0:29:09 | 0:23:09 | 0:06:00 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465619 | 2021-10-28 21:36:09 | 2021-10-29 00:50:45 | 2021-10-29 01:26:50 | 0:36:05 | 0:24:06 | 0:11:59 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 6465620 | 2021-10-28 21:36:10 | 2021-10-29 00:50:55 | 2021-10-29 03:44:21 | 2:53:26 | 2:46:11 | 0:07:15 | smithi | master | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/scrub} | 1 | |
fail | 6465621 | 2021-10-28 21:36:10 | 2021-10-29 00:50:55 | 2021-10-29 01:20:58 | 0:30:03 | 0:19:52 | 0:10:11 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465622 | 2021-10-28 21:36:11 | 2021-10-29 00:51:46 | 2021-10-29 01:13:09 | 0:21:23 | 0:11:11 | 0:10:12 | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-snappy supported-random-distro$/{centos_8} tasks/workunits} | 2 | |
pass | 6465623 | 2021-10-28 21:36:12 | 2021-10-29 00:52:06 | 2021-10-29 01:33:34 | 0:41:28 | 0:32:26 | 0:09:02 | smithi | master | centos | 8.stream | rados/objectstore/{backends/objectstore-filestore-memstore supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6465624 | 2021-10-28 21:36:13 | 2021-10-29 00:52:06 | 2021-10-29 01:33:58 | 0:41:52 | 0:25:11 | 0:16:41 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache-pool-snaps} | 2 | |
dead | 6465625 | 2021-10-28 21:36:14 | 2021-10-29 00:55:57 | 2021-10-29 13:10:34 | 12:14:37 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465626 | 2021-10-28 21:36:14 | 2021-10-29 00:58:28 | 2021-10-29 01:22:48 | 0:24:20 | 0:12:17 | 0:12:03 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465627 | 2021-10-28 21:36:15 | 2021-10-29 01:01:19 | 2021-10-29 01:31:06 | 0:29:47 | 0:23:16 | 0:06:31 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465628 | 2021-10-28 21:36:16 | 2021-10-29 01:01:49 | 2021-10-29 01:45:02 | 0:43:13 | 0:33:51 | 0:09:22 | smithi | master | centos | 8.3 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6465629 | 2021-10-28 21:36:17 | 2021-10-29 01:01:49 | 2021-10-29 01:33:23 | 0:31:34 | 0:24:27 | 0:07:07 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465630 | 2021-10-28 21:36:17 | 2021-10-29 01:02:10 | 2021-10-29 01:24:37 | 0:22:27 | 0:11:19 | 0:11:08 | smithi | master | centos | 8.stream | rados/singleton/{all/admin-socket mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465631 | 2021-10-28 21:36:18 | 2021-10-29 01:03:10 | 2021-10-29 01:34:36 | 0:31:26 | 0:23:36 | 0:07:50 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi152 with status 5: 'sudo systemctl stop ceph-282e5a8c-3856-11ec-8c28-001a4aab830c@mon.b'
pass | 6465632 | 2021-10-28 21:36:19 | 2021-10-29 01:03:10 | 2021-10-29 01:22:37 | 0:19:27 | 0:10:40 | 0:08:47 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_read} | 1 | |
fail | 6465633 | 2021-10-28 21:36:19 | 2021-10-29 01:03:11 | 2021-10-29 01:24:47 | 0:21:36 | 0:11:44 | 0:09:52 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_nfs} | 1 | |
Failure Reason: Command failed on smithi022 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5ad5661f3e361e8c573b395b18740c607fdfcced shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 78a8b28c-3856-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6465634 | 2021-10-28 21:36:20 | 2021-10-29 01:03:51 | 2021-10-29 01:36:51 | 0:33:00 | 0:24:42 | 0:08:18 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6465635 | 2021-10-28 21:36:21 | 2021-10-29 01:05:42 | 2021-10-29 01:40:58 | 0:35:16 | 0:23:41 | 0:11:35 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi089 with status 5: 'sudo systemctl stop ceph-0f1e83b8-3857-11ec-8c28-001a4aab830c@mon.b'
fail | 6465636 | 2021-10-28 21:36:22 | 2021-10-29 01:05:42 | 2021-10-29 01:39:06 | 0:33:24 | 0:24:52 | 0:08:32 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465637 | 2021-10-28 21:36:22 | 2021-10-29 01:07:53 | 2021-10-29 01:43:50 | 0:35:57 | 0:24:16 | 0:11:41 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
fail | 6465638 | 2021-10-28 21:36:23 | 2021-10-29 01:08:53 | 2021-10-29 01:46:39 | 0:37:46 | 0:23:43 | 0:14:03 | smithi | master | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465639 | 2021-10-28 21:36:24 | 2021-10-29 01:09:33 | 2021-10-29 01:34:35 | 0:25:02 | 0:13:09 | 0:11:53 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6465640 | 2021-10-28 21:36:25 | 2021-10-29 01:11:24 | 2021-10-29 01:37:30 | 0:26:06 | 0:14:10 | 0:11:56 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/lockdep} | 2 | |
pass | 6465641 | 2021-10-28 21:36:25 | 2021-10-29 01:12:14 | 2021-10-29 01:51:31 | 0:39:17 | 0:28:20 | 0:10:57 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/cache-snaps} | 2 |