User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
nojha | 2021-10-28 22:48:27 | 2021-10-29 01:13:07 | 2021-10-29 18:02:33 | 16:49:26 | rados | wip-10-28-2021 | smithi | c4f5ee6 | 142 | 94 | 19 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6465677 | 2021-10-28 22:50:39 | 2021-10-29 01:13:07 | 2021-10-29 01:38:28 | 0:25:21 | 0:14:15 | 0:11:06 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} | 2 | |
pass | 6465678 | 2021-10-28 22:50:40 | 2021-10-29 01:13:17 | 2021-10-29 01:36:34 | 0:23:17 | 0:11:03 | 0:12:14 | smithi | master | centos | 8.stream | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream} tasks/mon_clock_with_skews} | 2 | |
pass | 6465679 | 2021-10-28 22:50:41 | 2021-10-29 01:15:08 | 2021-10-29 01:48:36 | 0:33:28 | 0:24:31 | 0:08:57 | smithi | master | centos | 8.3 | rados/singleton/{all/backfill-toofull mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6465680 | 2021-10-28 22:50:42 | 2021-10-29 01:15:18 | 2021-10-29 02:02:14 | 0:46:56 | 0:33:36 | 0:13:20 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6465681 | 2021-10-28 22:50:43 | 2021-10-29 01:16:48 | 2021-10-29 01:52:44 | 0:35:56 | 0:24:12 | 0:11:44 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6465682 | 2021-10-28 22:50:43 | 2021-10-29 01:16:59 | 2021-10-29 01:56:56 | 0:39:57 | 0:27:57 | 0:12:00 | smithi | master | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 | |
fail | 6465683 | 2021-10-28 22:50:44 | 2021-10-29 01:17:39 | 2021-10-29 01:48:03 | 0:30:24 | 0:22:28 | 0:07:56 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465684 | 2021-10-28 22:50:45 | 2021-10-29 01:18:50 | 2021-10-29 02:03:26 | 0:44:36 | 0:32:43 | 0:11:53 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
fail | 6465685 | 2021-10-28 22:50:46 | 2021-10-29 01:19:40 | 2021-10-29 01:55:43 | 0:36:03 | 0:24:48 | 0:11:15 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/1-node k8s/1.21 net/flannel rook/1.7.0} | 1 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
fail | 6465686 | 2021-10-28 22:50:47 | 2021-10-29 01:19:50 | 2021-10-29 01:56:02 | 0:36:12 | 0:24:47 | 0:11:25 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/dashboard} | 2 | |
Failure Reason: Test failure: test_ganesha (unittest.loader._FailedTest)
pass | 6465687 | 2021-10-28 22:50:48 | 2021-10-29 01:21:01 | 2021-10-29 01:50:05 | 0:29:04 | 0:21:50 | 0:07:14 | smithi | master | rhel | 8.4 | rados/objectstore/{backends/alloc-hint supported-random-distro$/{rhel_8}} | 1 | |
pass | 6465688 | 2021-10-28 22:50:48 | 2021-10-29 01:21:01 | 2021-10-29 01:43:32 | 0:22:31 | 0:12:36 | 0:09:55 | smithi | master | ubuntu | 20.04 | rados/rest/{mgr-restful supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6465689 | 2021-10-28 22:50:49 | 2021-10-29 01:21:31 | 2021-10-29 01:58:59 | 0:37:28 | 0:31:35 | 0:05:53 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6465690 | 2021-10-28 22:50:50 | 2021-10-29 01:21:32 | 2021-10-29 01:46:29 | 0:24:57 | 0:15:23 | 0:09:34 | smithi | master | centos | 8.3 | rados/standalone/{supported-random-distro$/{centos_8} workloads/c2c} | 1 | |
fail | 6465691 | 2021-10-28 22:50:51 | 2021-10-29 01:21:32 | 2021-10-29 05:11:40 | 3:50:08 | 3:43:05 | 0:07:03 | smithi | master | rhel | 8.4 | rados/upgrade/parallel/{0-distro$/{rhel_8.4_container_tools_rhel8} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi043 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'
pass | 6465692 | 2021-10-28 22:50:52 | 2021-10-29 01:22:33 | 2021-10-29 02:41:43 | 1:19:10 | 1:10:49 | 0:08:21 | smithi | master | centos | 8.3 | rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} | 1 | |
dead | 6465693 | 2021-10-28 22:50:52 | 2021-10-29 01:22:43 | 2021-10-29 13:34:26 | 12:11:43 | | | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 |
Failure Reason: hit max job timeout
dead | 6465694 | 2021-10-28 22:50:53 | 2021-10-29 01:22:53 | 2021-10-29 13:35:47 | 12:12:54 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
dead | 6465695 | 2021-10-28 22:50:54 | 2021-10-29 01:24:04 | 2021-10-29 13:36:23 | 12:12:19 | | | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6465696 | 2021-10-28 22:50:55 | 2021-10-29 01:24:44 | 2021-10-29 02:00:35 | 0:35:51 | 0:27:42 | 0:08:09 | smithi | master | rhel | 8.4 | rados/cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.4_container_tools_rhel8} 2-node-mgr orchestrator_cli} | 2 | |
fail | 6465697 | 2021-10-28 22:50:56 | 2021-10-29 01:24:54 | 2021-10-29 01:56:26 | 0:31:32 | 0:19:28 | 0:12:04 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465698 | 2021-10-28 22:50:57 | 2021-10-29 01:26:55 | 2021-10-29 01:56:46 | 0:29:51 | 0:16:20 | 0:13:31 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache} | 2 | |
fail | 6465699 | 2021-10-28 22:50:58 | 2021-10-29 01:29:46 | 2021-10-29 02:00:20 | 0:30:34 | 0:18:51 | 0:11:43 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi139 with status 5: 'sudo systemctl stop ceph-ed0f016e-3859-11ec-8c28-001a4aab830c@mon.b'
fail | 6465700 | 2021-10-28 22:50:58 | 2021-10-29 01:31:16 | 2021-10-29 01:48:27 | 0:17:11 | 0:08:53 | 0:08:18 | smithi | master | centos | 8.3 | rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.3_container_tools_3.0} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: Command failed on smithi112 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c4f5ee6329d692d4d8aa24f05840092126475d5d shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ddaea6fc-3859-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6465701 | 2021-10-28 22:50:59 | 2021-10-29 01:31:16 | 2021-10-29 01:52:44 | 0:21:28 | 0:10:16 | 0:11:12 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6465702 | 2021-10-28 22:51:00 | 2021-10-29 01:33:27 | 2021-10-29 02:06:33 | 0:33:06 | 0:22:16 | 0:10:50 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi155 with status 5: 'sudo systemctl stop ceph-bde26330-385a-11ec-8c28-001a4aab830c@mon.b'
pass | 6465703 | 2021-10-28 22:51:01 | 2021-10-29 01:33:37 | 2021-10-29 01:59:34 | 0:25:57 | 0:16:39 | 0:09:18 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465704 | 2021-10-28 22:51:02 | 2021-10-29 01:34:08 | 2021-10-29 02:15:50 | 0:41:42 | 0:33:35 | 0:08:07 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi152 with status 5: 'sudo systemctl stop ceph-04d2d3be-385c-11ec-8c28-001a4aab830c@mon.b'
pass | 6465705 | 2021-10-28 22:51:02 | 2021-10-29 01:34:38 | 2021-10-29 01:55:49 | 0:21:11 | 0:12:01 | 0:09:10 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_adoption} | 1 | |
pass | 6465706 | 2021-10-28 22:51:03 | 2021-10-29 01:34:38 | 2021-10-29 02:04:18 | 0:29:40 | 0:17:11 | 0:12:29 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/rados_cls_all} | 2 | |
fail | 6465707 | 2021-10-28 22:51:04 | 2021-10-29 01:36:39 | 2021-10-29 02:08:05 | 0:31:26 | 0:23:57 | 0:07:29 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465708 | 2021-10-28 22:51:05 | 2021-10-29 01:36:59 | 2021-10-29 02:07:57 | 0:30:58 | 0:20:34 | 0:10:24 | smithi | master | centos | 8.3 | rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465709 | 2021-10-28 22:51:05 | 2021-10-29 01:37:00 | 2021-10-29 02:00:56 | 0:23:56 | 0:13:31 | 0:10:25 | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6465710 | 2021-10-28 22:51:06 | 2021-10-29 01:37:00 | 2021-10-29 01:58:56 | 0:21:56 | 0:12:08 | 0:09:48 | smithi | master | centos | 8.3 | rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6465711 | 2021-10-28 22:51:07 | 2021-10-29 01:37:30 | 2021-10-29 02:03:42 | 0:26:12 | 0:13:45 | 0:12:27 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/dedup-io-snaps} | 2 | |
fail | 6465712 | 2021-10-28 22:51:08 | 2021-10-29 01:38:31 | 2021-10-29 02:15:14 | 0:36:43 | 0:23:19 | 0:13:24 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465713 | 2021-10-28 22:51:09 | 2021-10-29 01:39:11 | 2021-10-29 02:00:53 | 0:21:42 | 0:11:54 | 0:09:48 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 | |
fail | 6465714 | 2021-10-28 22:51:10 | 2021-10-29 01:39:11 | 2021-10-29 02:16:15 | 0:37:04 | 0:24:00 | 0:13:04 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi089 with status 5: 'sudo systemctl stop ceph-1c6733da-385c-11ec-8c28-001a4aab830c@mon.b'
pass | 6465715 | 2021-10-28 22:51:10 | 2021-10-29 01:41:02 | 2021-10-29 02:04:58 | 0:23:56 | 0:11:50 | 0:12:06 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6465716 | 2021-10-28 22:51:11 | 2021-10-29 01:43:33 | 2021-10-29 02:23:35 | 0:40:02 | 0:32:40 | 0:07:22 | smithi | master | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/crush} | 1 | |
pass | 6465717 | 2021-10-28 22:51:12 | 2021-10-29 01:43:53 | 2021-10-29 02:25:59 | 0:42:06 | 0:29:45 | 0:12:21 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
fail | 6465718 | 2021-10-28 22:51:13 | 2021-10-29 01:45:03 | 2021-10-29 02:18:15 | 0:33:12 | 0:20:06 | 0:13:06 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465719 | 2021-10-28 22:51:14 | 2021-10-29 01:46:44 | 2021-10-29 02:17:17 | 0:30:33 | 0:23:00 | 0:07:33 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465720 | 2021-10-28 22:51:14 | 2021-10-29 01:48:04 | 2021-10-29 02:17:37 | 0:29:33 | 0:22:39 | 0:06:54 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi112 with status 5: 'sudo systemctl stop ceph-69154384-385c-11ec-8c28-001a4aab830c@mon.b'
fail | 6465721 | 2021-10-28 22:51:15 | 2021-10-29 01:48:35 | 2021-10-29 02:26:20 | 0:37:45 | 0:23:24 | 0:14:21 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason: Command failed on smithi098 with status 5: 'sudo systemctl stop ceph-5d9b24dc-385d-11ec-8c28-001a4aab830c@mon.b'
pass | 6465722 | 2021-10-28 22:51:16 | 2021-10-29 01:51:05 | 2021-10-29 02:15:32 | 0:24:27 | 0:12:42 | 0:11:45 | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} tasks/prometheus} | 2 | |
pass | 6465723 | 2021-10-28 22:51:17 | 2021-10-29 01:51:36 | 2021-10-29 02:09:02 | 0:17:26 | 0:07:05 | 0:10:21 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 6465724 | 2021-10-28 22:51:18 | 2021-10-29 01:51:36 | 2021-10-29 03:09:51 | 1:18:15 | 1:07:23 | 0:10:52 | smithi | master | centos | 8.stream | rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/none msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream}} | 1 | |
dead | 6465725 | 2021-10-28 22:51:19 | 2021-10-29 01:52:47 | 2021-10-29 14:04:24 | 12:11:37 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
pass | 6465726 | 2021-10-28 22:51:19 | 2021-10-29 01:52:47 | 2021-10-29 02:33:12 | 0:40:25 | 0:26:57 | 0:13:28 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/rados_api_tests} | 2 | |
pass | 6465727 | 2021-10-28 22:51:20 | 2021-10-29 01:55:58 | 2021-10-29 02:23:44 | 0:27:46 | 0:14:41 | 0:13:05 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6465728 | 2021-10-28 22:51:21 | 2021-10-29 01:56:28 | 2021-10-29 02:23:59 | 0:27:31 | 0:20:15 | 0:07:16 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6465729 | 2021-10-28 22:51:22 | 2021-10-29 01:56:48 | 2021-10-29 02:28:10 | 0:31:22 | 0:19:49 | 0:11:33 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465730 | 2021-10-28 22:51:23 | 2021-10-29 01:56:59 | 2021-10-29 02:30:30 | 0:33:31 | 0:24:16 | 0:09:15 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465731 | 2021-10-28 22:51:24 | 2021-10-29 01:58:59 | 2021-10-29 02:26:04 | 0:27:05 | 0:20:38 | 0:06:27 | smithi | master | rhel | 8.4 | rados/objectstore/{backends/filejournal supported-random-distro$/{rhel_8}} | 1 | |
pass | 6465732 | 2021-10-28 22:51:24 | 2021-10-29 01:59:00 | 2021-10-29 02:29:34 | 0:30:34 | 0:19:14 | 0:11:20 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados tasks/rados_cls_all validater/lockdep} | 2 | |
pass | 6465733 | 2021-10-28 22:51:25 | 2021-10-29 02:00:30 | 2021-10-29 02:22:37 | 0:22:07 | 0:09:24 | 0:12:43 | smithi | master | ubuntu | 20.04 | rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} | 3 | |
pass | 6465734 | 2021-10-28 22:51:26 | 2021-10-29 02:00:40 | 2021-10-29 02:45:49 | 0:45:09 | 0:31:51 | 0:13:18 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6465735 | 2021-10-28 22:51:27 | 2021-10-29 02:02:21 | 2021-10-29 02:43:57 | 0:41:36 | 0:31:10 | 0:10:26 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6465736 | 2021-10-28 22:51:28 | 2021-10-29 02:02:21 | 2021-10-29 02:33:29 | 0:31:08 | 0:18:36 | 0:12:32 | smithi | master | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/pool-create-delete} | 2 | |
pass | 6465737 | 2021-10-28 22:51:28 | 2021-10-29 02:03:32 | 2021-10-29 02:37:09 | 0:33:37 | 0:22:48 | 0:10:49 | smithi | master | centos | 8.3 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-small-objects-balanced} | 2 | |
fail | 6465738 | 2021-10-28 22:51:29 | 2021-10-29 02:03:52 | 2021-10-29 02:34:01 | 0:30:09 | 0:23:06 | 0:07:03 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 6465739 | 2021-10-28 22:51:30 | 2021-10-29 02:04:23 | 2021-10-29 14:18:23 | 12:14:00 | | | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6465740 | 2021-10-28 22:51:31 | 2021-10-29 02:06:43 | 2021-10-29 03:10:02 | 1:03:19 | 0:54:28 | 0:08:51 | smithi | master | centos | 8.stream | rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465741 | 2021-10-28 22:51:32 | 2021-10-29 02:06:43 | 2021-10-29 02:40:48 | 0:34:05 | 0:23:46 | 0:10:19 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi053 with status 5: 'sudo systemctl stop ceph-97e13670-385f-11ec-8c28-001a4aab830c@mon.b'
fail | 6465742 | 2021-10-28 22:51:32 | 2021-10-29 02:08:04 | 2021-10-29 02:46:03 | 0:37:59 | 0:27:02 | 0:10:57 | smithi | master | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi138 with status 5: 'sudo systemctl stop ceph-9420f41c-385f-11ec-8c28-001a4aab830c@mon.b'
pass | 6465743 | 2021-10-28 22:51:33 | 2021-10-29 02:08:14 | 2021-10-29 02:44:30 | 0:36:16 | 0:18:04 | 0:18:12 | smithi | master | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream} tasks/rados_stress_watch} | 2 | |
pass | 6465744 | 2021-10-28 22:51:34 | 2021-10-29 02:15:15 | 2021-10-29 02:36:24 | 0:21:09 | 0:12:58 | 0:08:11 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6465745 | 2021-10-28 22:51:35 | 2021-10-29 02:15:16 | 2021-10-29 03:53:23 | 1:38:07 | 1:28:39 | 0:09:28 | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/erasure-code} | 1 | |
fail | 6465746 | 2021-10-28 22:51:36 | 2021-10-29 02:15:36 | 2021-10-29 02:52:14 | 0:36:38 | 0:23:49 | 0:12:49 | smithi | master | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465747 | 2021-10-28 22:51:37 | 2021-10-29 02:15:56 | 2021-10-29 03:21:23 | 1:05:27 | 0:54:22 | 0:11:05 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/radosbench} | 2 | |
fail | 6465748 | 2021-10-28 22:51:37 | 2021-10-29 02:16:17 | 2021-10-29 02:51:12 | 0:34:55 | 0:22:45 | 0:12:10 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi120 with status 5: 'sudo systemctl stop ceph-57114256-3860-11ec-8c28-001a4aab830c@mon.b'
fail | 6465749 | 2021-10-28 22:51:38 | 2021-10-29 02:17:27 | 2021-10-29 02:48:40 | 0:31:13 | 0:24:06 | 0:07:07 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465750 | 2021-10-28 22:51:39 | 2021-10-29 02:17:38 | 2021-10-29 02:41:04 | 0:23:26 | 0:13:40 | 0:09:46 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} | 1 | |
fail | 6465751 | 2021-10-28 22:51:40 | 2021-10-29 02:17:38 | 2021-10-29 02:39:05 | 0:21:27 | 0:11:48 | 0:09:39 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi018 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c4f5ee6329d692d4d8aa24f05840092126475d5d shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d175b806-3860-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6465752 | 2021-10-28 22:51:41 | 2021-10-29 02:18:18 | 2021-10-29 03:04:37 | 0:46:19 | 0:28:21 | 0:17:58 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/host rook/master} | 3 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
fail | 6465753 | 2021-10-28 22:51:42 | 2021-10-29 02:22:39 | 2021-10-29 02:58:45 | 0:36:06 | 0:23:20 | 0:12:46 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465754 | 2021-10-28 22:51:42 | 2021-10-29 02:23:39 | 2021-10-29 02:51:01 | 0:27:22 | 0:20:28 | 0:06:54 | smithi | master | rhel | 8.4 | rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} | 1 | |
dead | 6465755 | 2021-10-28 22:51:43 | 2021-10-29 02:23:50 | 2021-10-29 14:36:11 | 12:12:21 | | | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 |
Failure Reason: hit max job timeout
dead | 6465756 | 2021-10-28 22:51:44 | 2021-10-29 02:23:50 | 2021-10-29 14:35:26 | 12:11:36 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
pass | 6465757 | 2021-10-28 22:51:45 | 2021-10-29 02:24:00 | 2021-10-29 04:53:51 | 2:29:51 | 2:18:09 | 0:11:42 | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6465758 | 2021-10-28 22:51:46 | 2021-10-29 02:26:01 | 2021-10-29 02:57:42 | 0:31:41 | 0:19:45 | 0:11:56 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465759 | 2021-10-28 22:51:47 | 2021-10-29 02:26:11 | 2021-10-29 02:46:38 | 0:20:27 | 0:10:06 | 0:10:21 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6465760 | 2021-10-28 22:51:47 | 2021-10-29 02:26:22 | 2021-10-29 02:45:39 | 0:19:17 | 0:12:39 | 0:06:38 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_rhel8} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason: Command failed on smithi094 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c4f5ee6329d692d4d8aa24f05840092126475d5d shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ba57eee0-3861-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6465761 | 2021-10-28 22:51:48 | 2021-10-29 02:26:22 | 2021-10-29 02:59:16 | 0:32:54 | 0:23:42 | 0:09:12 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/redirect_promote_tests} | 2 | |
fail | 6465762 | 2021-10-28 22:51:49 | 2021-10-29 02:28:12 | 2021-10-29 03:02:55 | 0:34:43 | 0:22:15 | 0:12:28 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi139 with status 5: 'sudo systemctl stop ceph-b2435518-3862-11ec-8c28-001a4aab830c@mon.b'
pass | 6465763 | 2021-10-28 22:51:50 | 2021-10-29 02:29:43 | 2021-10-29 02:55:01 | 0:25:18 | 0:15:34 | 0:09:44 | smithi | master | centos | 8.stream | rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465764 | 2021-10-28 22:51:51 | 2021-10-29 02:29:43 | 2021-10-29 02:59:52 | 0:30:09 | 0:19:08 | 0:11:01 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465765 | 2021-10-28 22:51:51 | 2021-10-29 02:30:34 | 2021-10-29 03:04:41 | 0:34:07 | 0:20:06 | 0:14:01 | smithi | master | centos | 8.3 | rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465766 | 2021-10-28 22:51:52 | 2021-10-29 02:33:14 | 2021-10-29 02:56:45 | 0:23:31 | 0:12:57 | 0:10:34 | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-hybrid supported-random-distro$/{centos_8} tasks/crash} | 2 | |
fail | 6465767 | 2021-10-28 22:51:53 | 2021-10-29 02:33:35 | 2021-10-29 03:03:36 | 0:30:01 | 0:19:00 | 0:11:01 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi099 with status 5: 'sudo systemctl stop ceph-dc0e4970-3862-11ec-8c28-001a4aab830c@mon.b'
pass | 6465768 | 2021-10-28 22:51:54 | 2021-10-29 02:34:05 | 2021-10-29 03:04:50 | 0:30:45 | 0:17:07 | 0:13:38 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6465769 | 2021-10-28 22:51:55 | 2021-10-29 02:36:25 | 2021-10-29 03:10:27 | 0:34:02 | 0:26:38 | 0:07:24 | smithi | master | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/mgr} | 1 | |
pass | 6465770 | 2021-10-28 22:51:55 | 2021-10-29 02:37:16 | 2021-10-29 03:26:11 | 0:48:55 | 0:39:19 | 0:09:36 | smithi | master | centos | 8.3 | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
fail | 6465771 | 2021-10-28 22:51:56 | 2021-10-29 02:37:16 | 2021-10-29 03:02:05 | 0:24:49 | 0:14:12 | 0:10:37 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi018 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c4f5ee6329d692d4d8aa24f05840092126475d5d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
fail | 6465772 | 2021-10-28 22:51:57 | 2021-10-29 02:39:07 | 2021-10-29 03:11:45 | 0:32:38 | 0:20:30 | 0:12:08 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 6465773 | 2021-10-28 22:51:58 | 2021-10-29 02:40:57 | 2021-10-29 14:53:16 | 12:12:19 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465774 | 2021-10-28 22:51:59 | 2021-10-29 02:41:48 | 2021-10-29 03:13:22 | 0:31:34 | 0:18:25 | 0:13:09 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/set-chunks-read} | 2 | |
pass | 6465775 | 2021-10-28 22:52:00 | 2021-10-29 02:43:58 | 2021-10-29 03:13:06 | 0:29:08 | 0:11:13 | 0:17:55 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6465776 | 2021-10-28 22:52:01 | 2021-10-29 02:45:59 | 2021-10-29 03:07:14 | 0:21:15 | 0:11:30 | 0:09:45 | smithi | master | centos | 8.3 | rados/singleton/{all/mon-config-key-caps mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6465777 | 2021-10-28 22:52:01 | 2021-10-29 02:45:59 | 2021-10-29 03:19:14 | 0:33:15 | 0:21:30 | 0:11:45 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi138 with status 5: 'sudo systemctl stop ceph-e478d6aa-3864-11ec-8c28-001a4aab830c@mon.b'
pass | 6465778 | 2021-10-28 22:52:02 | 2021-10-29 02:46:10 | 2021-10-29 05:42:34 | 2:56:24 | 2:46:08 | 0:10:16 | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6465779 | 2021-10-28 22:52:03 | 2021-10-29 02:46:10 | 2021-10-29 03:21:53 | 0:35:43 | 0:22:56 | 0:12:47 | smithi | master | centos | 8.3 | rados/cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi112 with status 5: 'sudo systemctl stop ceph-437f9a94-3865-11ec-8c28-001a4aab830c@mon.b'
pass | 6465780 | 2021-10-28 22:52:04 | 2021-10-29 02:48:50 | 2021-10-29 03:33:37 | 0:44:47 | 0:31:30 | 0:13:17 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_big} | 2 | |
fail | 6465781 | 2021-10-28 22:52:05 | 2021-10-29 02:49:11 | 2021-10-29 03:20:34 | 0:31:23 | 0:23:07 | 0:08:16 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465782 | 2021-10-28 22:52:06 | 2021-10-29 02:51:21 | 2021-10-29 03:33:41 | 0:42:20 | 0:30:21 | 0:11:59 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/lockdep} | 2 | |
fail | 6465783 | 2021-10-28 22:52:06 | 2021-10-29 02:52:22 | 2021-10-29 03:24:12 | 0:31:50 | 0:23:15 | 0:08:35 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465784 | 2021-10-28 22:52:07 | 2021-10-29 02:55:03 | 2021-10-29 03:16:53 | 0:21:50 | 0:10:00 | 0:11:50 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} | 1 | |
pass | 6465785 | 2021-10-28 22:52:08 | 2021-10-29 02:56:53 | 2021-10-29 03:26:04 | 0:29:11 | 0:20:18 | 0:08:53 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6465786 | 2021-10-28 22:52:09 | 2021-10-29 02:56:53 | 2021-10-29 03:25:02 | 0:28:09 | 0:14:37 | 0:13:32 | smithi | master | centos | 8.stream | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream} tasks/mon_recovery} | 2 | |
pass | 6465787 | 2021-10-28 22:52:10 | 2021-10-29 02:57:44 | 2021-10-29 03:38:37 | 0:40:53 | 0:27:22 | 0:13:31 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6465788 | 2021-10-28 22:52:11 | 2021-10-29 02:59:24 | 2021-10-29 03:44:52 | 0:45:28 | 0:39:26 | 0:06:02 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6465789 | 2021-10-28 22:52:11 | 2021-10-29 02:59:55 | 2021-10-29 03:41:05 | 0:41:10 | 0:28:23 | 0:12:47 | smithi | master | centos | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/sync workloads/rados_api_tests} | 2 | |
pass | 6465790 | 2021-10-28 22:52:12 | 2021-10-29 03:01:35 | 2021-10-29 03:40:13 | 0:38:38 | 0:31:05 | 0:07:33 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-many-deletes} | 2 | |
pass | 6465791 | 2021-10-28 22:52:13 | 2021-10-29 03:02:56 | 2021-10-29 03:41:16 | 0:38:20 | 0:26:01 | 0:12:19 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
dead | 6465792 | 2021-10-28 22:52:14 | 2021-10-29 15:16:23 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||||
Failure Reason: hit max job timeout
pass | 6465793 | 2021-10-28 22:52:15 | 2021-10-29 03:04:47 | 2021-10-29 03:22:15 | 0:17:28 | 0:07:30 | 0:09:58 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/mon-config mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6465794 | 2021-10-28 22:52:16 | 2021-10-29 03:04:47 | 2021-10-29 03:35:50 | 0:31:03 | 0:24:42 | 0:06:21 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465795 | 2021-10-28 22:52:17 | 2021-10-29 03:04:47 | 2021-10-29 03:38:23 | 0:33:36 | 0:22:47 | 0:10:49 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/small-objects-localized} | 2 | |
fail | 6465796 | 2021-10-28 22:52:17 | 2021-10-29 03:04:58 | 2021-10-29 03:41:39 | 0:36:41 | 0:24:24 | 0:12:17 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465797 | 2021-10-28 22:52:18 | 2021-10-29 03:09:59 | 2021-10-29 03:45:01 | 0:35:02 | 0:22:34 | 0:12:28 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: Command failed on smithi053 with status 5: 'sudo systemctl stop ceph-63b412e2-3868-11ec-8c28-001a4aab830c@mon.b'
fail | 6465798 | 2021-10-28 22:52:19 | 2021-10-29 03:11:49 | 2021-10-29 03:44:36 | 0:32:47 | 0:23:33 | 0:09:14 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi204 with status 5: 'sudo systemctl stop ceph-51fd24da-3868-11ec-8c28-001a4aab830c@mon.b'
fail | 6465799 | 2021-10-28 22:52:20 | 2021-10-29 03:13:10 | 2021-10-29 04:22:40 | 1:09:30 | 1:00:19 | 0:09:11 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_nfs} | 1 | |
Failure Reason: Command failed on smithi168 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c4f5ee6329d692d4d8aa24f05840092126475d5d shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6a8c45a8-3868-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6465800 | 2021-10-28 22:52:21 | 2021-10-29 03:13:10 | 2021-10-29 03:54:22 | 0:41:12 | 0:31:18 | 0:09:54 | smithi | master | centos | 8.3 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6465801 | 2021-10-28 22:52:21 | 2021-10-29 03:13:10 | 2021-10-29 03:52:00 | 0:38:50 | 0:29:23 | 0:09:27 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6465802 | 2021-10-28 22:52:22 | 2021-10-29 03:13:11 | 2021-10-29 04:20:45 | 1:07:34 | 0:57:19 | 0:10:15 | smithi | master | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/misc} | 1 | |
fail | 6465803 | 2021-10-28 22:52:23 | 2021-10-29 03:13:31 | 2021-10-29 03:48:28 | 0:34:57 | 0:21:55 | 0:13:02 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi190 with status 5: 'sudo systemctl stop ceph-1e69d676-3869-11ec-8c28-001a4aab830c@mon.b'
pass | 6465804 | 2021-10-28 22:52:24 | 2021-10-29 03:17:02 | 2021-10-29 03:35:58 | 0:18:56 | 0:08:13 | 0:10:43 | smithi | master | centos | 8.3 | rados/objectstore/{backends/fusestore supported-random-distro$/{centos_8}} | 1 | |
fail | 6465805 | 2021-10-28 22:52:24 | 2021-10-29 03:19:22 | 2021-10-29 03:54:41 | 0:35:19 | 0:22:13 | 0:13:06 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465806 | 2021-10-28 22:52:25 | 2021-10-29 03:20:43 | 2021-10-29 04:16:29 | 0:55:46 | 0:45:31 | 0:10:15 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6465807 | 2021-10-28 22:52:26 | 2021-10-29 03:20:43 | 2021-10-29 03:57:35 | 0:36:52 | 0:23:04 | 0:13:48 | smithi | master | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465808 | 2021-10-28 22:52:27 | 2021-10-29 03:21:33 | 2021-10-29 04:03:13 | 0:41:40 | 0:34:54 | 0:06:46 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
fail | 6465809 | 2021-10-28 22:52:28 | 2021-10-29 03:21:54 | 2021-10-29 03:53:17 | 0:31:23 | 0:19:03 | 0:12:20 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465810 | 2021-10-28 22:52:28 | 2021-10-29 03:24:14 | 2021-10-29 03:56:35 | 0:32:21 | 0:20:38 | 0:11:43 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465811 | 2021-10-28 22:52:29 | 2021-10-29 03:25:05 | 2021-10-29 04:08:13 | 0:43:08 | 0:28:39 | 0:14:29 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} | 3 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
dead | 6465812 | 2021-10-28 22:52:30 | 2021-10-29 03:26:15 | 2021-10-29 15:46:23 | 12:20:08 | smithi | master | centos | 8.3 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 | |||
Failure Reason: hit max job timeout
dead | 6465813 | 2021-10-28 22:52:31 | 2021-10-29 03:33:46 | 2021-10-29 15:46:16 | 12:12:30 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465814 | 2021-10-28 22:52:32 | 2021-10-29 03:33:47 | 2021-10-29 03:52:58 | 0:19:11 | 0:10:35 | 0:08:36 | smithi | master | centos | 8.stream | rados/singleton/{all/peer mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream}} | 1 | |
dead | 6465815 | 2021-10-28 22:52:33 | 2021-10-29 03:33:47 | 2021-10-29 15:47:39 | 12:13:52 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465816 | 2021-10-28 22:52:33 | 2021-10-29 03:35:58 | 2021-10-29 03:57:06 | 0:21:08 | 0:09:06 | 0:12:02 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_write} | 1 | |
pass | 6465817 | 2021-10-28 22:52:34 | 2021-10-29 03:35:58 | 2021-10-29 04:05:40 | 0:29:42 | 0:15:53 | 0:13:49 | smithi | master | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-stupid supported-random-distro$/{centos_8.stream} tasks/insights} | 2 | |
pass | 6465818 | 2021-10-28 22:52:35 | 2021-10-29 03:38:29 | 2021-10-29 04:02:24 | 0:23:55 | 0:11:07 | 0:12:48 | smithi | master | ubuntu | 20.04 | rados/cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_20.04} 2-node-mgr orchestrator_cli} | 2 | |
pass | 6465819 | 2021-10-28 22:52:36 | 2021-10-29 03:38:39 | 2021-10-29 04:22:37 | 0:43:58 | 0:31:25 | 0:12:33 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_mostlyread} | 2 | |
pass | 6465820 | 2021-10-28 22:52:37 | 2021-10-29 03:38:39 | 2021-10-29 04:01:46 | 0:23:07 | 0:11:35 | 0:11:32 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465821 | 2021-10-28 22:52:37 | 2021-10-29 03:40:20 | 2021-10-29 04:12:49 | 0:32:29 | 0:19:24 | 0:13:05 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465822 | 2021-10-28 22:52:38 | 2021-10-29 03:41:10 | 2021-10-29 04:10:59 | 0:29:49 | 0:18:57 | 0:10:52 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi099 with status 5: 'sudo systemctl stop ceph-3dabf8f4-386c-11ec-8c28-001a4aab830c@mon.b'
pass | 6465823 | 2021-10-28 22:52:39 | 2021-10-29 03:41:20 | 2021-10-29 04:06:32 | 0:25:12 | 0:15:22 | 0:09:50 | smithi | master | centos | 8.stream | rados/objectstore/{backends/keyvaluedb supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465824 | 2021-10-28 22:52:40 | 2021-10-29 03:41:21 | 2021-10-29 04:00:54 | 0:19:33 | 0:12:26 | 0:07:07 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_3.0} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: Command failed on smithi084 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c4f5ee6329d692d4d8aa24f05840092126475d5d shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 365bf9f0-386c-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6465825 | 2021-10-28 22:52:41 | 2021-10-29 03:41:41 | 2021-10-29 04:25:42 | 0:44:01 | 0:30:26 | 0:13:35 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/snaps-few-objects} | 2 | |
pass | 6465826 | 2021-10-28 22:52:42 | 2021-10-29 03:44:32 | 2021-10-29 04:14:28 | 0:29:56 | 0:22:41 | 0:07:15 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6465827 | 2021-10-28 22:52:42 | 2021-10-29 03:45:02 | 2021-10-29 04:18:02 | 0:33:00 | 0:22:19 | 0:10:41 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi114 with status 5: 'sudo systemctl stop ceph-274fde62-386d-11ec-8c28-001a4aab830c@mon.b'
pass | 6465828 | 2021-10-28 22:52:43 | 2021-10-29 03:45:02 | 2021-10-29 05:03:50 | 1:18:48 | 1:09:29 | 0:09:19 | smithi | master | centos | 8.3 | rados/standalone/{supported-random-distro$/{centos_8} workloads/mon} | 1 | |
fail | 6465829 | 2021-10-28 22:52:44 | 2021-10-29 03:45:03 | 2021-10-29 04:30:05 | 0:45:02 | 0:33:48 | 0:11:14 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi190 with status 5: 'sudo systemctl stop ceph-b8486550-386e-11ec-8c28-001a4aab830c@mon.b'
pass | 6465830 | 2021-10-28 22:52:45 | 2021-10-29 03:48:34 | 2021-10-29 04:11:15 | 0:22:41 | 0:10:09 | 0:12:32 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_adoption} | 1 | |
pass | 6465831 | 2021-10-28 22:52:46 | 2021-10-29 03:52:04 | 2021-10-29 04:13:23 | 0:21:19 | 0:08:22 | 0:12:57 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 6465832 | 2021-10-28 22:52:47 | 2021-10-29 03:53:25 | 2021-10-29 04:22:36 | 0:29:11 | 0:22:39 | 0:06:32 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465833 | 2021-10-28 22:52:47 | 2021-10-29 03:53:25 | 2021-10-29 04:17:48 | 0:24:23 | 0:13:35 | 0:10:48 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} | 2 | |
pass | 6465834 | 2021-10-28 22:52:48 | 2021-10-29 03:54:46 | 2021-10-29 04:20:08 | 0:25:22 | 0:19:45 | 0:05:37 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
dead | 6465835 | 2021-10-28 22:52:49 | 2021-10-29 03:54:46 | 2021-10-29 16:05:02 | 12:10:16 | smithi | master | centos | 8.3 | rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465836 | 2021-10-28 22:52:50 | 2021-10-29 03:56:37 | 2021-10-29 04:18:55 | 0:22:18 | 0:06:38 | 0:15:40 | smithi | master | ubuntu | 20.04 | rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} | 3 | |
pass | 6465837 | 2021-10-28 22:52:51 | 2021-10-29 03:57:37 | 2021-10-29 04:42:05 | 0:44:28 | 0:27:01 | 0:17:27 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6465838 | 2021-10-28 22:52:52 | 2021-10-29 04:02:18 | 2021-10-29 04:34:25 | 0:32:07 | 0:20:43 | 0:11:24 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6465839 | 2021-10-28 22:52:52 | 2021-10-29 04:02:28 | 2021-10-29 04:40:44 | 0:38:16 | 0:27:55 | 0:10:21 | smithi | master | centos | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/many workloads/rados_mon_workunits} | 2 | |
pass | 6465840 | 2021-10-28 22:52:53 | 2021-10-29 04:03:19 | 2021-10-29 04:50:19 | 0:47:00 | 0:32:20 | 0:14:40 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
fail | 6465841 | 2021-10-28 22:52:54 | 2021-10-29 04:05:49 | 2021-10-29 04:39:22 | 0:33:33 | 0:23:36 | 0:09:57 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465842 | 2021-10-28 22:52:55 | 2021-10-29 04:08:20 | 2021-10-29 04:41:47 | 0:33:27 | 0:21:57 | 0:11:30 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi129 with status 5: 'sudo systemctl stop ceph-5ac32a76-3870-11ec-8c28-001a4aab830c@mon.b'
pass | 6465843 | 2021-10-28 22:52:56 | 2021-10-29 04:08:20 | 2021-10-29 04:44:32 | 0:36:12 | 0:21:54 | 0:14:18 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
pass | 6465844 | 2021-10-28 22:52:56 | 2021-10-29 04:11:01 | 2021-10-29 04:34:51 | 0:23:50 | 0:14:00 | 0:09:50 | smithi | master | centos | 8.stream | rados/singleton/{all/radostool mon_election/classic msgr-failures/none msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465845 | 2021-10-28 22:52:57 | 2021-10-29 04:11:21 | 2021-10-29 04:42:24 | 0:31:03 | 0:22:51 | 0:08:12 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465846 | 2021-10-28 22:52:58 | 2021-10-29 04:12:52 | 2021-10-29 04:42:56 | 0:30:04 | 0:22:51 | 0:07:13 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi105 with status 5: 'sudo systemctl stop ceph-ba37f7ac-3870-11ec-8c28-001a4aab830c@mon.b'
pass | 6465847 | 2021-10-28 22:52:59 | 2021-10-29 04:13:32 | 2021-10-29 04:49:59 | 0:36:27 | 0:25:28 | 0:10:59 | smithi | master | centos | 8.stream | rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6465848 | 2021-10-28 22:53:00 | 2021-10-29 04:14:33 | 2021-10-29 04:35:59 | 0:21:26 | 0:10:56 | 0:10:30 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/crushdiff mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465849 | 2021-10-28 22:53:01 | 2021-10-29 04:14:33 | 2021-10-29 04:49:02 | 0:34:29 | 0:23:21 | 0:11:08 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465850 | 2021-10-28 22:53:01 | 2021-10-29 04:14:33 | 2021-10-29 04:31:44 | 0:17:11 | 0:07:03 | 0:10:08 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 6465851 | 2021-10-28 22:53:02 | 2021-10-29 04:16:34 | 2021-10-29 04:39:35 | 0:23:01 | 0:12:36 | 0:10:25 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_fio} | 1 | |
dead | 6465852 | 2021-10-28 22:53:03 | 2021-10-29 04:17:54 | 2021-10-29 16:29:44 | 12:11:50 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465853 | 2021-10-28 22:53:04 | 2021-10-29 04:18:04 | 2021-10-29 08:54:19 | 4:36:15 | 4:24:59 | 0:11:16 | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd-backfill} | 1 | |
pass | 6465854 | 2021-10-28 22:53:05 | 2021-10-29 04:18:05 | 2021-10-29 04:54:51 | 0:36:46 | 0:24:40 | 0:12:06 | smithi | master | centos | 8.3 | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 | |
fail | 6465855 | 2021-10-28 22:53:05 | 2021-10-29 04:19:05 | 2021-10-29 04:50:17 | 0:31:12 | 0:19:34 | 0:11:38 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465856 | 2021-10-28 22:53:06 | 2021-10-29 04:19:05 | 2021-10-29 04:42:22 | 0:23:17 | 0:12:35 | 0:10:42 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6465857 | 2021-10-28 22:53:07 | 2021-10-29 04:20:16 | 2021-10-29 04:51:40 | 0:31:24 | 0:23:29 | 0:07:55 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465858 | 2021-10-28 22:53:08 | 2021-10-29 04:22:36 | 2021-10-29 04:48:43 | 0:26:07 | 0:14:39 | 0:11:28 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/repair_test} | 2 | |
pass | 6465859 | 2021-10-28 22:53:09 | 2021-10-29 04:22:47 | 2021-10-29 04:52:12 | 0:29:25 | 0:22:13 | 0:07:12 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/cache-agent-small} | 2 | |
pass | 6465860 | 2021-10-28 22:53:10 | 2021-10-29 04:22:47 | 2021-10-29 05:08:54 | 0:46:07 | 0:36:26 | 0:09:41 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |
pass | 6465861 | 2021-10-28 22:53:10 | 2021-10-29 04:25:48 | 2021-10-29 04:49:11 | 0:23:23 | 0:09:27 | 0:13:56 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
dead | 6465862 | 2021-10-28 22:53:11 | 2021-10-29 04:30:09 | 2021-10-29 16:42:55 | 12:12:46 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason: hit max job timeout
fail | 6465863 | 2021-10-28 22:53:12 | 2021-10-29 04:31:49 | 2021-10-29 05:05:42 | 0:33:53 | 0:21:14 | 0:12:39 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi109 with status 5: 'sudo systemctl stop ceph-dcf26eb4-3873-11ec-8c28-001a4aab830c@mon.b'
fail | 6465864 | 2021-10-28 22:53:13 | 2021-10-29 04:34:30 | 2021-10-29 05:12:58 | 0:38:28 | 0:22:32 | 0:15:56 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason: Command failed on smithi104 with status 5: 'sudo systemctl stop ceph-c182e586-3874-11ec-8c28-001a4aab830c@mon.b'
fail | 6465865 | 2021-10-28 22:53:14 | 2021-10-29 04:39:31 | 2021-10-29 05:10:57 | 0:31:26 | 0:21:59 | 0:09:27 | smithi | master | centos | 8.2 | rados/cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi166 with status 5: 'sudo systemctl stop ceph-abf74892-3874-11ec-8c28-001a4aab830c@mon.b'
pass | 6465866 | 2021-10-28 22:53:14 | 2021-10-29 04:39:41 | 2021-10-29 05:10:48 | 0:31:07 | 0:19:03 | 0:12:04 | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/progress} | 2 | |
fail | 6465867 | 2021-10-28 22:53:15 | 2021-10-29 04:40:52 | 2021-10-29 05:12:56 | 0:32:04 | 0:20:05 | 0:11:59 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465868 | 2021-10-28 22:53:16 | 2021-10-29 04:41:52 | 2021-10-29 07:26:23 | 2:44:31 | 2:20:56 | 0:23:35 | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6465869 | 2021-10-28 22:53:17 | 2021-10-29 04:42:12 | 2021-10-29 05:16:19 | 0:34:07 | 0:23:10 | 0:10:57 | smithi | master | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465870 | 2021-10-28 22:53:18 | 2021-10-29 04:42:13 | 2021-10-29 05:06:06 | 0:23:53 | 0:11:27 | 0:12:26 | smithi | master | centos | 8.stream | rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8.stream}} | 2 | |
fail | 6465871 | 2021-10-28 22:53:18 | 2021-10-29 04:42:33 | 2021-10-29 05:16:48 | 0:34:15 | 0:22:42 | 0:11:33 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi105 with status 5: 'sudo systemctl stop ceph-93e6cfe8-3874-11ec-8c28-001a4aab830c@mon.b'
fail | 6465872 | 2021-10-28 22:53:19 | 2021-10-29 04:43:03 | 2021-10-29 05:02:14 | 0:19:11 | 0:10:24 | 0:08:47 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi094 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c4f5ee6329d692d4d8aa24f05840092126475d5d shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0a7defc-3874-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6465873 | 2021-10-28 22:53:20 | 2021-10-29 04:43:04 | 2021-10-29 05:21:49 | 0:38:45 | 0:25:28 | 0:13:17 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache-pool-snaps} | 2 | |
fail | 6465874 | 2021-10-28 22:53:21 | 2021-10-29 04:44:34 | 2021-10-29 05:20:09 | 0:35:35 | 0:23:41 | 0:11:54 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465875 | 2021-10-28 22:53:22 | 2021-10-29 04:48:45 | 2021-10-29 05:18:39 | 0:29:54 | 0:23:02 | 0:06:52 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6465876 | 2021-10-28 22:53:22 | 2021-10-29 04:49:16 | 2021-10-29 05:26:03 | 0:36:47 | 0:25:22 | 0:11:25 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/flannel rook/master} | 1 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
fail | 6465877 | 2021-10-28 22:53:23 | 2021-10-29 04:50:06 | 2021-10-29 05:11:56 | 0:21:50 | 0:11:53 | 0:09:57 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/e2e} | 2 | |
Failure Reason: Command failed on smithi018 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c4f5ee6329d692d4d8aa24f05840092126475d5d shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 38a7abe6-3876-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6465878 | 2021-10-28 22:53:24 | 2021-10-29 04:50:26 | 2021-10-29 05:17:29 | 0:27:03 | 0:20:23 | 0:06:40 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6465879 | 2021-10-28 22:53:25 | 2021-10-29 04:50:27 | 2021-10-29 05:20:51 | 0:30:24 | 0:23:10 | 0:07:14 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 6465880 | 2021-10-28 22:53:26 | 2021-10-29 04:51:47 | 2021-10-29 17:03:56 | 12:12:09 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 | |||
Failure Reason: hit max job timeout
dead | 6465881 | 2021-10-28 22:53:27 | 2021-10-29 04:52:18 | 2021-10-29 17:05:35 | 12:13:17 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465882 | 2021-10-28 22:53:27 | 2021-10-29 04:53:58 | 2021-10-29 05:40:02 | 0:46:04 | 0:25:55 | 0:20:09 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/lockdep} | 2 | |
fail | 6465883 | 2021-10-28 22:53:28 | 2021-10-29 05:02:20 | 2021-10-29 05:34:54 | 0:32:34 | 0:18:55 | 0:13:39 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465884 | 2021-10-28 22:53:29 | 2021-10-29 05:05:50 | 2021-10-29 08:29:00 | 3:23:10 | 3:14:43 | 0:08:27 | smithi | master | centos | 8.3 | rados/standalone/{supported-random-distro$/{centos_8} workloads/osd} | 1 | |
pass | 6465885 | 2021-10-28 22:53:30 | 2021-10-29 05:05:51 | 2021-10-29 05:29:52 | 0:24:01 | 0:14:10 | 0:09:51 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6465886 | 2021-10-28 22:53:31 | 2021-10-29 05:06:11 | 2021-10-29 05:30:16 | 0:24:05 | 0:10:39 | 0:13:26 | smithi | master | centos | 8.stream | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream} tasks/mon_clock_no_skews} | 2 | |
pass | 6465887 | 2021-10-28 22:53:31 | 2021-10-29 05:09:02 | 2021-10-29 05:58:33 | 0:49:31 | 0:40:20 | 0:09:11 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6465888 | 2021-10-28 22:53:32 | 2021-10-29 05:10:52 | 2021-10-29 05:50:05 | 0:39:13 | 0:29:54 | 0:09:19 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 6465889 | 2021-10-28 22:53:33 | 2021-10-29 05:11:03 | 2021-10-29 05:30:30 | 0:19:27 | 0:09:34 | 0:09:53 | smithi | master | centos | 8.2 | rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.2_container_tools_3.0} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason: Command failed on smithi043 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c4f5ee6329d692d4d8aa24f05840092126475d5d shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a6e1d53a-3878-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6465890 | 2021-10-28 22:53:34 | 2021-10-29 05:11:43 | 2021-10-29 05:54:58 | 0:43:15 | 0:36:46 | 0:06:29 | smithi | master | rhel | 8.4 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/sync-many workloads/snaps-few-objects} | 2 | |
pass | 6465891 | 2021-10-28 22:53:35 | 2021-10-29 05:12:03 | 2021-10-29 05:58:09 | 0:46:06 | 0:35:30 | 0:10:36 | smithi | master | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
fail | 6465892 | 2021-10-28 22:53:35 | 2021-10-29 05:13:04 | 2021-10-29 05:46:23 | 0:33:19 | 0:22:40 | 0:10:39 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi178 with status 5: 'sudo systemctl stop ceph-53095a54-3879-11ec-8c28-001a4aab830c@mon.b'
pass | 6465893 | 2021-10-28 22:53:36 | 2021-10-29 05:13:04 | 2021-10-29 05:34:48 | 0:21:44 | 0:11:10 | 0:10:34 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} | 1 | |
pass | 6465894 | 2021-10-28 22:53:37 | 2021-10-29 05:13:04 | 2021-10-29 05:36:07 | 0:23:03 | 0:14:08 | 0:08:55 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6465895 | 2021-10-28 22:53:38 | 2021-10-29 05:13:05 | 2021-10-29 05:51:29 | 0:38:24 | 0:25:26 | 0:12:58 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache-snaps} | 2 | |
pass | 6465896 | 2021-10-28 22:53:39 | 2021-10-29 05:16:25 | 2021-10-29 08:12:09 | 2:55:44 | 2:25:51 | 0:29:53 | smithi | master | rhel | 8.4 | rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{rhel_8}} | 1 | |
fail | 6465897 | 2021-10-28 22:53:39 | 2021-10-29 05:16:56 | 2021-10-29 05:53:29 | 0:36:33 | 0:24:11 | 0:12:22 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465898 | 2021-10-28 22:53:40 | 2021-10-29 05:17:36 | 2021-10-29 05:48:13 | 0:30:37 | 0:19:30 | 0:11:07 | smithi | master | centos | 8.3 | rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465899 | 2021-10-28 22:53:41 | 2021-10-29 05:18:47 | 2021-10-29 05:46:11 | 0:27:24 | 0:17:38 | 0:09:46 | smithi | master | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream} tasks/libcephsqlite} | 2 | |
pass | 6465900 | 2021-10-28 22:53:42 | 2021-10-29 05:18:47 | 2021-10-29 05:56:21 | 0:37:34 | 0:25:25 | 0:12:09 | smithi | master | centos | 8.3 | rados/singleton/{all/thrash-eio mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 2 | |
fail | 6465901 | 2021-10-28 22:53:43 | 2021-10-29 05:20:17 | 2021-10-29 05:50:01 | 0:29:44 | 0:19:02 | 0:10:42 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi197 with status 5: 'sudo systemctl stop ceph-0fb295da-387a-11ec-8c28-001a4aab830c@mon.b'
fail | 6465902 | 2021-10-28 22:53:43 | 2021-10-29 05:20:58 | 2021-10-29 05:42:45 | 0:21:47 | 0:12:30 | 0:09:17 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi099 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c4f5ee6329d692d4d8aa24f05840092126475d5d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
fail | 6465903 | 2021-10-28 22:53:44 | 2021-10-29 05:21:58 | 2021-10-29 05:58:09 | 0:36:11 | 0:19:06 | 0:17:05 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 6465904 | 2021-10-28 22:53:45 | 2021-10-29 05:26:09 | 2021-10-29 17:41:56 | 12:15:47 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465905 | 2021-10-28 22:53:46 | 2021-10-29 05:30:20 | 2021-10-29 06:22:04 | 0:51:44 | 0:45:08 | 0:06:36 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{rhel_8}} | 2 | |
fail | 6465906 | 2021-10-28 22:53:47 | 2021-10-29 05:30:40 | 2021-10-29 06:06:04 | 0:35:24 | 0:21:47 | 0:13:37 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi109 with status 5: 'sudo systemctl stop ceph-53a55f1e-387c-11ec-8c28-001a4aab830c@mon.b'
fail | 6465907 | 2021-10-28 22:53:47 | 2021-10-29 05:35:01 | 2021-10-29 06:15:27 | 0:40:26 | 0:32:13 | 0:08:13 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi129 with status 5: 'sudo systemctl stop ceph-903e6b36-387d-11ec-8c28-001a4aab830c@mon.b'
pass | 6465908 | 2021-10-28 22:53:48 | 2021-10-29 05:36:12 | 2021-10-29 06:03:39 | 0:27:27 | 0:12:30 | 0:14:57 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} thrashers/none thrashosds-health workloads/dedup-io-mixed} | 2 | |
fail | 6465909 | 2021-10-28 22:53:49 | 2021-10-29 06:12:06 | 1402 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | ||||
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465910 | 2021-10-28 22:53:50 | 2021-10-29 05:42:53 | 2021-10-29 06:19:33 | 0:36:40 | 0:21:37 | 0:15:03 | smithi | master | centos | 8.3 | rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 2 | |
pass | 6465911 | 2021-10-28 22:53:51 | 2021-10-29 05:46:14 | 2021-10-29 08:39:37 | 2:53:23 | 2:47:12 | 0:06:11 | smithi | master | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/scrub} | 1 | |
fail | 6465912 | 2021-10-28 22:53:52 | 2021-10-29 05:46:24 | 2021-10-29 06:07:32 | 0:21:08 | 0:09:14 | 0:11:54 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: Command failed on smithi190 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c4f5ee6329d692d4d8aa24f05840092126475d5d shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid eca2a13a-387d-11ec-8c28-001a4aab830c -- ceph mon dump -f json'
pass | 6465913 | 2021-10-28 22:53:52 | 2021-10-29 05:48:15 | 2021-10-29 06:11:08 | 0:22:53 | 0:11:11 | 0:11:42 | smithi | master | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-snappy supported-random-distro$/{centos_8.stream} tasks/workunits} | 2 | |
pass | 6465914 | 2021-10-28 22:53:53 | 2021-10-29 05:50:05 | 2021-10-29 06:31:18 | 0:41:13 | 0:30:56 | 0:10:17 | smithi | master | centos | 8.3 | rados/objectstore/{backends/objectstore-filestore-memstore supported-random-distro$/{centos_8}} | 1 | |
dead | 6465915 | 2021-10-28 22:53:54 | 2021-10-29 05:50:05 | 2021-10-29 18:02:33 | 12:12:28 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: hit max job timeout
pass | 6465916 | 2021-10-28 22:53:55 | 2021-10-29 05:50:06 | 2021-10-29 06:10:41 | 0:20:35 | 0:10:57 | 0:09:38 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465917 | 2021-10-28 22:53:56 | 2021-10-29 05:51:36 | 2021-10-29 06:22:50 | 0:31:14 | 0:22:55 | 0:08:19 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465918 | 2021-10-28 22:53:56 | 2021-10-29 05:53:37 | 2021-10-29 06:34:56 | 0:41:19 | 0:32:38 | 0:08:41 | smithi | master | centos | 8.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6465919 | 2021-10-28 22:53:57 | 2021-10-29 05:53:37 | 2021-10-29 06:26:16 | 0:32:39 | 0:24:21 | 0:08:18 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6465920 | 2021-10-28 22:53:58 | 2021-10-29 05:55:08 | 2021-10-29 06:25:51 | 0:30:43 | 0:23:16 | 0:07:27 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi168 with status 5: 'sudo systemctl stop ceph-20556c8c-387f-11ec-8c28-001a4aab830c@mon.b'
pass | 6465921 | 2021-10-28 22:53:59 | 2021-10-29 05:56:28 | 2021-10-29 06:15:03 | 0:18:35 | 0:08:28 | 0:10:07 | smithi | master | centos | 8.3 | rados/singleton/{all/admin-socket mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6465922 | 2021-10-28 22:53:59 | 2021-10-29 05:58:19 | 2021-10-29 06:18:18 | 0:19:59 | 0:09:19 | 0:10:40 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_read} | 1 | |
pass | 6465923 | 2021-10-28 22:54:00 | 2021-10-29 05:58:19 | 2021-10-29 06:43:36 | 0:45:17 | 0:33:42 | 0:11:35 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
fail | 6465924 | 2021-10-28 22:54:01 | 2021-10-29 05:58:19 | 2021-10-29 07:08:24 | 1:10:05 | 1:00:20 | 0:09:45 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_nfs} | 1 | |
Failure Reason: Command failed on smithi157 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c4f5ee6329d692d4d8aa24f05840092126475d5d shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 98711dba-387f-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6465925 | 2021-10-28 22:54:02 | 2021-10-29 05:58:40 | 2021-10-29 06:31:00 | 0:32:20 | 0:10:32 | 0:21:48 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6465926 | 2021-10-28 22:54:03 | 2021-10-29 06:03:41 | 2021-10-29 06:39:15 | 0:35:34 | 0:21:58 | 0:13:36 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi109 with status 5: 'sudo systemctl stop ceph-d131bde8-3880-11ec-8c28-001a4aab830c@mon.b'
fail | 6465927 | 2021-10-28 22:54:04 | 2021-10-29 06:06:11 | 2021-10-29 06:39:00 | 0:32:49 | 0:24:18 | 0:08:31 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465928 | 2021-10-28 22:54:04 | 2021-10-29 06:07:42 | 2021-10-29 06:52:29 | 0:44:47 | 0:34:54 | 0:09:53 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
fail | 6465929 | 2021-10-28 22:54:05 | 2021-10-29 06:11:13 | 2021-10-29 06:48:15 | 0:37:02 | 0:24:09 | 0:12:53 | smithi | master | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6465930 | 2021-10-28 22:54:06 | 2021-10-29 06:12:13 | 2021-10-29 06:33:02 | 0:20:49 | 0:11:38 | 0:09:11 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6465931 | 2021-10-28 22:54:07 | 2021-10-29 06:12:13 | 2021-10-29 06:38:37 | 0:26:24 | 0:13:07 | 0:13:17 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/lockdep} | 2 |