User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
akupczyk | 2021-11-04 09:59:58 | 2021-11-10 13:31:17 | 2021-11-11 15:04:47 | 1 day, 1:33:30 | rados | wip-bluefs-fine-grain-locking-4 | smithi | 929e5fa | 273 | 120 | 28 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6484581 | 2021-11-04 10:00:57 | 2021-11-08 12:42:53 | 2021-11-08 13:12:51 | 0:29:58 | 0:17:51 | 0:12:07 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} | 2 | |
fail | 6484582 | 2021-11-04 10:00:58 | 2021-11-08 12:43:33 | 2021-11-08 13:18:49 | 0:35:16 | 0:22:14 | 0:13:02 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484583 | 2021-11-04 10:00:58 | 2021-11-08 12:44:24 | 2021-11-08 13:32:49 | 0:48:25 | 0:40:27 | 0:07:58 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
pass | 6484584 | 2021-11-04 10:00:59 | 2021-11-08 12:45:44 | 2021-11-08 13:04:58 | 0:19:14 | 0:09:08 | 0:10:06 | smithi | master | centos | 8.3 | rados/objectstore/{backends/filejournal supported-random-distro$/{centos_8}} | 1 | |
pass | 6484585 | 2021-11-04 10:01:00 | 2021-11-08 12:45:45 | 2021-11-08 13:27:31 | 0:41:46 | 0:31:08 | 0:10:38 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_big} | 2 | |
pass | 6484586 | 2021-11-04 10:01:01 | 2021-11-08 12:46:25 | 2021-11-08 13:04:35 | 0:18:10 | 0:07:38 | 0:10:32 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6484587 | 2021-11-04 10:01:02 | 2021-11-08 12:46:35 | 2021-11-08 13:15:51 | 0:29:16 | 0:23:27 | 0:05:49 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi157 with status 5: 'sudo systemctl stop ceph-1dee278c-4094-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484588 | 2021-11-04 10:01:02 | 2021-11-08 12:46:36 | 2021-11-08 13:30:08 | 0:43:32 | 0:31:44 | 0:11:48 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/cache-pool-snaps} | 2 | |
pass | 6484589 | 2021-11-04 10:01:03 | 2021-11-08 12:47:06 | 2021-11-08 13:12:31 | 0:25:25 | 0:14:54 | 0:10:31 | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-hybrid supported-random-distro$/{centos_8} tasks/insights} | 2 | |
fail | 6484590 | 2021-11-04 10:01:04 | 2021-11-08 12:47:16 | 2021-11-08 13:07:08 | 0:19:52 | 0:10:27 | 0:09:25 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_nfs} | 1 | |
Failure Reason: Command failed on smithi072 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 54a61082-4094-11ec-8c2c-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6484591 | 2021-11-04 10:01:05 | 2021-11-08 12:47:27 | 2021-11-08 13:08:27 | 0:21:00 | 0:11:20 | 0:09:40 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} | 1 | |
dead | 6484592 | 2021-11-04 10:01:06 | 2021-11-08 12:47:27 | 2021-11-09 00:59:50 | 12:12:23 | | | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6484593 | 2021-11-04 10:01:06 | 2021-11-08 12:47:47 | 2021-11-08 13:24:09 | 0:36:22 | 0:25:04 | 0:11:18 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/cache-snaps-balanced} | 2 | |
pass | 6484594 | 2021-11-04 10:01:07 | 2021-11-08 12:48:38 | 2021-11-08 13:12:27 | 0:23:49 | 0:12:03 | 0:11:46 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} | 1 | |
fail | 6484595 | 2021-11-04 10:01:08 | 2021-11-08 12:50:08 | 2021-11-08 13:21:43 | 0:31:35 | 0:21:54 | 0:09:41 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi162 with status 5: 'sudo systemctl stop ceph-ca9e59e8-4094-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484596 | 2021-11-04 10:01:09 | 2021-11-08 12:50:29 | 2021-11-08 13:08:45 | 0:18:16 | 0:08:05 | 0:10:11 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/divergent_priors2 mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6484597 | 2021-11-04 10:01:10 | 2021-11-08 12:50:29 | 2021-11-08 13:22:10 | 0:31:41 | 0:21:08 | 0:10:33 | smithi | master | centos | 8.2 | rados/cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi096 with status 5: 'sudo systemctl stop ceph-db27af26-4094-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484598 | 2021-11-04 10:01:11 | 2021-11-08 12:50:39 | 2021-11-08 13:08:37 | 0:17:58 | 0:07:58 | 0:10:00 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6484599 | 2021-11-04 10:01:11 | 2021-11-08 12:50:50 | 2021-11-08 14:35:44 | 1:44:54 | 1:39:17 | 0:05:37 | smithi | master | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/erasure-code} | 1 | |
fail | 6484600 | 2021-11-04 10:01:12 | 2021-11-08 12:50:50 | 2021-11-08 13:19:40 | 0:28:50 | 0:19:22 | 0:09:28 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484601 | 2021-11-04 10:01:13 | 2021-11-08 12:51:00 | 2021-11-08 13:29:01 | 0:38:01 | 0:27:53 | 0:10:08 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/cache-snaps} | 2 | |
pass | 6484602 | 2021-11-04 10:01:14 | 2021-11-08 12:52:01 | 2021-11-08 13:17:58 | 0:25:57 | 0:13:31 | 0:12:26 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6484603 | 2021-11-04 10:01:15 | 2021-11-08 12:52:31 | 2021-11-08 13:25:49 | 0:33:18 | 0:24:27 | 0:08:51 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484604 | 2021-11-04 10:01:15 | 2021-11-08 12:54:22 | 2021-11-08 13:13:37 | 0:19:15 | 0:10:33 | 0:08:42 | smithi | master | centos | 8.3 | rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6484605 | 2021-11-04 10:01:16 | 2021-11-08 12:54:22 | 2021-11-08 13:29:54 | 0:35:32 | 0:22:27 | 0:13:05 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi043 with status 5: 'sudo systemctl stop ceph-250abce6-4095-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484606 | 2021-11-04 10:01:17 | 2021-11-08 12:56:13 | 2021-11-08 13:19:24 | 0:23:11 | 0:12:16 | 0:10:55 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/cache} | 2 | |
fail | 6484607 | 2021-11-04 10:01:18 | 2021-11-08 12:56:13 | 2021-11-08 13:15:27 | 0:19:14 | 0:09:43 | 0:09:31 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi079 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6cd393ae-4095-11ec-8c2c-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6484608 | 2021-11-04 10:01:19 | 2021-11-08 12:56:13 | 2021-11-08 13:17:25 | 0:21:12 | 0:11:07 | 0:10:05 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{filestore-xfs} tasks/e2e} | 2 | |
Failure Reason: Command failed on smithi031 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9645f8da-4095-11ec-8c2c-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6484609 | 2021-11-04 10:01:20 | 2021-11-08 12:56:14 | 2021-11-08 13:32:11 | 0:35:57 | 0:25:27 | 0:10:30 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
pass | 6484610 | 2021-11-04 10:01:20 | 2021-11-08 12:56:14 | 2021-11-08 13:17:07 | 0:20:53 | 0:09:31 | 0:11:22 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6484611 | 2021-11-04 10:01:21 | 2021-11-08 12:58:04 | 2021-11-08 13:29:30 | 0:31:26 | 0:20:03 | 0:11:23 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484612 | 2021-11-04 10:01:22 | 2021-11-08 12:58:15 | 2021-11-08 13:53:58 | 0:55:43 | 0:49:17 | 0:06:26 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-radosbench} | 2 | |
pass | 6484613 | 2021-11-04 10:01:23 | 2021-11-08 12:58:25 | 2021-11-08 13:29:42 | 0:31:17 | 0:22:01 | 0:09:16 | smithi | master | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} tasks/rados_workunit_loadgen_mix} | 2 | |
dead | 6484614 | 2021-11-04 10:01:24 | 2021-11-08 12:58:45 | 2021-11-09 01:12:26 | 12:13:41 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
dead | 6484615 | 2021-11-04 10:01:24 | 2021-11-08 13:00:46 | 2021-11-09 01:09:12 | 12:08:26 | | | smithi | master | rhel | 8.4 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/one workloads/snaps-few-objects} | 2 |
Failure Reason: hit max job timeout
pass | 6484616 | 2021-11-04 10:01:25 | 2021-11-08 13:00:47 | 2021-11-08 14:04:19 | 1:03:32 | 0:54:14 | 0:09:18 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6484617 | 2021-11-04 10:01:26 | 2021-11-08 13:00:47 | 2021-11-08 13:47:59 | 0:47:12 | 0:38:26 | 0:08:46 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6484618 | 2021-11-04 10:01:27 | 2021-11-08 13:02:48 | 2021-11-08 13:43:37 | 0:40:49 | 0:28:18 | 0:12:31 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6484619 | 2021-11-04 10:01:28 | 2021-11-08 13:03:08 | 2021-11-08 13:26:02 | 0:22:54 | 0:10:02 | 0:12:52 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/dedup-io-mixed} | 2 | |
pass | 6484620 | 2021-11-04 10:01:28 | 2021-11-08 13:04:19 | 2021-11-08 13:26:15 | 0:21:56 | 0:10:00 | 0:11:56 | smithi | master | centos | 8.stream | rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/few msgr/async no_pools objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} tasks/mon_clock_with_skews} | 3 | |
fail | 6484621 | 2021-11-04 10:01:29 | 2021-11-08 13:04:49 | 2021-11-08 13:34:11 | 0:29:22 | 0:19:02 | 0:10:20 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi137 with status 5: 'sudo systemctl stop ceph-782dd20e-4096-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484622 | 2021-11-04 10:01:30 | 2021-11-08 13:04:59 | 2021-11-08 13:57:13 | 0:52:14 | 0:39:25 | 0:12:49 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/filestore-xfs rados tasks/mon_recovery validater/valgrind} | 2 | |
pass | 6484623 | 2021-11-04 10:01:31 | 2021-11-08 13:05:30 | 2021-11-08 15:28:53 | 2:23:23 | 2:16:44 | 0:06:39 | smithi | master | rhel | 8.4 | rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{rhel_8}} | 1 | |
fail | 6484624 | 2021-11-04 10:01:32 | 2021-11-08 13:05:30 | 2021-11-08 13:40:38 | 0:35:08 | 0:22:24 | 0:12:44 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: Command failed on smithi100 with status 5: 'sudo systemctl stop ceph-41e8275c-4097-11ec-8c2c-001a4aab830c@mon.b'
fail | 6484625 | 2021-11-04 10:01:32 | 2021-11-08 13:07:11 | 2021-11-08 13:25:17 | 0:18:06 | 0:07:56 | 0:10:10 | smithi | master | centos | 8.2 | rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.2_container_tools_3.0} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason: Command failed on smithi118 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c43e0006-4096-11ec-8c2c-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6484626 | 2021-11-04 10:01:33 | 2021-11-08 13:07:51 | 2021-11-08 13:29:24 | 0:21:33 | 0:12:50 | 0:08:43 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} | 1 | |
pass | 6484627 | 2021-11-04 10:01:34 | 2021-11-08 13:07:51 | 2021-11-08 13:29:58 | 0:22:07 | 0:11:16 | 0:10:51 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
dead | 6484628 | 2021-11-04 10:01:35 | 2021-11-08 13:08:32 | 2021-11-09 01:18:00 | 12:09:28 | | | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/dedup-io-snaps} | 2 |
Failure Reason: hit max job timeout
fail | 6484629 | 2021-11-04 10:01:35 | 2021-11-08 13:08:52 | 2021-11-08 13:40:16 | 0:31:24 | 0:20:55 | 0:10:29 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi187 with status 5: 'sudo systemctl stop ceph-61c0b9e0-4097-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484630 | 2021-11-04 10:01:36 | 2021-11-08 13:09:13 | 2021-11-08 13:50:27 | 0:41:14 | 0:31:19 | 0:09:55 | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-low-osd-mem-target supported-random-distro$/{centos_8} tasks/module_selftest} | 2 | |
pass | 6484631 | 2021-11-04 10:01:37 | 2021-11-08 13:09:13 | 2021-11-08 14:16:00 | 1:06:47 | 0:55:51 | 0:10:56 | smithi | master | centos | 8.stream | rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream}} | 1 | |
dead | 6484632 | 2021-11-04 10:01:38 | 2021-11-08 13:09:53 | 2021-11-09 01:23:46 | 12:13:53 | | | smithi | master | centos | 8.3 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 |
Failure Reason: hit max job timeout
fail | 6484633 | 2021-11-04 10:01:39 | 2021-11-08 13:10:54 | 2021-11-08 13:44:14 | 0:33:20 | 0:22:44 | 0:10:36 | smithi | master | centos | 8.3 | rados/cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi125 with status 5: 'sudo systemctl stop ceph-c4bafec0-4097-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484634 | 2021-11-04 10:01:39 | 2021-11-08 13:11:14 | 2021-11-08 13:51:29 | 0:40:15 | 0:29:07 | 0:11:08 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
pass | 6484635 | 2021-11-04 10:01:40 | 2021-11-08 13:11:55 | 2021-11-08 13:30:57 | 0:19:02 | 0:09:57 | 0:09:05 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_adoption} | 1 | |
pass | 6484636 | 2021-11-04 10:01:41 | 2021-11-08 13:11:55 | 2021-11-08 13:32:31 | 0:20:36 | 0:09:41 | 0:10:55 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6484637 | 2021-11-04 10:01:42 | 2021-11-08 13:12:35 | 2021-11-08 13:44:03 | 0:31:28 | 0:24:11 | 0:07:17 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484638 | 2021-11-04 10:01:42 | 2021-11-08 13:12:36 | 2021-11-08 13:29:59 | 0:17:23 | 0:08:30 | 0:08:53 | smithi | master | centos | 8.3 | rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6484639 | 2021-11-04 10:01:43 | 2021-11-08 13:12:56 | 2021-11-08 13:47:07 | 0:34:11 | 0:22:47 | 0:11:24 | smithi | master | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484640 | 2021-11-04 10:01:44 | 2021-11-08 13:13:06 | 2021-11-08 13:51:14 | 0:38:08 | 0:27:16 | 0:10:52 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/rados_api_tests} | 2 | |
pass | 6484641 | 2021-11-04 10:01:45 | 2021-11-08 13:13:47 | 2021-11-08 13:47:12 | 0:33:25 | 0:22:48 | 0:10:37 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/few rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
pass | 6484642 | 2021-11-04 10:01:46 | 2021-11-08 13:13:57 | 2021-11-08 13:48:39 | 0:34:42 | 0:23:31 | 0:11:11 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6484643 | 2021-11-04 10:01:46 | 2021-11-08 13:17:08 | 2021-11-08 13:48:26 | 0:31:18 | 0:24:36 | 0:06:42 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484644 | 2021-11-04 10:01:47 | 2021-11-08 13:17:28 | 2021-11-08 13:40:57 | 0:23:29 | 0:14:39 | 0:08:50 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6484645 | 2021-11-04 10:01:48 | 2021-11-08 13:18:09 | 2021-11-08 13:49:20 | 0:31:11 | 0:25:46 | 0:05:25 | smithi | master | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/mgr} | 1 | |
pass | 6484646 | 2021-11-04 10:01:49 | 2021-11-08 13:18:09 | 2021-11-08 13:53:36 | 0:35:27 | 0:24:59 | 0:10:28 | smithi | master | centos | 8.3 | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
fail | 6484647 | 2021-11-04 10:01:50 | 2021-11-08 13:18:09 | 2021-11-08 13:48:18 | 0:30:09 | 0:19:16 | 0:10:53 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi151 with status 5: 'sudo systemctl stop ceph-892288f0-4098-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484648 | 2021-11-04 10:01:50 | 2021-11-08 13:19:00 | 2021-11-08 14:03:41 | 0:44:41 | 0:31:58 | 0:12:43 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_mostlyread} | 2 | |
pass | 6484649 | 2021-11-04 10:01:51 | 2021-11-08 13:19:30 | 2021-11-08 14:45:27 | 1:25:57 | 1:16:05 | 0:09:52 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6484650 | 2021-11-04 10:01:52 | 2021-11-08 13:19:30 | 2021-11-08 14:42:58 | 1:23:28 | 1:15:53 | 0:07:35 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
fail | 6484651 | 2021-11-04 10:01:53 | 2021-11-08 13:19:51 | 2021-11-08 13:41:47 | 0:21:56 | 0:12:07 | 0:09:49 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi150 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 6484652 | 2021-11-04 10:01:54 | 2021-11-08 13:20:51 | 2021-11-08 13:40:24 | 0:19:33 | 0:09:10 | 0:10:23 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_read} | 1 | |
fail | 6484653 | 2021-11-04 10:01:54 | 2021-11-08 13:20:52 | 2021-11-08 13:55:26 | 0:34:34 | 0:22:48 | 0:11:46 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484654 | 2021-11-04 10:01:55 | 2021-11-08 13:21:52 | 2021-11-08 14:01:06 | 0:39:14 | 0:31:19 | 0:07:55 | smithi | master | centos | 8.3 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |
dead | 6484655 | 2021-11-04 10:01:56 | 2021-11-08 13:22:02 | 2021-11-09 01:34:18 | 12:12:16 | | | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6484656 | 2021-11-04 10:01:57 | 2021-11-08 13:22:13 | 2021-11-08 13:55:37 | 0:33:24 | 0:22:50 | 0:10:34 | smithi | master | centos | 8.3 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-small-objects-balanced} | 2 | |
pass | 6484657 | 2021-11-04 10:01:57 | 2021-11-08 13:22:33 | 2021-11-08 14:30:41 | 1:08:08 | 0:56:35 | 0:11:33 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/radosbench} | 2 | |
pass | 6484658 | 2021-11-04 10:01:58 | 2021-11-08 13:23:24 | 2021-11-08 13:59:40 | 0:36:16 | 0:27:39 | 0:08:37 | smithi | master | rhel | 8.4 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/sync-many workloads/pool-create-delete} | 2 | |
pass | 6484659 | 2021-11-04 10:01:59 | 2021-11-08 13:24:14 | 2021-11-08 14:00:32 | 0:36:18 | 0:22:12 | 0:14:06 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 6484660 | 2021-11-04 10:02:00 | 2021-11-08 13:25:25 | 2021-11-08 13:55:31 | 0:30:06 | 0:22:15 | 0:07:51 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 6484661 | 2021-11-04 10:02:01 | 2021-11-08 13:26:05 | 2021-11-08 14:01:40 | 0:35:35 | 0:24:11 | 0:11:24 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6484662 | 2021-11-04 10:02:01 | 2021-11-08 13:26:05 | 2021-11-08 13:47:52 | 0:21:47 | 0:09:18 | 0:12:29 | smithi | master | centos | 8.3 | rados/multimon/{clusters/21 mon_election/classic msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 3 | |
pass | 6484663 | 2021-11-04 10:02:02 | 2021-11-08 13:26:26 | 2021-11-08 15:59:54 | 2:33:28 | 2:23:01 | 0:10:27 | smithi | master | centos | 8.stream | rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6484664 | 2021-11-04 10:02:03 | 2021-11-08 13:26:26 | 2021-11-08 13:58:10 | 0:31:44 | 0:19:37 | 0:12:07 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6484665 | 2021-11-04 10:02:04 | 2021-11-08 13:27:07 | 2021-11-08 15:05:09 | 1:38:02 | 1:31:17 | 0:06:45 | smithi | master | rhel | 8.4 | rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6484666 | 2021-11-04 10:02:05 | 2021-11-08 13:27:37 | 2021-11-08 13:52:20 | 0:24:43 | 0:13:04 | 0:11:39 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/mon_recovery validater/lockdep} | 2 | |
fail | 6484667 | 2021-11-04 10:02:05 | 2021-11-08 13:29:08 | 2021-11-08 14:00:44 | 0:31:36 | 0:21:15 | 0:10:21 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason:
Command failed on smithi082 with status 5: 'sudo systemctl stop ceph-448c84be-409a-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484668 | 2021-11-04 10:02:06 | 2021-11-08 13:29:28 | 2021-11-08 14:06:56 | 0:37:28 | 0:30:46 | 0:06:42 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/redirect} | 2 | |
fail | 6484669 | 2021-11-04 10:02:07 | 2021-11-08 13:29:38 | 2021-11-08 14:11:00 | 0:41:22 | 0:33:32 | 0:07:50 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi120 with status 5: 'sudo systemctl stop ceph-8ae83ff6-409b-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484670 | 2021-11-04 10:02:08 | 2021-11-08 13:29:49 | 2021-11-08 13:59:38 | 0:29:49 | 0:18:58 | 0:10:51 | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-stupid supported-random-distro$/{ubuntu_latest} tasks/progress} | 2 | |
fail | 6484671 | 2021-11-04 10:02:09 | 2021-11-08 13:29:59 | 2021-11-08 14:01:35 | 0:31:36 | 0:23:52 | 0:07:44 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi138 with status 5: 'sudo systemctl stop ceph-3db9383a-409a-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484672 | 2021-11-04 10:02:09 | 2021-11-08 13:30:09 | 2021-11-08 13:59:14 | 0:29:05 | 0:19:17 | 0:09:48 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6484673 | 2021-11-04 10:02:10 | 2021-11-08 13:30:10 | 2021-11-08 13:57:09 | 0:26:59 | 0:20:08 | 0:06:51 | smithi | master | rhel | 8.4 | rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/none msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6484674 | 2021-11-04 10:02:11 | 2021-11-08 13:30:10 | 2021-11-08 13:46:11 | 0:16:01 | 0:06:38 | 0:09:23 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_cephadm_repos} | 1 | |
pass | 6484675 | 2021-11-04 10:02:12 | 2021-11-08 13:31:01 | 2021-11-08 13:59:30 | 0:28:29 | 0:16:11 | 0:12:18 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/redirect_promote_tests} | 2 | |
fail | 6484676 | 2021-11-04 10:02:13 | 2021-11-08 13:32:41 | 2021-11-08 14:02:03 | 0:29:22 | 0:19:00 | 0:10:22 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
dead | 6484677 | 2021-11-04 10:02:13 | 2021-11-08 13:32:51 | 2021-11-09 01:45:59 | 12:13:08 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason:
hit max job timeout
pass | 6484678 | 2021-11-04 10:02:14 | 2021-11-08 13:34:12 | 2021-11-08 14:02:07 | 0:27:55 | 0:10:33 | 0:17:22 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/readwrite} | 2 | |
pass | 6484679 | 2021-11-04 10:02:15 | 2021-11-10 13:31:17 | 2021-11-10 13:52:03 | 0:20:46 | 0:09:42 | 0:11:04 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 | |
fail | 6484680 | 2021-11-04 10:02:16 | 2021-11-10 13:32:07 | 2021-11-10 14:10:05 | 0:37:58 | 0:23:03 | 0:14:55 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason:
Command failed on smithi115 with status 5: 'sudo systemctl stop ceph-99a0f6f8-422d-11ec-8c2c-001a4aab830c@mon.b'
dead | 6484681 | 2021-11-04 10:02:17 | 2021-11-10 13:34:38 | 2021-11-11 01:46:26 | 12:11:48 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |||
Failure Reason:
hit max job timeout
pass | 6484682 | 2021-11-04 10:02:18 | 2021-11-10 13:34:48 | 2021-11-10 13:53:04 | 0:18:16 | 0:07:09 | 0:11:07 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/pool-access mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6484683 | 2021-11-04 10:02:18 | 2021-11-10 13:35:08 | 2021-11-10 14:02:31 | 0:27:23 | 0:16:46 | 0:10:37 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/redirect_set_object} | 2 | |
pass | 6484684 | 2021-11-04 10:02:19 | 2021-11-10 13:35:09 | 2021-11-10 13:59:40 | 0:24:31 | 0:13:14 | 0:11:17 | smithi | master | centos | 8.3 | rados/singleton/{all/max-pg-per-osd.from-primary mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6484685 | 2021-11-04 10:02:20 | 2021-11-10 13:36:29 | 2021-11-10 14:00:57 | 0:24:28 | 0:10:59 | 0:13:29 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6484686 | 2021-11-04 10:02:21 | 2021-11-10 13:37:00 | 2021-11-10 14:07:38 | 0:30:38 | 0:19:48 | 0:10:50 | smithi | master | centos | 8.3 | rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
fail | 6484687 | 2021-11-04 10:02:22 | 2021-11-10 13:38:20 | 2021-11-10 14:10:16 | 0:31:56 | 0:23:33 | 0:08:23 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi180 with status 5: 'sudo systemctl stop ceph-bec02a8a-422d-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484688 | 2021-11-04 10:02:23 | 2021-11-10 13:39:01 | 2021-11-10 14:09:05 | 0:30:04 | 0:17:17 | 0:12:47 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/set-chunks-read} | 2 | |
fail | 6484689 | 2021-11-04 10:02:23 | 2021-11-10 13:39:01 | 2021-11-10 14:11:27 | 0:32:26 | 0:20:22 | 0:12:04 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6484690 | 2021-11-04 10:02:24 | 2021-11-10 13:39:41 | 2021-11-10 14:18:54 | 0:39:13 | 0:30:17 | 0:08:56 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6484691 | 2021-11-04 10:02:25 | 2021-11-10 13:39:42 | 2021-11-10 14:44:56 | 1:05:14 | 0:56:07 | 0:09:07 | smithi | master | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/misc} | 1 | |
pass | 6484692 | 2021-11-04 10:02:26 | 2021-11-10 13:39:42 | 2021-11-10 14:03:58 | 0:24:16 | 0:14:10 | 0:10:06 | smithi | master | centos | 8.stream | rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6484693 | 2021-11-04 10:02:27 | 2021-11-10 13:40:32 | 2021-11-10 13:59:43 | 0:19:11 | 0:10:09 | 0:09:02 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_nfs} | 1 | |
Failure Reason:
Command failed on smithi100 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fa97ab6e-422d-11ec-8c2c-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6484694 | 2021-11-04 10:02:27 | 2021-11-10 13:40:33 | 2021-11-10 14:14:22 | 0:33:49 | 0:21:21 | 0:12:28 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} | 2 | |
Failure Reason:
Command failed on smithi157 with status 5: 'sudo systemctl stop ceph-5fbd9c56-422e-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484695 | 2021-11-04 10:02:28 | 2021-11-10 13:40:43 | 2021-11-10 14:07:24 | 0:26:41 | 0:20:12 | 0:06:29 | smithi | master | rhel | 8.4 | rados/objectstore/{backends/fusestore supported-random-distro$/{rhel_8}} | 1 | |
pass | 6484696 | 2021-11-04 10:02:29 | 2021-11-10 13:40:43 | 2021-11-10 14:13:41 | 0:32:58 | 0:20:11 | 0:12:47 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/small-objects-balanced} | 2 | |
pass | 6484697 | 2021-11-04 10:02:30 | 2021-11-10 13:41:34 | 2021-11-10 14:23:27 | 0:41:53 | 0:34:15 | 0:07:38 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read} | 2 | |
pass | 6484698 | 2021-11-04 10:02:31 | 2021-11-10 13:42:24 | 2021-11-10 14:12:44 | 0:30:20 | 0:21:51 | 0:08:29 | smithi | master | rhel | 8.4 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/sync workloads/rados_5925} | 2 | |
fail | 6484699 | 2021-11-04 10:02:32 | 2021-11-10 13:43:15 | 2021-11-10 14:25:43 | 0:42:28 | 0:34:35 | 0:07:53 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi114 with status 5: 'sudo systemctl stop ceph-07050d68-4230-11ec-8c2c-001a4aab830c@mon.b'
dead | 6484700 | 2021-11-04 10:02:32 | 2021-11-10 13:44:45 | 2021-11-11 01:57:55 | 12:13:10 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |||
Failure Reason:
hit max job timeout
pass | 6484701 | 2021-11-04 10:02:33 | 2021-11-10 13:45:16 | 2021-11-10 14:33:04 | 0:47:48 | 0:34:03 | 0:13:45 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6484702 | 2021-11-04 10:02:34 | 2021-11-10 13:47:36 | 2021-11-10 14:23:14 | 0:35:38 | 0:23:27 | 0:12:11 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
pass | 6484703 | 2021-11-04 10:02:35 | 2021-11-10 13:47:47 | 2021-11-10 14:11:11 | 0:23:24 | 0:10:55 | 0:12:29 | smithi | master | centos | 8.3 | rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_recovery} | 2 | |
fail | 6484704 | 2021-11-04 10:02:36 | 2021-11-10 13:50:07 | 2021-11-10 14:21:38 | 0:31:31 | 0:24:08 | 0:07:23 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6484705 | 2021-11-04 10:02:36 | 2021-11-10 13:50:08 | 2021-11-10 15:25:10 | 1:35:02 | 1:22:52 | 0:12:10 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 6484706 | 2021-11-04 10:02:37 | 2021-11-10 13:52:08 | 2021-11-10 14:12:14 | 0:20:06 | 0:10:38 | 0:09:28 | smithi | master | centos | 8.3 | rados/singleton/{all/mon-auth-caps mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6484707 | 2021-11-04 10:02:38 | 2021-11-10 13:53:09 | 2021-11-10 14:13:00 | 0:19:51 | 0:09:08 | 0:10:43 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6484708 | 2021-11-04 10:02:39 | 2021-11-10 13:53:19 | 2021-11-10 14:30:55 | 0:37:36 | 0:30:32 | 0:07:04 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/small-objects-localized} | 2 | |
fail | 6484709 | 2021-11-04 10:02:40 | 2021-11-10 13:53:29 | 2021-11-10 14:29:19 | 0:35:50 | 0:22:47 | 0:13:03 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi125 with status 5: 'sudo systemctl stop ceph-cf8ff9ce-422f-11ec-8c2c-001a4aab830c@mon.b' |
||||||||||||||
dead | 6484710 | 2021-11-04 10:02:41 | 2021-11-10 13:53:40 | 2021-11-10 14:08:36 | 0:14:56 | 0:03:25 | 0:11:31 | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/filestore-xfs supported-random-distro$/{ubuntu_latest} tasks/prometheus} | 2 | |
Failure Reason:
Failure object was: {'smithi156.front.sepia.ceph.com': {'msg': '\'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install \'docker.io\'\' failed: E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 7969 (apt-get)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\n', 'stdout': '', 'stderr': 'E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 7969 (apt-get)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\n', 'rc': 100, 'cache_updated': False, 'cache_update_time': 1636553168, 'invocation': {'module_args': {'name': ['docker.io'], 'state': 'latest', 'package': ['docker.io'], 'cache_valid_time': 0, 'purge': False, 'force': False, 'dpkg_options': 'force-confdef,force-confold', 'autoremove': False, 'autoclean': False, 'only_upgrade': False, 'force_apt_get': False, 'allow_unauthenticated': False, 'update_cache': None, 'deb': None, 'default_release': None, 'install_recommends': None, 'upgrade': None, 'policy_rc_d': None}}, 'stdout_lines': [], 'stderr_lines': ['E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 7969 (apt-get)', 'E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?'], '_ansible_no_log': False, 'changed': False}}
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
    log.error(yaml.safe_dump(failure))
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump
    return dump_all([data], stream, Dumper=SafeDumper, **kwds)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all
    dumper.represent(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent
    node = self.represent_data(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 117, in represent_mapping
    node_key = self.represent_data(item_key)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[None](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined
    raise RepresenterError("cannot represent an object", data)
yaml.representer.RepresenterError: ('cannot represent an object', 'cache_update_time')
pass | 6484711 | 2021-11-04 10:02:42 | 2021-11-10 13:54:40 | 2021-11-10 14:18:57 | 0:24:17 | 0:13:36 | 0:10:41 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} | 1 | |
fail | 6484712 | 2021-11-04 10:02:42 | 2021-11-10 13:55:01 | 2021-11-10 14:14:16 | 0:19:15 | 0:10:10 | 0:09:05 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason:
Command failed on smithi073 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0534b614-4230-11ec-8c2c-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6484713 | 2021-11-04 10:02:43 | 2021-11-10 13:55:01 | 2021-11-10 14:24:47 | 0:29:46 | 0:21:10 | 0:08:36 | smithi | master | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/repair_test} | 2 | |
fail | 6484714 | 2021-11-04 10:02:44 | 2021-11-10 13:57:22 | 2021-11-10 14:29:53 | 0:32:31 | 0:23:38 | 0:08:53 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6484715 | 2021-11-04 10:02:45 | 2021-11-10 13:58:42 | 2021-11-10 14:37:08 | 0:38:26 | 0:31:33 | 0:06:53 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/small-objects} | 2 | |
pass | 6484716 | 2021-11-04 10:02:46 | 2021-11-10 13:59:43 | 2021-11-10 14:20:56 | 0:21:13 | 0:11:21 | 0:09:52 | smithi | master | centos | 8.3 | rados/singleton/{all/mon-config-key-caps mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6484717 | 2021-11-04 10:02:47 | 2021-11-10 13:59:53 | 2021-11-10 14:31:31 | 0:31:38 | 0:24:07 | 0:07:31 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
fail | 6484718 | 2021-11-04 10:02:47 | 2021-11-10 14:00:13 | 2021-11-10 14:33:38 | 0:33:25 | 0:23:35 | 0:09:50 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/dashboard} | 2 | |
Failure Reason:
Test failure: test_ganesha (unittest.loader._FailedTest)
fail | 6484719 | 2021-11-04 10:02:48 | 2021-11-10 14:00:34 | 2021-11-10 14:40:57 | 0:40:23 | 0:27:41 | 0:12:42 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/flannel rook/1.7.0} | 3 | |
Failure Reason:
'check osd count' reached maximum tries (90) after waiting for 900 seconds
pass | 6484720 | 2021-11-04 10:02:49 | 2021-11-10 14:00:54 | 2021-11-10 14:36:09 | 0:35:15 | 0:28:55 | 0:06:20 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6484721 | 2021-11-04 10:02:50 | 2021-11-10 14:01:04 | 2021-11-10 17:40:32 | 3:39:28 | 3:30:17 | 0:09:11 | smithi | master | centos | 8.2 | rados/upgrade/parallel/{0-distro$/{centos_8.2_container_tools_3.0} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi089 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'
dead | 6484722 | 2021-11-04 10:02:50 | 2021-11-10 14:01:05 | 2021-11-11 02:14:48 | 12:13:43 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason:
hit max job timeout
dead | 6484723 | 2021-11-04 10:02:51 | 2021-11-10 14:02:35 | 2021-11-11 02:16:12 | 12:13:37 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason:
hit max job timeout
pass | 6484724 | 2021-11-04 10:02:52 | 2021-11-10 14:04:06 | 2021-11-10 14:56:22 | 0:52:16 | 0:43:51 | 0:08:25 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
pass | 6484725 | 2021-11-04 10:02:53 | 2021-11-10 14:04:56 | 2021-11-10 14:32:33 | 0:27:37 | 0:12:29 | 0:15:08 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6484726 | 2021-11-04 10:02:54 | 2021-11-10 14:07:17 | 2021-11-10 14:32:51 | 0:25:34 | 0:14:51 | 0:10:43 | smithi | master | centos | 8.3 | rados/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.3_container_tools_3.0} 2-node-mgr orchestrator_cli} | 2 | |
pass | 6484727 | 2021-11-04 10:02:54 | 2021-11-10 14:07:48 | 2021-11-10 14:41:00 | 0:33:12 | 0:26:38 | 0:06:34 | smithi | master | rhel | 8.4 | rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6484728 | 2021-11-04 10:02:55 | 2021-11-10 14:07:48 | 2021-11-10 14:27:16 | 0:19:28 | 0:10:27 | 0:09:01 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6484729 | 2021-11-04 10:02:56 | 2021-11-10 14:07:58 | 2021-11-10 14:37:53 | 0:29:55 | 0:18:41 | 0:11:14 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi156 with status 5: 'sudo systemctl stop ceph-b13cd918-4231-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484730 | 2021-11-04 10:02:57 | 2021-11-10 14:08:39 | 2021-11-10 14:49:37 | 0:40:58 | 0:30:29 | 0:10:29 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
fail | 6484731 | 2021-11-04 10:02:58 | 2021-11-10 14:08:39 | 2021-11-10 14:25:42 | 0:17:03 | 0:08:02 | 0:09:01 | smithi | master | centos | 8.2 | rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.2_container_tools_3.0} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason:
Command failed on smithi129 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 88e58e2e-4231-11ec-8c2c-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6484732 | 2021-11-04 10:02:58 | 2021-11-10 14:08:39 | 2021-11-10 14:44:32 | 0:35:53 | 0:23:49 | 0:12:04 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
Command failed on smithi090 with status 5: 'sudo systemctl stop ceph-4a4f7c0a-4232-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484733 | 2021-11-04 10:02:59 | 2021-11-10 14:08:50 | 2021-11-10 14:34:38 | 0:25:48 | 0:15:24 | 0:10:24 | smithi | master | centos | 8.stream | rados/objectstore/{backends/keyvaluedb supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6484734 | 2021-11-04 10:03:00 | 2021-11-10 14:09:10 | 2021-11-10 14:41:18 | 0:32:08 | 0:24:36 | 0:07:32 | smithi | master | rhel | 8.4 | rados/singleton/{all/mon-config mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} | 1 | |
dead | 6484735 | 2021-11-04 10:03:01 | 2021-11-10 14:09:10 | 2021-11-11 02:20:57 | 12:11:47 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 | |||
Failure Reason:
hit max job timeout
pass | 6484736 | 2021-11-04 10:03:02 | 2021-11-10 14:09:31 | 2021-11-10 14:31:29 | 0:21:58 | 0:12:08 | 0:09:50 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} | 1 | |
pass | 6484737 | 2021-11-04 10:03:02 | 2021-11-10 14:10:11 | 2021-11-10 14:48:25 | 0:38:14 | 0:26:26 | 0:11:48 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 2 | |
pass | 6484738 | 2021-11-04 10:03:03 | 2021-11-10 14:10:11 | 2021-11-10 14:41:25 | 0:31:14 | 0:23:56 | 0:07:18 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6484739 | 2021-11-04 10:03:04 | 2021-11-10 14:10:22 | 2021-11-10 15:29:29 | 1:19:07 | 1:09:28 | 0:09:39 | smithi | master | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/mon} | 1 | |
fail | 6484740 | 2021-11-04 10:03:05 | 2021-11-10 14:10:22 | 2021-11-10 14:47:10 | 0:36:48 | 0:25:19 | 0:11:29 | smithi | master | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi155 with status 5: 'sudo systemctl stop ceph-3fdf764e-4232-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484741 | 2021-11-04 10:03:06 | 2021-11-10 14:11:12 | 2021-11-10 14:45:07 | 0:33:55 | 0:22:39 | 0:11:16 | smithi | master | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-many-deletes} | 2 | |
fail | 6484742 | 2021-11-04 10:03:06 | 2021-11-10 14:11:33 | 2021-11-10 14:45:58 | 0:34:25 | 0:22:00 | 0:12:25 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason:
Command failed on smithi188 with status 5: 'sudo systemctl stop ceph-c1cc62d4-4232-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484743 | 2021-11-04 10:03:07 | 2021-11-10 14:12:53 | 2021-11-10 14:52:49 | 0:39:56 | 0:28:22 | 0:11:34 | smithi | master | centos | 8.stream | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} thrashers/force-sync-many workloads/rados_api_tests} | 2 | |
pass | 6484744 | 2021-11-04 10:03:08 | 2021-11-10 14:13:44 | 2021-11-10 14:37:29 | 0:23:45 | 0:09:49 | 0:13:56 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 6484745 | 2021-11-04 10:03:09 | 2021-11-10 14:14:24 | 2021-11-10 15:04:06 | 0:49:42 | 0:38:33 | 0:11:09 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6484746 | 2021-11-04 10:03:10 | 2021-11-10 14:18:55 | 2021-11-10 14:37:01 | 0:18:06 | 0:06:26 | 0:11:40 | smithi | master | ubuntu | 20.04 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/many msgr/async no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} | 2 | |
pass | 6484747 | 2021-11-04 10:03:10 | 2021-11-10 14:19:05 | 2021-11-10 14:45:14 | 0:26:09 | 0:17:29 | 0:08:40 | smithi | master | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/scrub_test} | 2 | |
pass | 6484748 | 2021-11-04 10:03:11 | 2021-11-10 14:21:46 | 2021-11-10 14:41:01 | 0:19:15 | 0:09:44 | 0:09:31 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_adoption} | 1 | |
pass | 6484749 | 2021-11-04 10:03:12 | 2021-11-10 14:21:46 | 2021-11-10 14:50:44 | 0:28:58 | 0:17:11 | 0:11:47 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/rados_cls_all validater/lockdep} | 2 | |
pass | 6484750 | 2021-11-04 10:03:13 | 2021-11-10 14:23:17 | 2021-11-10 14:50:38 | 0:27:21 | 0:21:28 | 0:05:53 | smithi | master | rhel | 8.4 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-bitmap supported-random-distro$/{rhel_8} tasks/workunits} | 2 | |
pass | 6484751 | 2021-11-04 10:03:14 | 2021-11-10 14:23:27 | 2021-11-10 14:58:14 | 0:34:47 | 0:27:11 | 0:07:36 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
fail | 6484752 | 2021-11-04 10:03:14 | 2021-11-10 14:24:48 | 2021-11-10 15:01:33 | 0:36:45 | 0:23:47 | 0:12:58 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6484753 | 2021-11-04 10:03:15 | 2021-11-10 14:25:48 | 2021-11-10 15:18:54 | 0:53:06 | 0:47:01 | 0:06:05 | smithi | master | rhel | 8.4 | rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6484754 | 2021-11-04 10:03:16 | 2021-11-10 14:25:49 | 2021-11-10 14:58:15 | 0:32:26 | 0:24:17 | 0:08:09 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6484755 | 2021-11-04 10:03:17 | 2021-11-10 14:27:19 | 2021-11-10 14:47:13 | 0:19:54 | 0:07:32 | 0:12:22 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6484756 | 2021-11-04 10:03:18 | 2021-11-10 14:29:30 | 2021-11-10 14:58:59 | 0:29:29 | 0:18:55 | 0:10:34 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6484757 | 2021-11-04 10:03:18 | 2021-11-10 14:30:00 | 2021-11-10 15:04:13 | 0:34:13 | 0:22:29 | 0:11:44 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} thrashers/none thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
pass | 6484758 | 2021-11-04 10:03:19 | 2021-11-10 15:29:31 | 2021-11-10 16:08:15 | 0:38:44 | 0:28:25 | 0:10:19 | smithi | master | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6484759 | 2021-11-04 10:03:20 | 2021-11-10 15:30:41 | 2021-11-10 16:00:57 | 0:30:16 | 0:18:59 | 0:11:17 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi102 with status 5: 'sudo systemctl stop ceph-7a2f1862-423d-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484760 | 2021-11-04 10:03:21 | 2021-11-10 15:31:42 | 2021-11-10 16:02:56 | 0:31:14 | 0:22:30 | 0:08:44 | smithi | master | centos | 8.3 | rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6484761 | 2021-11-04 10:03:22 | 2021-11-10 15:31:42 | 2021-11-10 15:53:03 | 0:21:21 | 0:12:07 | 0:09:14 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi191 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 6484762 | 2021-11-04 10:03:23 | 2021-11-10 15:31:52 | 2021-11-10 16:14:17 | 0:42:25 | 0:30:56 | 0:11:29 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{default} supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |
fail | 6484763 | 2021-11-04 10:03:23 | 2021-11-10 15:33:13 | 2021-11-10 16:04:32 | 0:31:19 | 0:20:17 | 0:11:02 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6484764 | 2021-11-04 10:03:24 | 2021-11-10 15:33:13 | 2021-11-10 16:29:27 | 0:56:14 | 0:44:11 | 0:12:03 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/cache-agent-big} | 2 | |
pass | 6484765 | 2021-11-04 10:03:25 | 2021-11-10 15:34:14 | 2021-11-10 16:03:05 | 0:28:51 | 0:18:56 | 0:09:55 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6484766 | 2021-11-04 10:03:26 | 2021-11-10 15:37:24 | 2021-11-10 16:03:12 | 0:25:48 | 0:11:33 | 0:14:15 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6484767 | 2021-11-04 10:03:27 | 2021-11-10 15:39:45 | 2021-11-10 16:14:11 | 0:34:26 | 0:21:12 | 0:13:14 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason:
Command failed on smithi157 with status 5: 'sudo systemctl stop ceph-11ee635a-423f-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484768 | 2021-11-04 10:03:27 | 2021-11-10 15:40:46 | 2021-11-10 16:00:46 | 0:20:00 | 0:09:31 | 0:10:29 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} | 1 | |
fail | 6484769 | 2021-11-04 10:03:28 | 2021-11-10 15:40:46 | 2021-11-10 16:14:53 | 0:34:07 | 0:21:27 | 0:12:40 | smithi | master | centos | 8.2 | rados/cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi204 with status 5: 'sudo systemctl stop ceph-4b1ab94e-423f-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484770 | 2021-11-04 10:03:29 | 2021-11-10 15:43:27 | 2021-11-10 16:15:25 | 0:31:58 | 0:21:28 | 0:10:30 | smithi | master | centos | 8.3 | rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/none msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6484771 | 2021-11-04 10:03:30 | 2021-11-10 15:44:17 | 2021-11-10 16:09:59 | 0:25:42 | 0:15:24 | 0:10:18 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/cache-agent-small} | 2 | |
fail | 6484772 | 2021-11-04 10:03:31 | 2021-11-10 15:44:17 | 2021-11-10 16:20:40 | 0:36:23 | 0:23:08 | 0:13:15 | smithi | master | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6484773 | 2021-11-04 10:03:32 | 2021-11-10 15:44:58 | 2021-11-10 16:20:34 | 0:35:36 | 0:26:07 | 0:09:29 | smithi | master | centos | 8.stream | rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6484774 | 2021-11-04 10:03:32 | 2021-11-10 15:45:18 | 2021-11-10 16:02:44 | 0:17:26 | 0:07:45 | 0:09:41 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/crushdiff mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6484775 | 2021-11-04 10:03:33 | 2021-11-10 15:45:18 | 2021-11-10 16:19:49 | 0:34:31 | 0:23:36 | 0:10:55 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi134 with status 5: 'sudo systemctl stop ceph-d919228a-423f-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484776 | 2021-11-04 10:03:34 | 2021-11-10 15:48:39 | 2021-11-10 16:13:48 | 0:25:09 | 0:11:47 | 0:13:22 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} tasks/libcephsqlite} | 2 | |
pass | 6484777 | 2021-11-04 10:03:35 | 2021-11-10 15:50:00 | 2021-11-10 16:05:13 | 0:15:13 | 0:06:43 | 0:08:30 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 6484778 | 2021-11-04 10:03:36 | 2021-11-10 15:50:00 | 2021-11-10 16:28:55 | 0:38:55 | 0:26:20 | 0:12:35 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/cache-pool-snaps-readproxy} | 2 | |
pass | 6484779 | 2021-11-04 10:03:36 | 2021-11-10 15:51:40 | 2021-11-10 16:09:34 | 0:17:54 | 0:07:15 | 0:10:39 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/peer mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6484780 | 2021-11-04 10:03:37 | 2021-11-10 15:51:41 | 2021-11-10 16:22:17 | 0:30:36 | 0:23:08 | 0:07:28 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6484781 | 2021-11-04 10:03:38 | 2021-11-10 15:52:51 | 2021-11-10 16:37:39 | 0:44:48 | 0:33:25 | 0:11:23 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects} | 2 | |
pass | 6484782 | 2021-11-04 10:03:39 | 2021-11-10 15:58:22 | 2021-11-10 17:10:22 | 1:12:00 | 1:03:25 | 0:08:35 | smithi | master | rhel | 8.4 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_osdmap_prune} | 2 | |
pass | 6484783 | 2021-11-04 10:03:40 | 2021-11-10 16:00:53 | 2021-11-10 16:50:12 | 0:49:19 | 0:39:45 | 0:09:34 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6484784 | 2021-11-04 10:03:40 | 2021-11-10 16:02:54 | 2021-11-10 16:42:41 | 0:39:47 | 0:28:30 | 0:11:17 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6484785 | 2021-11-04 10:03:41 | 2021-11-10 16:03:14 | 2021-11-10 16:21:11 | 0:17:57 | 0:06:31 | 0:11:26 | smithi | master | ubuntu | 20.04 | rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} | 3 | |
dead | 6484786 | 2021-11-04 10:03:42 | 2021-11-10 16:03:14 | 2021-11-11 04:13:34 | 12:10:20 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason:
hit max job timeout
pass | 6484787 | 2021-11-04 10:03:43 | 2021-11-10 16:04:35 | 2021-11-10 16:28:23 | 0:23:48 | 0:13:47 | 0:10:01 | smithi | master | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-lz4 supported-random-distro$/{centos_8.stream} tasks/crash} | 2 | |
pass | 6484788 | 2021-11-04 10:03:44 | 2021-11-10 16:05:05 | 2021-11-10 16:30:23 | 0:25:18 | 0:19:42 | 0:05:36 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6484789 | 2021-11-04 10:03:44 | 2021-11-10 16:05:06 | 2021-11-10 20:38:51 | 4:33:45 | 4:23:35 | 0:10:10 | smithi | master | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/osd-backfill} | 1 | |
pass | 6484790 | 2021-11-04 10:03:45 | 2021-11-10 16:05:06 | 2021-11-10 16:40:43 | 0:35:37 | 0:26:27 | 0:09:10 | smithi | master | centos | 8.3 | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 | |
pass | 6484791 | 2021-11-04 10:03:46 | 2021-11-10 16:05:16 | 2021-11-10 17:08:22 | 1:03:06 | 0:50:11 | 0:12:55 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/mon_recovery validater/valgrind} | 2 | |
pass | 6484792 | 2021-11-04 10:03:47 | 2021-11-10 16:06:37 | 2021-11-10 16:39:04 | 0:32:27 | 0:19:46 | 0:12:41 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps} | 2 | |
dead | 6484793 | 2021-11-04 10:03:48 | 2021-11-10 16:06:57 | 2021-11-11 04:19:17 | 12:12:20 | smithi | master | centos | 8.3 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |||
Failure Reason:
hit max job timeout
fail | 6484794 | 2021-11-04 10:03:48 | 2021-11-10 16:08:18 | 2021-11-10 16:39:52 | 0:31:34 | 0:23:19 | 0:08:15 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi125 with status 5: 'sudo systemctl stop ceph-ed6b74e2-4242-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484795 | 2021-11-04 10:03:49 | 2021-11-10 16:10:08 | 2021-11-10 16:35:40 | 0:25:32 | 0:11:33 | 0:13:59 | smithi | master | centos | 8.3 | rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 2 | |
fail | 6484796 | 2021-11-04 10:03:50 | 2021-11-10 16:12:29 | 2021-11-10 17:21:28 | 1:08:59 | 1:00:24 | 0:08:35 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_nfs} | 1 | |
Failure Reason:
Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2841fa00-4243-11ec-8c2c-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6484797 | 2021-11-04 10:03:51 | 2021-11-10 16:12:29 | 2021-11-10 16:55:17 | 0:42:48 | 0:31:30 | 0:11:18 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/cache-snaps-balanced} | 2 | |
pass | 6484798 | 2021-11-04 10:03:52 | 2021-11-10 16:13:50 | 2021-11-10 16:33:59 | 0:20:09 | 0:10:06 | 0:10:03 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} | 1 | |
fail | 6484799 | 2021-11-04 10:03:52 | 2021-11-10 16:13:50 | 2021-11-10 16:45:32 | 0:31:42 | 0:23:59 | 0:07:43 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 6484800 | 2021-11-04 10:03:53 | 2021-11-10 16:14:20 | 2021-11-10 16:32:23 | 0:18:03 | 0:07:13 | 0:10:50 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 6484801 | 2021-11-04 10:03:54 | 2021-11-10 16:14:21 | 2021-11-11 04:24:36 | 12:10:15 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason:
hit max job timeout
fail | 6484802 | 2021-11-04 10:03:55 | 2021-11-10 16:15:01 | 2021-11-10 16:52:30 | 0:37:29 | 0:22:17 | 0:15:12 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason: Command failed on smithi132 with status 5: 'sudo systemctl stop ceph-37b124ba-4244-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484803 | 2021-11-04 10:03:56 | 2021-11-10 16:16:42 | 2021-11-10 16:47:06 | 0:30:24 | 0:21:33 | 0:08:51 | smithi | master | rhel | 8.4 | rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 2 | |
fail | 6484804 | 2021-11-04 10:03:56 | 2021-11-10 16:19:53 | 2021-11-10 16:51:47 | 0:31:54 | 0:20:16 | 0:11:38 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484805 | 2021-11-04 10:03:57 | 2021-11-10 16:20:43 | 2021-11-10 17:00:05 | 0:39:22 | 0:32:13 | 0:07:09 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/cache-snaps} | 2 | |
pass | 6484806 | 2021-11-04 10:03:58 | 2021-11-10 16:20:43 | 2021-11-10 16:46:57 | 0:26:14 | 0:12:42 | 0:13:32 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6484807 | 2021-11-04 10:03:59 | 2021-11-10 16:21:14 | 2021-11-10 16:53:28 | 0:32:14 | 0:20:50 | 0:11:24 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi180 with status 5: 'sudo systemctl stop ceph-cad06594-4244-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484808 | 2021-11-04 10:04:00 | 2021-11-10 16:22:24 | 2021-11-10 16:59:49 | 0:37:25 | 0:28:10 | 0:09:15 | smithi | master | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream} tasks/rados_api_tests} | 2 | |
fail | 6484809 | 2021-11-04 10:04:01 | 2021-11-10 16:22:45 | 2021-11-10 16:57:02 | 0:34:17 | 0:21:35 | 0:12:42 | smithi | master | centos | 8.3 | rados/cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi150 with status 5: 'sudo systemctl stop ceph-45630c08-4245-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484810 | 2021-11-04 10:04:01 | 2021-11-10 16:25:36 | 2021-11-10 16:49:27 | 0:23:51 | 0:11:52 | 0:11:59 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6484811 | 2021-11-04 10:04:02 | 2021-11-10 16:28:16 | 2021-11-10 16:53:50 | 0:25:34 | 0:15:13 | 0:10:21 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/cache} | 2 | |
pass | 6484812 | 2021-11-04 10:04:03 | 2021-11-10 16:28:27 | 2021-11-10 18:59:32 | 2:31:05 | 2:09:42 | 0:21:23 | smithi | master | centos | 8.stream | rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6484813 | 2021-11-04 10:04:04 | 2021-11-10 16:28:27 | 2021-11-10 17:02:38 | 0:34:11 | 0:23:16 | 0:10:55 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484814 | 2021-11-04 10:04:05 | 2021-11-10 16:28:57 | 2021-11-10 16:48:53 | 0:19:56 | 0:10:28 | 0:09:28 | smithi | master | centos | 8.stream | rados/singleton/{all/pg-removal-interruption mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6484815 | 2021-11-04 10:04:05 | 2021-11-10 16:29:28 | 2021-11-10 17:04:29 | 0:35:01 | 0:23:38 | 0:11:23 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi170 with status 5: 'sudo systemctl stop ceph-747a8bb0-4245-11ec-8c2c-001a4aab830c@mon.b'
fail | 6484816 | 2021-11-04 10:04:06 | 2021-11-10 16:30:28 | 2021-11-10 16:51:56 | 0:21:28 | 0:09:56 | 0:11:32 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi157 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0db7f6dc-4246-11ec-8c2c-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6484817 | 2021-11-04 10:04:07 | 2021-11-10 16:32:29 | 2021-11-10 17:11:27 | 0:38:58 | 0:25:35 | 0:13:23 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |
pass | 6484818 | 2021-11-04 10:04:08 | 2021-11-10 16:35:30 | 2021-11-10 16:57:11 | 0:21:41 | 0:09:57 | 0:11:44 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/dedup-io-mixed} | 2 | |
fail | 6484819 | 2021-11-04 10:04:09 | 2021-11-10 16:35:30 | 2021-11-10 16:57:16 | 0:21:46 | 0:11:06 | 0:10:40 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/e2e} | 2 | |
Failure Reason: Command failed on smithi085 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a1246130-4246-11ec-8c2c-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6484820 | 2021-11-04 10:04:09 | 2021-11-10 16:35:50 | 2021-11-10 17:13:38 | 0:37:48 | 0:26:01 | 0:11:47 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/master} | 1 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
pass | 6484821 | 2021-11-04 10:04:10 | 2021-11-10 16:39:31 | 2021-11-10 16:58:42 | 0:19:11 | 0:09:51 | 0:09:20 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6484822 | 2021-11-04 10:04:11 | 2021-11-10 16:39:31 | 2021-11-10 17:09:32 | 0:30:01 | 0:20:09 | 0:09:52 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484823 | 2021-11-04 10:04:12 | 2021-11-10 16:40:02 | 2021-11-10 17:24:32 | 0:44:30 | 0:32:31 | 0:11:59 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
fail | 6484824 | 2021-11-04 10:04:13 | 2021-11-10 16:40:42 | 2021-11-10 17:12:19 | 0:31:37 | 0:19:38 | 0:11:59 | smithi | master | centos | 8.3 | rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484825 | 2021-11-04 10:04:13 | 2021-11-10 16:40:52 | 2021-11-10 17:08:09 | 0:27:17 | 0:16:48 | 0:10:29 | smithi | master | centos | 8.stream | rados/singleton/{all/radostool mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6484826 | 2021-11-04 10:04:14 | 2021-11-10 16:42:43 | 2021-11-10 17:24:34 | 0:41:51 | 0:29:05 | 0:12:46 | smithi | master | centos | 8.stream | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream} thrashers/one workloads/rados_mon_workunits} | 2 | |
pass | 6484827 | 2021-11-04 10:04:15 | 2021-11-10 16:45:34 | 2021-11-10 17:12:31 | 0:26:57 | 0:13:23 | 0:13:34 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8.stream} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 6484828 | 2021-11-04 10:04:16 | 2021-11-10 17:46:23 | 2021-11-10 18:32:09 | 0:45:46 | 0:38:31 | 0:07:15 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6484829 | 2021-11-04 10:04:17 | 2021-11-10 17:46:44 | 2021-11-10 18:06:50 | 0:20:06 | 0:10:41 | 0:09:25 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} | 1 | |
pass | 6484830 | 2021-11-04 10:04:17 | 2021-11-10 17:46:54 | 2021-11-10 18:23:02 | 0:36:08 | 0:23:54 | 0:12:14 | smithi | master | ubuntu | 20.04 | rados/multimon/{clusters/21 mon_election/classic msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} | 3 | |
pass | 6484831 | 2021-11-04 10:04:18 | 2021-11-10 17:46:55 | 2021-11-10 18:18:27 | 0:31:32 | 0:25:10 | 0:06:22 | smithi | master | rhel | 8.4 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-snappy supported-random-distro$/{rhel_8} tasks/failover} | 2 | |
pass | 6484832 | 2021-11-04 10:04:19 | 2021-11-10 17:47:15 | 2021-11-10 18:19:23 | 0:32:08 | 0:20:27 | 0:11:41 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/dedup-io-snaps} | 2 | |
dead | 6484833 | 2021-11-04 10:04:20 | 2021-11-10 17:47:55 | 2021-11-11 05:57:30 | 12:09:35 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: hit max job timeout
pass | 6484834 | 2021-11-04 10:04:21 | 2021-11-10 17:47:56 | 2021-11-10 18:23:23 | 0:35:27 | 0:25:06 | 0:10:21 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_api_tests validater/lockdep} | 2 | |
fail | 6484835 | 2021-11-04 10:04:21 | 2021-11-10 17:47:56 | 2021-11-10 18:17:38 | 0:29:42 | 0:18:45 | 0:10:57 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi123 with status 5: 'sudo systemctl stop ceph-84c0d244-4250-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484836 | 2021-11-04 10:04:22 | 2021-11-10 17:52:10 | 2021-11-10 18:13:55 | 0:21:45 | 0:11:10 | 0:10:35 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6484837 | 2021-11-04 10:04:23 | 2021-11-10 17:52:40 | 2021-11-10 21:22:52 | 3:30:12 | 3:23:23 | 0:06:49 | smithi | master | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/osd} | 1 | |
fail | 6484838 | 2021-11-04 10:04:24 | 2021-11-10 17:53:51 | 2021-11-10 18:09:31 | 0:15:40 | 0:06:34 | 0:09:06 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason: Command failed on smithi094 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid db42a034-4250-11ec-8c2c-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6484839 | 2021-11-04 10:04:25 | 2021-11-10 17:53:51 | 2021-11-10 18:32:34 | 0:38:43 | 0:26:38 | 0:12:05 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
pass | 6484840 | 2021-11-04 10:04:25 | 2021-11-10 17:54:22 | 2021-11-10 18:38:10 | 0:43:48 | 0:25:56 | 0:17:52 | smithi | master | centos | 8.stream | rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream}} | 2 | |
fail | 6484841 | 2021-11-04 10:04:26 | 2021-11-10 18:01:47 | 2021-11-10 18:33:03 | 0:31:16 | 0:21:45 | 0:09:31 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi134 with status 5: 'sudo systemctl stop ceph-a90aef34-4252-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484842 | 2021-11-04 10:04:27 | 2021-11-10 18:01:48 | 2021-11-10 18:29:55 | 0:28:07 | 0:16:34 | 0:11:33 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/rados_cls_all} | 2 | |
dead | 6484843 | 2021-11-04 10:04:28 | 2021-11-10 18:02:58 | 2021-11-11 06:15:55 | 12:12:57 | | | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 | |
Failure Reason: hit max job timeout
fail | 6484844 | 2021-11-04 10:04:28 | 2021-11-10 18:04:09 | 2021-11-10 18:46:57 | 0:42:48 | 0:34:00 | 0:08:48 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi168 with status 5: 'sudo systemctl stop ceph-89a71e18-4254-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484845 | 2021-11-04 10:04:29 | 2021-11-10 18:05:39 | 2021-11-10 18:45:19 | 0:39:40 | 0:26:23 | 0:13:17 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/rados_api_tests} | 2 | |
pass | 6484846 | 2021-11-04 10:04:30 | 2021-11-10 18:06:30 | 2021-11-10 18:27:00 | 0:20:30 | 0:10:00 | 0:10:30 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6484847 | 2021-11-04 10:04:31 | 2021-11-10 18:07:00 | 2021-11-10 18:35:15 | 0:28:15 | 0:14:04 | 0:14:11 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6484848 | 2021-11-04 10:04:32 | 2021-11-10 18:08:00 | 2021-11-10 18:27:16 | 0:19:16 | 0:09:57 | 0:09:19 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_adoption} | 1 | |
pass | 6484849 | 2021-11-04 10:04:32 | 2021-11-10 18:08:01 | 2021-11-10 18:41:13 | 0:33:12 | 0:26:54 | 0:06:18 | smithi | master | rhel | 8.4 | rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6484850 | 2021-11-04 10:04:33 | 2021-11-10 18:08:01 | 2021-11-10 18:38:17 | 0:30:16 | 0:19:29 | 0:10:47 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484851 | 2021-11-04 10:04:34 | 2021-11-10 18:09:01 | 2021-11-10 21:05:12 | 2:56:11 | 2:26:00 | 0:30:11 | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6484852 | 2021-11-04 10:04:35 | 2021-11-10 18:09:02 | 2021-11-10 19:20:41 | 1:11:39 | 1:02:10 | 0:09:29 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
fail | 6484853 | 2021-11-04 10:04:36 | 2021-11-10 18:09:22 | 2021-11-10 18:43:28 | 0:34:06 | 0:23:39 | 0:10:27 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484854 | 2021-11-04 10:04:36 | 2021-11-10 18:12:03 | 2021-11-10 18:32:12 | 0:20:09 | 0:09:08 | 0:11:01 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_write} | 1 | |
fail | 6484855 | 2021-11-04 10:04:37 | 2021-11-10 18:12:03 | 2021-11-10 18:41:34 | 0:29:31 | 0:19:09 | 0:10:22 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi115 with status 5: 'sudo systemctl stop ceph-e72487b6-4253-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484856 | 2021-11-04 10:04:38 | 2021-11-10 18:12:03 | 2021-11-10 18:44:49 | 0:32:46 | 0:24:53 | 0:07:53 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6484857 | 2021-11-04 10:04:39 | 2021-11-10 18:14:04 | 2021-11-10 18:57:19 | 0:43:15 | 0:29:04 | 0:14:11 | smithi | master | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6484858 | 2021-11-04 10:04:40 | 2021-11-10 18:17:35 | 2021-11-10 19:24:40 | 1:07:05 | 1:00:16 | 0:06:49 | smithi | master | rhel | 8.4 | rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6484859 | 2021-11-04 10:04:40 | 2021-11-10 18:17:35 | 2021-11-10 18:48:48 | 0:31:13 | 0:23:50 | 0:07:23 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484860 | 2021-11-04 10:04:41 | 2021-11-10 18:17:45 | 2021-11-10 18:55:33 | 0:37:48 | 0:27:19 | 0:10:29 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/radosbench} | 2 | |
fail | 6484861 | 2021-11-04 10:04:42 | 2021-11-10 18:18:06 | 2021-11-10 18:52:54 | 0:34:48 | 0:22:06 | 0:12:42 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: Command failed on smithi135 with status 5: 'sudo systemctl stop ceph-2cb01eac-4255-11ec-8c2c-001a4aab830c@mon.b'
fail | 6484862 | 2021-11-04 10:04:43 | 2021-11-10 23:20:35 | 2021-11-10 23:41:49 | 0:21:14 | 0:12:26 | 0:08:48 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi146 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
dead | 6484863 | 2021-11-04 10:04:44 | 2021-11-10 23:20:35 | 2021-11-11 11:32:08 | 12:11:33 | | | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: hit max job timeout
pass | 6484864 | 2021-11-04 10:04:45 | 2021-11-10 23:20:36 | 2021-11-10 23:55:46 | 0:35:10 | 0:22:13 | 0:12:57 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} | 2 | |
pass | 6484865 | 2021-11-04 10:04:45 | 2021-11-10 23:21:58 | 2021-11-10 23:48:48 | 0:26:50 | 0:11:50 | 0:15:00 | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} tasks/insights} | 2 | |
pass | 6484866 | 2021-11-04 10:04:46 | 2021-11-10 23:24:39 | 2021-11-11 00:16:26 | 0:51:47 | 0:41:55 | 0:09:52 | smithi | master | rhel | 8.4 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/sync-many workloads/snaps-few-objects} | 2 | |
pass | 6484867 | 2021-11-04 10:04:47 | 2021-11-10 23:27:00 | 2021-11-11 00:10:48 | 0:43:48 | 0:27:45 | 0:16:03 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6484868 | 2021-11-04 10:04:48 | 2021-11-10 23:31:21 | 2021-11-11 00:22:03 | 0:50:42 | 0:37:25 | 0:13:17 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6484869 | 2021-11-04 10:04:49 | 2021-11-10 23:38:42 | 2021-11-11 00:04:06 | 0:25:24 | 0:19:16 | 0:06:08 | smithi | master | rhel | 8.4 | rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/few msgr/async no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} tasks/mon_clock_no_skews} | 2 | |
pass | 6484870 | 2021-11-04 10:04:49 | 2021-11-10 23:38:52 | 2021-11-11 00:13:55 | 0:35:03 | 0:19:26 | 0:15:37 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 6484871 | 2021-11-04 10:04:50 | 2021-11-10 23:41:53 | 2021-11-11 00:06:37 | 0:24:44 | 0:13:21 | 0:11:23 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/redirect} | 2 | |
fail | 6484872 | 2021-11-04 10:04:51 | 2021-11-10 23:43:13 | 2021-11-11 00:17:41 | 0:34:28 | 0:22:50 | 0:11:38 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi031 with status 5: 'sudo systemctl stop ceph-85d90692-4282-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484873 | 2021-11-04 10:04:52 | 2021-11-10 23:44:14 | 2021-11-11 00:07:49 | 0:23:35 | 0:11:22 | 0:12:13 | smithi | master | centos | 8.stream | rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/none msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream}} | 2 | |
fail | 6484874 | 2021-11-04 10:04:53 | 2021-11-10 23:44:54 | 2021-11-11 03:38:21 | 3:53:27 | 3:38:48 | 0:14:39 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi007 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'
pass | 6484875 | 2021-11-04 10:04:53 | 2021-11-10 23:48:55 | 2021-11-11 00:14:27 | 0:25:32 | 0:15:22 | 0:10:10 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/rados_python} | 2 | |
fail | 6484876 | 2021-11-04 10:04:54 | 2021-11-10 23:49:15 | 2021-11-11 00:30:38 | 0:41:23 | 0:33:30 | 0:07:53 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi186 with status 5: 'sudo systemctl stop ceph-614938c2-4284-11ec-8c2c-001a4aab830c@mon.b'
fail | 6484877 | 2021-11-04 10:04:55 | 2021-11-10 23:49:16 | 2021-11-11 00:21:52 | 0:32:36 | 0:24:58 | 0:07:38 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
pass | 6484878 | 2021-11-04 10:04:56 | 2021-11-10 23:50:26 | 2021-11-11 00:19:53 | 0:29:27 | 0:17:21 | 0:12:06 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/redirect_promote_tests} | 2 | |
pass | 6484879 | 2021-11-04 10:04:57 | 2021-11-10 23:50:27 | 2021-11-11 00:29:57 | 0:39:30 | 0:28:18 | 0:11:12 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
fail | 6484880 | 2021-11-04 10:04:58 | 2021-11-10 23:50:27 | 2021-11-11 00:20:53 | 0:30:26 | 0:23:05 | 0:07:21 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi151 with status 5: 'sudo systemctl stop ceph-5867154a-4283-11ec-8c2c-001a4aab830c@mon.b' |
pass | 6484881 | 2021-11-04 10:04:58 | 2021-11-10 23:51:17 | 2021-11-11 00:28:22 | 0:37:05 | 0:29:53 | 0:07:12 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6484882 | 2021-11-04 10:04:59 | 2021-11-10 23:51:28 | 2021-11-11 02:41:47 | 2:50:19 | 2:39:39 | 0:10:40 | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/scrub} | 1 | |
pass | 6484883 | 2021-11-04 10:05:00 | 2021-11-10 23:51:28 | 2021-11-11 00:17:33 | 0:26:05 | 0:12:57 | 0:13:08 | smithi | master | centos | 8.stream | rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6484884 | 2021-11-04 10:05:01 | 2021-11-10 23:53:48 | 2021-11-11 00:08:59 | 0:15:11 | 0:06:43 | 0:08:28 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_cephadm_repos} | 1 | |
pass | 6484885 | 2021-11-04 10:05:02 | 2021-11-10 23:53:49 | 2021-11-11 00:25:15 | 0:31:26 | 0:21:54 | 0:09:32 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} | 1 | |
fail | 6484886 | 2021-11-04 10:05:02 | 2021-11-10 23:53:49 | 2021-11-11 00:27:15 | 0:33:26 | 0:24:30 | 0:08:56 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
pass | 6484887 | 2021-11-04 10:05:03 | 2021-11-10 23:55:50 | 2021-11-11 00:23:12 | 0:27:22 | 0:12:00 | 0:15:22 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/redirect_set_object} | 2 | |
fail | 6484888 | 2021-11-04 10:05:04 | 2021-11-10 23:59:00 | 2021-11-11 00:35:57 | 0:36:57 | 0:23:08 | 0:13:49 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
pass | 6484889 | 2021-11-04 10:05:05 | 2021-11-11 00:02:11 | 2021-11-11 00:32:06 | 0:29:55 | 0:12:03 | 0:17:52 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6484890 | 2021-11-04 10:05:06 | 2021-11-11 00:06:42 | 2021-11-11 00:49:05 | 0:42:23 | 0:31:28 | 0:10:55 | smithi | master | centos | 8.stream | rados/objectstore/{backends/objectstore-filestore-memstore supported-random-distro$/{centos_8.stream}} | 1 | |
dead | 6484891 | 2021-11-04 10:05:06 | 2021-11-11 00:07:53 | 2021-11-11 12:21:28 | 12:13:35 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason:
hit max job timeout |
pass | 6484892 | 2021-11-04 10:05:07 | 2021-11-11 00:08:53 | 2021-11-11 00:32:34 | 0:23:41 | 0:14:23 | 0:09:18 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6484893 | 2021-11-04 10:05:08 | 2021-11-11 00:08:53 | 2021-11-11 00:35:50 | 0:26:57 | 0:20:58 | 0:05:59 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6484894 | 2021-11-04 10:05:09 | 2021-11-11 00:08:54 | 2021-11-11 00:38:21 | 0:29:27 | 0:17:45 | 0:11:42 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/set-chunks-read} | 2 | |
dead | 6484895 | 2021-11-04 10:05:10 | 2021-11-11 00:10:54 | 2021-11-11 12:22:32 | 12:11:38 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |||
Failure Reason:
hit max job timeout |
fail | 6484896 | 2021-11-04 10:05:10 | 2021-11-11 00:10:55 | 2021-11-11 00:44:48 | 0:33:53 | 0:23:40 | 0:10:13 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi146 with status 5: 'sudo systemctl stop ceph-631543c4-4286-11ec-8c2c-001a4aab830c@mon.b' |
fail | 6484897 | 2021-11-04 10:05:11 | 2021-11-11 00:13:56 | 2021-11-11 00:33:47 | 0:19:51 | 0:09:54 | 0:09:57 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_nfs} | 1 | |
Failure Reason:
Command failed on smithi100 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 86e34378-4286-11ec-8c2c-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
pass | 6484898 | 2021-11-04 10:05:12 | 2021-11-11 00:14:36 | 2021-11-11 00:50:03 | 0:35:27 | 0:22:11 | 0:13:16 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} thrashers/none thrashosds-health workloads/small-objects-balanced} | 2 | |
pass | 6484899 | 2021-11-04 10:05:13 | 2021-11-11 00:16:37 | 2021-11-11 01:53:01 | 1:36:24 | 1:22:05 | 0:14:19 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/thrash-backfill-full mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 6484900 | 2021-11-04 10:05:14 | 2021-11-11 00:17:37 | 2021-11-11 00:47:06 | 0:29:29 | 0:19:41 | 0:09:48 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
pass | 6484901 | 2021-11-04 10:05:15 | 2021-11-11 00:17:47 | 2021-11-11 01:21:04 | 1:03:17 | 0:53:53 | 0:09:24 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6484902 | 2021-11-04 10:05:15 | 2021-11-11 00:19:58 | 2021-11-11 00:48:29 | 0:28:31 | 0:20:00 | 0:08:31 | smithi | master | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} tasks/rados_stress_watch} | 2 | |
fail | 6484903 | 2021-11-04 10:05:16 | 2021-11-11 00:20:59 | 2021-11-11 00:52:59 | 0:32:00 | 0:21:50 | 0:10:10 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
Failure Reason:
Command failed on smithi165 with status 5: 'sudo systemctl stop ceph-ca942b22-4287-11ec-8c2c-001a4aab830c@mon.b' |
fail | 6484904 | 2021-11-04 10:05:17 | 2021-11-11 00:21:59 | 2021-11-11 00:49:21 | 0:27:22 | 0:16:43 | 0:10:39 | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-zstd supported-random-distro$/{centos_8} tasks/module_selftest} | 2 | |
Failure Reason:
Test failure: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) |
pass | 6484905 | 2021-11-04 10:05:18 | 2021-11-11 00:22:09 | 2021-11-11 01:13:01 | 0:50:52 | 0:43:29 | 0:07:23 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
pass | 6484906 | 2021-11-04 10:05:19 | 2021-11-11 00:23:20 | 2021-11-11 01:00:29 | 0:37:09 | 0:24:35 | 0:12:34 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/small-objects-localized} | 2 | |
pass | 6484907 | 2021-11-04 10:05:19 | 2021-11-11 00:25:20 | 2021-11-11 01:00:34 | 0:35:14 | 0:26:57 | 0:08:17 | smithi | master | rhel | 8.4 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/sync workloads/pool-create-delete} | 2 | |
pass | 6484908 | 2021-11-04 10:05:20 | 2021-11-11 00:27:21 | 2021-11-11 00:51:42 | 0:24:21 | 0:09:58 | 0:14:23 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 6484909 | 2021-11-04 10:05:21 | 2021-11-11 00:30:02 | 2021-11-11 01:06:01 | 0:35:59 | 0:25:28 | 0:10:31 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8.stream} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 6484910 | 2021-11-04 10:05:22 | 2021-11-11 00:30:42 | 2021-11-11 01:08:38 | 0:37:56 | 0:24:06 | 0:13:50 | smithi | master | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi190 with status 5: 'sudo systemctl stop ceph-1e665d1e-4289-11ec-8c2c-001a4aab830c@mon.b' |
pass | 6484911 | 2021-11-04 10:05:23 | 2021-11-11 00:32:13 | 2021-11-11 00:57:24 | 0:25:11 | 0:19:17 | 0:05:54 | smithi | master | rhel | 8.4 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} | 2 | |
pass | 6484912 | 2021-11-04 10:05:23 | 2021-11-11 00:32:13 | 2021-11-11 00:57:04 | 0:24:51 | 0:12:26 | 0:12:25 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/mon_recovery validater/lockdep} | 2 | |
fail | 6484913 | 2021-11-04 10:05:24 | 2021-11-11 00:33:53 | 2021-11-11 01:09:53 | 0:36:00 | 0:22:53 | 0:13:07 | smithi | master | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
pass | 6484914 | 2021-11-04 10:05:25 | 2021-11-11 00:36:04 | 2021-11-11 00:57:30 | 0:21:26 | 0:12:46 | 0:08:40 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_fio} | 1 | |
pass | 6484915 | 2021-11-04 10:05:26 | 2021-11-11 00:36:04 | 2021-11-11 01:20:26 | 0:44:22 | 0:30:05 | 0:14:17 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/thrash-eio mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 6484916 | 2021-11-04 10:05:27 | 2021-11-11 00:38:25 | 2021-11-11 01:14:14 | 0:35:49 | 0:22:09 | 0:13:40 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6484917 | 2021-11-04 10:05:27 | 2021-11-11 00:49:14 | 2021-11-11 01:20:28 | 0:31:14 | 0:20:01 | 0:11:13 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
pass | 6484918 | 2021-11-04 10:05:28 | 2021-11-11 00:49:25 | 2021-11-11 01:28:56 | 0:39:31 | 0:33:13 | 0:06:18 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/small-objects} | 2 | |
fail | 6484919 | 2021-11-04 10:05:29 | 2021-11-11 00:50:05 | 2021-11-11 01:25:48 | 0:35:43 | 0:22:01 | 0:13:42 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi194 with status 5: 'sudo systemctl stop ceph-7bfc0454-428b-11ec-8c2c-001a4aab830c@mon.b' |
fail | 6484920 | 2021-11-04 10:05:30 | 2021-11-11 00:51:46 | 2021-11-11 01:28:07 | 0:36:21 | 0:22:38 | 0:13:43 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} | 3 | |
Failure Reason:
Command failed on smithi038 with status 5: 'sudo systemctl stop ceph-479c3be2-428c-11ec-8c2c-001a4aab830c@mon.b' |
fail | 6484921 | 2021-11-04 10:05:31 | 2021-11-11 00:53:06 | 2021-11-11 01:16:17 | 0:23:11 | 0:09:49 | 0:13:22 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason:
Command failed on smithi142 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8134207c-428c-11ec-8c2c-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
pass | 6484922 | 2021-11-04 10:05:32 | 2021-11-11 00:57:07 | 2021-11-11 01:30:54 | 0:33:47 | 0:22:14 | 0:11:33 | smithi | master | centos | 8.stream | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream}} | 2 | |
pass | 6484923 | 2021-11-04 10:05:32 | 2021-11-11 00:57:27 | 2021-11-11 01:44:47 | 0:47:20 | 0:41:16 | 0:06:04 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
fail | 6484924 | 2021-11-04 10:05:33 | 2021-11-11 00:57:38 | 2021-11-11 01:29:49 | 0:32:11 | 0:23:21 | 0:08:50 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
fail | 6484925 | 2021-11-04 10:05:34 | 2021-11-11 01:00:38 | 2021-11-11 01:33:35 | 0:32:57 | 0:23:16 | 0:09:41 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/dashboard} | 2 | |
Failure Reason:
Test failure: test_ganesha (unittest.loader._FailedTest) |
pass | 6484926 | 2021-11-04 10:05:35 | 2021-11-11 01:00:39 | 2021-11-11 01:30:33 | 0:29:54 | 0:20:06 | 0:09:48 | smithi | master | rhel | 8.4 | rados/objectstore/{backends/alloc-hint supported-random-distro$/{rhel_8}} | 1 | |
pass | 6484927 | 2021-11-04 10:05:36 | 2021-11-11 01:03:19 | 2021-11-11 01:36:12 | 0:32:53 | 0:25:52 | 0:07:01 | smithi | master | rhel | 8.4 | rados/rest/{mgr-restful supported-random-distro$/{rhel_8}} | 1 | |
fail | 6484928 | 2021-11-04 10:05:37 | 2021-11-11 01:03:20 | 2021-11-11 01:42:11 | 0:38:51 | 0:25:40 | 0:13:11 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.7.0} | 1 | |
Failure Reason:
'check osd count' reached maximum tries (90) after waiting for 900 seconds |
pass | 6484929 | 2021-11-04 10:05:37 | 2021-11-11 01:06:10 | 2021-11-11 01:43:22 | 0:37:12 | 0:30:53 | 0:06:19 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6484930 | 2021-11-04 10:05:38 | 2021-11-11 01:06:11 | 2021-11-11 01:31:28 | 0:25:17 | 0:11:23 | 0:13:54 | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/c2c} | 1 | |
fail | 6484931 | 2021-11-04 10:05:39 | 2021-11-11 01:08:41 | 2021-11-11 04:54:05 | 3:45:24 | 3:32:47 | 0:12:37 | smithi | master | ubuntu | 20.04 | rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi180 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh' |
pass | 6484932 | 2021-11-04 10:05:40 | 2021-11-11 01:10:02 | 2021-11-11 01:45:06 | 0:35:04 | 0:25:28 | 0:09:36 | smithi | master | centos | 8.3 | rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} | 1 | |
dead | 6484933 | 2021-11-04 10:05:41 | 2021-11-11 01:10:02 | 2021-11-11 13:25:40 | 12:15:38 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason:
hit max job timeout |
pass | 6484934 | 2021-11-04 10:05:42 | 2021-11-11 01:13:03 | 2021-11-11 01:42:53 | 0:29:50 | 0:10:25 | 0:19:25 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
dead | 6484935 | 2021-11-04 10:05:42 | 2021-11-11 01:18:34 | 2021-11-11 13:32:10 | 12:13:36 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason:
hit max job timeout |
pass | 6484936 | 2021-11-04 10:05:43 | 2021-11-11 01:20:35 | 2021-11-11 02:04:03 | 0:43:28 | 0:36:44 | 0:06:44 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
pass | 6484937 | 2021-11-04 10:05:44 | 2021-11-11 01:20:35 | 2021-11-11 01:47:56 | 0:27:21 | 0:10:53 | 0:16:28 | smithi | master | ubuntu | 20.04 | rados/cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_20.04} 2-node-mgr orchestrator_cli} | 2 | |
pass | 6484938 | 2021-11-04 10:05:45 | 2021-11-11 01:25:56 | 2021-11-11 02:01:31 | 0:35:35 | 0:23:12 | 0:12:23 | smithi | master | centos | 8.3 | rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 2 | |
pass | 6484939 | 2021-11-04 10:05:46 | 2021-11-11 01:28:17 | 2021-11-11 01:49:24 | 0:21:07 | 0:10:39 | 0:10:28 | smithi | master | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} tasks/rados_striper} | 2 | |
pass | 6484940 | 2021-11-04 10:05:47 | 2021-11-11 01:28:17 | 2021-11-11 02:04:39 | 0:36:22 | 0:24:30 | 0:11:52 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
fail | 6484941 | 2021-11-04 10:05:47 | 2021-11-11 01:28:57 | 2021-11-11 01:59:16 | 0:30:19 | 0:18:39 | 0:11:40 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
pass | 6484942 | 2021-11-04 10:05:48 | 2021-11-11 01:29:58 | 2021-11-11 01:52:04 | 0:22:06 | 0:11:43 | 0:10:23 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6484943 | 2021-11-04 10:05:49 | 2021-11-11 01:30:38 | 2021-11-11 02:00:25 | 0:29:47 | 0:18:49 | 0:10:58 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi166 with status 5: 'sudo systemctl stop ceph-0ef3b806-4291-11ec-8c2c-001a4aab830c@mon.b' |
pass | 6484944 | 2021-11-04 10:05:50 | 2021-11-11 01:30:58 | 2021-11-11 02:10:28 | 0:39:30 | 0:27:28 | 0:12:02 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} | 2 | |
pass | 6484945 | 2021-11-04 10:05:51 | 2021-11-11 01:33:39 | 2021-11-11 01:54:03 | 0:20:24 | 0:09:17 | 0:11:07 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/sample_radosbench} | 1 | |
fail | 6484946 | 2021-11-04 10:05:51 | 2021-11-11 01:33:39 | 2021-11-11 01:55:10 | 0:21:31 | 0:12:28 | 0:09:03 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_3.0} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason:
Command failed on smithi085 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid dc426474-4291-11ec-8c2c-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
pass | 6484947 | 2021-11-04 10:05:52 | 2021-11-11 01:36:20 | 2021-11-11 01:58:08 | 0:21:48 | 0:06:38 | 0:15:10 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/watch-notify-same-primary mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6484948 | 2021-11-04 10:05:53 | 2021-11-11 01:42:21 | 2021-11-11 02:22:14 | 0:39:53 | 0:32:42 | 0:07:11 | smithi | master | rhel | 8.4 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-hybrid supported-random-distro$/{rhel_8} tasks/progress} | 2 | |
dead | 6484949 | 2021-11-04 10:05:54 | 2021-11-11 01:43:02 | 2021-11-11 13:51:35 | 12:08:33 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |||
Failure Reason:
hit max job timeout |
pass | 6484950 | 2021-11-04 10:05:55 | 2021-11-11 01:43:02 | 2021-11-11 02:07:50 | 0:24:48 | 0:13:58 | 0:10:50 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
pass | 6484951 | 2021-11-04 10:05:56 | 2021-11-11 01:44:32 | 2021-11-11 02:15:44 | 0:31:12 | 0:24:27 | 0:06:45 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
dead | 6484952 | 2021-11-04 10:05:57 | 2021-11-11 01:44:33 | 2021-11-11 13:57:29 | 12:12:56 | smithi | master | centos | 8.3 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 | |||
Failure Reason:
hit max job timeout |
pass | 6484953 | 2021-11-04 10:05:57 | 2021-11-11 01:44:53 | 2021-11-11 02:34:14 | 0:49:21 | 0:35:36 | 0:13:45 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-radosbench} | 2 | |
pass | 6484954 | 2021-11-04 10:05:58 | 2021-11-11 01:48:04 | 2021-11-11 02:12:34 | 0:24:30 | 0:12:36 | 0:11:54 | smithi | master | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} thrashers/force-sync-many workloads/rados_5925} | 2 | |
pass | 6484955 | 2021-11-04 10:05:59 | 2021-11-11 01:49:25 | 2021-11-11 02:32:47 | 0:43:22 | 0:28:09 | 0:15:13 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6484956 | 2021-11-04 10:06:00 | 2021-11-11 01:53:06 | 2021-11-11 02:29:59 | 0:36:53 | 0:23:56 | 0:12:57 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6484957 | 2021-11-04 10:06:01 | 2021-11-11 01:54:06 | 2021-11-11 02:29:10 | 0:35:04 | 0:25:58 | 0:09:06 | smithi | master | rhel | 8.4 | rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/mon_recovery} | 3 | |
pass | 6484958 | 2021-11-04 10:06:02 | 2021-11-11 03:15:47 | 4068 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados tasks/rados_api_tests validater/valgrind} | 2 | ||||
fail | 6484959 | 2021-11-04 10:06:02 | 2021-11-11 01:58:17 | 2021-11-11 02:30:36 | 0:32:19 | 0:20:49 | 0:11:30 | smithi | master | centos | 8.2 | rados/cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi152 with status 5: 'sudo systemctl stop ceph-6964130e-4295-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484960 | 2021-11-04 10:06:03 | 2021-11-11 01:59:18 | 2021-11-11 02:19:31 | 0:20:13 | 0:09:53 | 0:10:20 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_adoption} | 1 | |
pass | 6484961 | 2021-11-04 10:06:04 | 2021-11-11 02:00:28 | 2021-11-11 02:19:27 | 0:18:59 | 0:10:21 | 0:08:38 | smithi | master | centos | 8.stream | rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6484962 | 2021-11-04 10:06:05 | 2021-11-11 02:00:29 | 2021-11-11 02:27:34 | 0:27:05 | 0:14:00 | 0:13:05 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
pass | 6484963 | 2021-11-04 10:06:06 | 2021-11-11 02:01:39 | 2021-11-11 02:47:34 | 0:45:55 | 0:32:56 | 0:12:59 | smithi | master | centos | 8.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6484964 | 2021-11-04 10:06:06 | 2021-11-11 02:04:10 | 2021-11-11 02:35:35 | 0:31:25 | 0:24:59 | 0:06:26 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484965 | 2021-11-04 10:06:07 | 2021-11-11 02:04:40 | 2021-11-11 02:32:06 | 0:27:26 | 0:20:14 | 0:07:12 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6484966 | 2021-11-04 10:06:08 | 2021-11-11 02:04:40 | 2021-11-11 02:36:59 | 0:32:19 | 0:18:30 | 0:13:49 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi149 with status 5: 'sudo systemctl stop ceph-3091c8fe-4296-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484967 | 2021-11-04 10:06:09 | 2021-11-11 02:07:51 | 2021-11-11 02:33:27 | 0:25:36 | 0:13:36 | 0:12:00 | smithi | master | centos | 8.3 | rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{centos_8}} | 1 | |
pass | 6484968 | 2021-11-04 10:06:10 | 2021-11-11 02:10:32 | 2021-11-11 02:48:39 | 0:38:07 | 0:24:15 | 0:13:52 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache-agent-big} | 2 | |
fail | 6484969 | 2021-11-04 10:06:10 | 2021-11-11 02:12:42 | 2021-11-11 02:33:38 | 0:20:56 | 0:12:31 | 0:08:25 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi045 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=929e5fa48ebd63d8eb58ef42b902a49e13a2cd48 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 6484970 | 2021-11-04 10:06:11 | 2021-11-11 02:12:43 | 2021-11-11 02:52:38 | 0:39:55 | 0:29:30 | 0:10:25 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_big} | 2 | |
pass | 6484971 | 2021-11-04 10:06:12 | 2021-11-11 02:12:53 | 2021-11-11 02:39:01 | 0:26:08 | 0:16:30 | 0:09:38 | smithi | master | centos | 8.stream | rados/singleton/{all/backfill-toofull mon_election/connectivity msgr-failures/none msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6484972 | 2021-11-04 10:06:13 | 2021-11-11 02:14:13 | 2021-11-11 02:49:42 | 0:35:29 | 0:23:27 | 0:12:02 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484973 | 2021-11-04 10:06:14 | 2021-11-11 02:15:54 | 2021-11-11 02:44:47 | 0:28:53 | 0:12:39 | 0:16:14 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6484974 | 2021-11-04 10:06:15 | 2021-11-11 02:19:35 | 2021-11-11 02:42:59 | 0:23:24 | 0:11:13 | 0:12:11 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} | 1 | |
fail | 6484975 | 2021-11-04 10:06:15 | 2021-11-11 02:20:35 | 2021-11-11 02:51:31 | 0:30:56 | 0:19:15 | 0:11:41 | smithi | master | centos | 8.3 | rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484976 | 2021-11-04 10:06:16 | 2021-11-11 02:22:16 | 2021-11-11 02:41:54 | 0:19:38 | 0:09:57 | 0:09:41 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6484977 | 2021-11-04 10:06:17 | 2021-11-11 02:22:16 | 2021-11-11 02:57:04 | 0:34:48 | 0:20:50 | 0:13:58 | smithi | master | centos | 8.3 | rados/standalone/{supported-random-distro$/{centos_8} workloads/crush} | 1 | |
pass | 6484978 | 2021-11-04 10:06:18 | 2021-11-11 02:27:37 | 2021-11-11 03:02:47 | 0:35:10 | 0:26:20 | 0:08:50 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/cache-agent-small} | 2 | |
fail | 6484979 | 2021-11-04 10:06:19 | 2021-11-11 02:29:18 | 2021-11-11 03:02:30 | 0:33:12 | 0:22:36 | 0:10:36 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi038 with status 5: 'sudo systemctl stop ceph-8bebf76c-4299-11ec-8c2c-001a4aab830c@mon.b'
fail | 6484980 | 2021-11-04 10:06:19 | 2021-11-11 02:29:18 | 2021-11-11 03:01:32 | 0:32:14 | 0:21:41 | 0:10:33 | smithi | master | centos | 8.3 | rados/cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi190 with status 5: 'sudo systemctl stop ceph-b7815192-4299-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484981 | 2021-11-04 10:06:20 | 2021-11-11 02:30:08 | 2021-11-11 02:51:53 | 0:21:45 | 0:11:46 | 0:09:59 | smithi | master | centos | 8.stream | rados/singleton/{all/deduptool mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6484982 | 2021-11-04 10:06:21 | 2021-11-11 02:30:39 | 2021-11-11 03:08:31 | 0:37:52 | 0:22:20 | 0:15:32 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason: Command failed on smithi146 with status 5: 'sudo systemctl stop ceph-4c8f7a8e-429a-11ec-8c2c-001a4aab830c@mon.b'
fail | 6484983 | 2021-11-04 10:06:22 | 2021-11-11 02:32:49 | 2021-11-11 03:02:15 | 0:29:26 | 0:18:50 | 0:10:36 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6484984 | 2021-11-04 10:06:23 | 2021-11-11 02:32:50 | 2021-11-11 03:08:33 | 0:35:43 | 0:24:24 | 0:11:19 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps-readproxy} | 2 | |
fail | 6484985 | 2021-11-04 10:06:24 | 2021-11-11 02:33:40 | 2021-11-11 03:05:46 | 0:32:06 | 0:23:30 | 0:08:36 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi194 with status 5: 'sudo systemctl stop ceph-16d601d8-429a-11ec-8c2c-001a4aab830c@mon.b'
pass | 6484986 | 2021-11-04 10:06:24 | 2021-11-11 02:34:21 | 2021-11-11 02:56:39 | 0:22:18 | 0:11:36 | 0:10:42 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/crushdiff mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} | 1 | |
dead | 6484987 | 2021-11-04 10:06:25 | 2021-11-11 02:35:41 | 2021-11-11 02:53:02 | 0:17:21 | 0:05:28 | 0:11:53 | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-low-osd-mem-target supported-random-distro$/{ubuntu_latest} tasks/prometheus} | 2 | |
Failure Reason:
{'Failure object was': {'smithi105.front.sepia.ceph.com': {'cmd': ['/usr/bin/pip3', 'install', '-U', 'git+https://github.com/sebastian-philipp/registries-conf-ctl'], 'msg': "stdout: Collecting git+https://github.com/sebastian-philipp/registries-conf-ctl\n Cloning https://github.com/sebastian-philipp/registries-conf-ctl to ./pip-req-build-x0yytmft\n\n:stderr: Running command git clone -q https://github.com/sebastian-philipp/registries-conf-ctl /tmp/pip-req-build-x0yytmft\n fatal: unable to access 'https://github.com/sebastian-philipp/registries-conf-ctl/': Failed to connect to github.com port 443: Connection timed out\nERROR: Command errored out with exit status 128: git clone -q https://github.com/sebastian-philipp/registries-conf-ctl /tmp/pip-req-build-x0yytmft Check the logs for full command output.\n", 'invocation': {'module_args': {'name': ['git+https://github.com/sebastian-philipp/registries-conf-ctl'], 'state': 'latest', 'virtualenv_site_packages': False, 'virtualenv_command': 'virtualenv', 'editable': False, 'version': 'None', 'requirements': 'None', 'virtualenv': 'None', 'virtualenv_python': 'None', 'extra_args': 'None', 'chdir': 'None', 'executable': 'None', 'umask': 'None'}}, '_ansible_no_log': False, 'changed': False}}, 'Traceback (most recent call last)': 'File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure log.error(yaml.safe_dump(failure)) File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump return dump_all([data], stream, Dumper=SafeDumper, **kwds) File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all dumper.represent(data) File 
"/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent node = self.represent_data(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping(\'tag:yaml.org,2002:map\', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping(\'tag:yaml.org,2002:map\', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 117, in represent_mapping node_key = self.represent_data(item_key) File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data node = self.yaml_representers[None](self, data) File 
"/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined raise RepresenterError("cannot represent an object", data)', 'yaml.representer.RepresenterError': "('cannot represent an object', 'cmd')"}
pass | 6484988 | 2021-11-04 10:06:26 | 2021-11-11 02:37:02 | 2021-11-11 02:52:04 | 0:15:02 | 0:06:47 | 0:08:15 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 6484989 | 2021-11-04 10:06:27 | 2021-11-11 02:37:02 | 2021-11-11 03:00:09 | 0:23:07 | 0:12:41 | 0:10:26 | smithi | master | centos | 8.stream | rados/singleton/{all/divergent_priors mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6484990 | 2021-11-04 10:06:28 | 2021-11-11 03:21:49 | 1689 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/cache-pool-snaps} | 2 | ||||
dead | 6484991 | 2021-11-04 10:06:29 | 2021-11-11 02:42:03 | 2021-11-11 14:57:26 | 12:15:23 | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: hit max job timeout
pass | 6484992 | 2021-11-04 10:06:29 | 2021-11-11 03:16:00 | 1309 | smithi | master | centos | 8.3 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-balanced} | 2 | ||||
pass | 6484993 | 2021-11-04 10:06:30 | 2021-11-11 02:44:54 | 2021-11-11 03:33:03 | 0:48:09 | 0:35:26 | 0:12:43 | smithi | master | centos | 8.3 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/many workloads/rados_api_tests} | 2 | |
pass | 6484994 | 2021-11-04 10:06:31 | 2021-11-11 02:47:35 | 2021-11-11 03:15:12 | 0:27:37 | 0:13:57 | 0:13:40 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 6484995 | 2021-11-04 10:06:32 | 2021-11-11 02:49:46 | 2021-11-11 03:32:37 | 0:42:51 | 0:31:25 | 0:11:26 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6484996 | 2021-11-04 10:06:33 | 2021-11-11 02:51:36 | 2021-11-11 03:13:31 | 0:21:55 | 0:10:24 | 0:11:31 | smithi | master | centos | 8.stream | rados/multimon/{clusters/21 mon_election/classic msgr-failures/many msgr/async no_pools objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} tasks/mon_clock_no_skews} | 3 | |
dead | 6484997 | 2021-11-04 10:06:34 | 2021-11-11 02:52:07 | 2021-11-11 15:04:47 | 12:12:40 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |||
Failure Reason: hit max job timeout
pass | 6484998 | 2021-11-04 10:06:34 | 2021-11-11 02:52:47 | 2021-11-11 03:20:11 | 0:27:24 | 0:15:29 | 0:11:55 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados tasks/rados_cls_all validater/lockdep} | 2 | |
pass | 6484999 | 2021-11-04 10:06:35 | 2021-11-11 02:53:08 | 2021-11-11 03:32:36 | 0:39:28 | 0:23:23 | 0:16:05 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8.stream} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
pass | 6485000 | 2021-11-04 10:06:36 | 2021-11-11 02:57:08 | 2021-11-11 03:19:12 | 0:22:04 | 0:09:25 | 0:12:39 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6485001 | 2021-11-04 10:06:37 | 2021-11-11 03:00:19 | 2021-11-11 03:30:05 | 0:29:46 | 0:19:34 | 0:10:12 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds