User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
kchai | 2021-07-25 02:43:47 | 2021-07-25 03:09:19 | 2021-07-25 03:24:59 | 0:15:40 | rados | wip-kefu-testing-2021-07-24-2153 | smithi | f7cd730 | 8 | 49 | 74 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 6290624 | 2021-07-25 02:45:04 | 2021-07-25 02:45:05 | 2021-07-25 03:11:19 | 0:26:14 | | | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/redirect_set_object} | 2 |
fail | 6290625 | 2021-07-25 02:45:05 | 2021-07-25 02:45:06 | 2021-07-25 03:06:10 | 0:21:04 | 0:10:15 | 0:10:49 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: Command failed on smithi049 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:f7cd730b96380f0a18fbfd486fd4354c294b17eb -v bootstrap --fsid ba4c9ff8-ecf4-11eb-8c23-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.49 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6290626 | 2021-07-25 02:45:06 | 2021-07-25 02:45:07 | 2021-07-25 03:07:45 | 0:22:38 | 0:14:44 | 0:07:54 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
Failure Reason: Command failed on smithi188 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
dead | 6290627 | 2021-07-25 02:45:07 | 2021-07-25 02:45:07 | 2021-07-25 03:10:34 | 0:25:27 | | | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/failover} | 2 |
fail | 6290628 | 2021-07-25 02:45:08 | 2021-07-25 02:45:09 | 2021-07-25 03:03:29 | 0:18:20 | 0:08:47 | 0:09:33 | smithi | master | centos | 8.stream | rados/singleton/{all/admin-socket mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream}} | 1 | |
Failure Reason: Command failed on smithi045 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290629 | 2021-07-25 02:45:09 | 2021-07-25 02:45:09 | 2021-07-25 03:07:43 | 0:22:34 | 0:09:58 | 0:12:36 | smithi | master | centos | 8.3 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} | 2 | |
Failure Reason: Command failed on smithi148 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
dead | 6290630 | 2021-07-25 02:45:10 | 2021-07-25 02:45:10 | 2021-07-25 03:10:11 | 0:25:01 | | | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
fail | 6290631 | 2021-07-25 02:45:11 | 2021-07-25 02:45:12 | 2021-07-25 03:06:07 | 0:20:55 | 0:10:06 | 0:10:49 | smithi | master | centos | 8.3 | rados/multimon/{clusters/9 mon_election/classic msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 3 | |
Failure Reason: Command failed on smithi134 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290632 | 2021-07-25 02:45:12 | 2021-07-25 02:45:12 | 2021-07-25 03:08:32 | 0:23:20 | 0:08:56 | 0:14:24 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed on smithi185 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
dead | 6290633 | 2021-07-25 02:45:13 | 2021-07-25 02:45:14 | 2021-07-25 03:09:46 | 0:24:32 | | | smithi | master | ubuntu | 20.04 | rados/cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_20.04} 2-node-mgr orchestrator_cli} | 2 |
fail | 6290634 | 2021-07-25 02:45:14 | 2021-07-25 02:45:14 | 2021-07-25 03:08:16 | 0:23:02 | 0:12:42 | 0:10:20 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: Command failed on smithi029 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:f7cd730b96380f0a18fbfd486fd4354c294b17eb -v bootstrap --fsid 2f305cce-ecf5-11eb-8c23-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.29 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6290635 | 2021-07-25 02:45:14 | 2021-07-25 02:45:15 | 2021-07-25 03:03:26 | 0:18:11 | 0:08:46 | 0:09:25 | smithi | master | centos | 8.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream}} | 1 | |
Failure Reason: Command failed on smithi043 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290636 | 2021-07-25 02:45:15 | 2021-07-25 02:45:16 | 2021-07-25 03:04:20 | 0:19:04 | 0:09:38 | 0:09:26 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/set-chunks-read} | 2 | |
Failure Reason: Command failed on smithi166 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290637 | 2021-07-25 02:45:16 | 2021-07-25 02:45:17 | 2021-07-25 03:07:56 | 0:22:39 | 0:08:28 | 0:14:11 | smithi | master | centos | 8.stream | rados/objectstore/{backends/keyvaluedb supported-random-distro$/{centos_8.stream}} | 1 | |
Failure Reason: Command failed on smithi109 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290638 | 2021-07-25 02:45:17 | 2021-07-25 02:45:18 | 2021-07-25 03:05:18 | 0:20:00 | 0:10:09 | 0:09:51 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{distro/centos_8.3_kubic_stable fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi058 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:f7cd730b96380f0a18fbfd486fd4354c294b17eb -v bootstrap --fsid dea0dd38-ecf4-11eb-8c23-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.58 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
pass | 6290639 | 2021-07-25 02:45:18 | 2021-07-25 02:45:19 | 2021-07-25 03:06:09 | 0:20:50 | 0:09:42 | 0:11:08 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/deduptool mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6290640 | 2021-07-25 02:45:19 | 2021-07-25 02:45:19 | 2021-07-25 03:05:09 | 0:19:50 | 0:12:42 | 0:07:08 | smithi | master | rhel | 8.4 | rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/one workloads/rados_mon_workunits} | 2 | |
Failure Reason: Command failed on smithi168 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290641 | 2021-07-25 02:45:20 | 2021-07-25 02:45:21 | 2021-07-25 03:04:50 | 0:19:29 | 0:08:18 | 0:11:11 | smithi | master | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/mon} | 1 | |
Failure Reason: Command failed on smithi160 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290642 | 2021-07-25 02:45:21 | 2021-07-25 02:45:21 | 2021-07-25 03:07:52 | 0:22:31 | 0:12:52 | 0:09:39 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.3_kubic_stable} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: Command failed on smithi107 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:f7cd730b96380f0a18fbfd486fd4354c294b17eb -v bootstrap --fsid 2564e32c-ecf5-11eb-8c23-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.107 --single-host-defaults --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
pass | 6290643 | 2021-07-25 02:45:22 | 2021-07-25 02:45:23 | 2021-07-25 03:07:02 | 0:21:39 | 0:10:35 | 0:11:04 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6290644 | 2021-07-25 02:45:23 | 2021-07-25 02:45:23 | 2021-07-25 03:04:42 | 0:19:19 | 0:09:43 | 0:09:36 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason: Command failed on smithi154 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290645 | 2021-07-25 02:45:24 | 2021-07-25 02:45:25 | 2021-07-25 03:05:34 | 0:20:09 | 0:07:33 | 0:12:36 | smithi | master | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} tasks/scrub_test} | 2 | |
Failure Reason: Command failed on smithi123 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'
dead | 6290646 | 2021-07-25 02:45:25 | 2021-07-25 02:45:25 | 2021-07-25 03:11:36 | 0:26:11 | | | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/classic} | 2 |
dead | 6290647 | 2021-07-25 02:45:26 | 2021-07-25 02:45:26 | 2021-07-25 03:09:56 | 0:24:30 | | | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} | 1 |
fail | 6290648 | 2021-07-25 02:45:26 | 2021-07-25 02:45:28 | 2021-07-25 03:05:47 | 0:20:19 | 0:08:54 | 0:11:25 | smithi | master | centos | 8.stream | rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8.stream}} | 1 | |
Failure Reason: Command failed on smithi061 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290649 | 2021-07-25 02:45:27 | 2021-07-25 02:45:28 | 2021-07-25 03:05:45 | 0:20:17 | 0:09:32 | 0:10:45 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_adoption} | 1 | |
Failure Reason: Command failed on smithi167 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
dead | 6290650 | 2021-07-25 02:45:28 | 2021-07-25 02:45:29 | 2021-07-25 03:10:27 | 0:24:58 | | | smithi | master | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 |
fail | 6290651 | 2021-07-25 02:45:29 | 2021-07-25 02:45:30 | 2021-07-25 03:05:36 | 0:20:06 | 0:09:39 | 0:10:27 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
Failure Reason: Command failed on smithi143 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290652 | 2021-07-25 02:45:30 | 2021-07-25 02:45:31 | 2021-07-25 03:09:08 | 0:23:37 | 0:08:45 | 0:14:52 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/small-objects-balanced} | 2 | |
Failure Reason: Command failed on smithi076 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290653 | 2021-07-25 02:45:31 | 2021-07-25 02:45:31 | 2021-07-25 03:10:41 | 0:25:10 | 0:13:44 | 0:11:26 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason: Command failed on smithi094 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:f7cd730b96380f0a18fbfd486fd4354c294b17eb -v bootstrap --fsid 6615c83c-ecf5-11eb-8c23-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.94 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
pass | 6290654 | 2021-07-25 02:45:32 | 2021-07-25 02:45:33 | 2021-07-25 03:06:14 | 0:20:41 | 0:09:35 | 0:11:06 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6290655 | 2021-07-25 02:45:33 | 2021-07-25 02:45:33 | 2021-07-25 03:05:11 | 0:19:38 | 0:09:47 | 0:09:51 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: Command failed on smithi112 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
pass | 6290656 | 2021-07-25 02:45:34 | 2021-07-25 02:45:35 | 2021-07-25 03:05:01 | 0:19:26 | 0:10:51 | 0:08:35 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/divergent_priors2 mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 6290657 | 2021-07-25 02:45:35 | 2021-07-25 02:45:35 | 2021-07-25 03:11:10 | 0:25:35 | | | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/basic 3-final} | 2 |
dead | 6290658 | 2021-07-25 02:45:36 | 2021-07-25 02:45:36 | 2021-07-25 03:09:59 | 0:24:23 | | | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/small-objects-localized} | 2 |
dead | 6290659 | 2021-07-25 02:45:37 | 2021-07-25 02:45:38 | 2021-07-25 03:11:20 | 0:25:42 | | | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-lz4 supported-random-distro$/{ubuntu_latest} tasks/insights} | 2 |
dead | 6290660 | 2021-07-25 02:45:38 | 2021-07-25 02:45:38 | 2021-07-25 03:11:27 | 0:25:49 | | | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 |
fail | 6290661 | 2021-07-25 02:45:39 | 2021-07-25 02:45:40 | 2021-07-25 03:10:09 | 0:24:29 | 0:10:35 | 0:13:54 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi052 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290662 | 2021-07-25 02:45:40 | 2021-07-25 02:45:40 | 2021-07-25 03:09:13 | 0:23:33 | 0:08:49 | 0:14:44 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed on smithi059 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290663 | 2021-07-25 02:45:41 | 2021-07-25 02:45:41 | 2021-07-25 03:05:41 | 0:20:00 | 0:13:14 | 0:06:46 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/minsize_recovery thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
Failure Reason: Command failed on smithi074 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290664 | 2021-07-25 02:45:42 | 2021-07-25 02:45:43 | 2021-07-25 03:06:58 | 0:21:15 | 0:13:23 | 0:07:52 | smithi | master | rhel | 8.3 | rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi149 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:f7cd730b96380f0a18fbfd486fd4354c294b17eb -v bootstrap --fsid 1f00f3ae-ecf5-11eb-8c23-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.149 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6290665 | 2021-07-25 02:45:42 | 2021-07-25 02:45:43 | 2021-07-25 03:07:46 | 0:22:03 | 0:10:47 | 0:11:16 | smithi | master | centos | 8.3 | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/mon_recovery} | 3 | |
Failure Reason: Command failed on smithi087 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290666 | 2021-07-25 02:45:43 | 2021-07-25 02:45:44 | 2021-07-25 03:06:27 | 0:20:43 | 0:08:21 | 0:12:22 | smithi | master | centos | 8.stream | rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream}} | 1 | |
Failure Reason: Command failed on smithi078 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290667 | 2021-07-25 02:45:44 | 2021-07-25 02:45:45 | 2021-07-25 03:05:44 | 0:19:59 | 0:13:24 | 0:06:35 | smithi | master | rhel | 8.4 | rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason: Command failed on smithi035 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290668 | 2021-07-25 02:45:45 | 2021-07-25 02:45:46 | 2021-07-25 03:08:56 | 0:23:10 | 0:10:56 | 0:12:14 | smithi | master | centos | 8.3 | rados/cephadm/with-work/{0-distro/centos_8.3_kubic_stable fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi192 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290669 | 2021-07-25 02:45:46 | 2021-07-25 02:45:47 | 2021-07-25 03:02:52 | 0:17:05 | 0:08:59 | 0:08:06 | smithi | master | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/libcephsqlite} | 2 | |
Failure Reason: Command failed on smithi187 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'
fail | 6290670 | 2021-07-25 02:45:47 | 2021-07-25 02:45:47 | 2021-07-25 03:08:50 | 0:23:03 | 0:09:01 | 0:14:02 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream} thrashers/none thrashosds-health workloads/small-objects} | 2 | |
Failure Reason: Command failed on smithi093 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
dead | 6290671 | 2021-07-25 02:45:48 | 2021-07-25 02:45:48 | 2021-07-25 03:09:46 | 0:23:58 | | | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} | 1 |
pass | 6290672 | 2021-07-25 02:45:49 | 2021-07-25 02:45:49 | 2021-07-25 03:06:28 | 0:20:39 | 0:08:20 | 0:12:19 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6290673 | 2021-07-25 02:45:51 | 2021-07-25 02:45:52 | 2021-07-25 03:09:06 | 0:23:14 | 0:10:08 | 0:13:06 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason: Command failed on smithi027 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:f7cd730b96380f0a18fbfd486fd4354c294b17eb -v bootstrap --fsid 5795e03a-ecf5-11eb-8c23-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.27 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
dead | 6290674 | 2021-07-25 02:45:52 | 2021-07-25 02:45:52 | 2021-07-25 03:11:34 | 0:25:42 | | | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 |
dead | 6290675 | 2021-07-25 02:45:53 | 2021-07-25 02:45:55 | 2021-07-25 03:10:24 | 0:24:29 | | | smithi | master | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/snaps-few-objects} | 2 |
fail | 6290676 | 2021-07-25 02:45:54 | 2021-07-25 02:45:55 | 2021-07-25 03:10:59 | 0:25:04 | 0:14:03 | 0:11:01 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/fastclose rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
Failure Reason: Command failed on smithi118 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290677 | 2021-07-25 02:45:55 | 2021-07-25 02:45:56 | 2021-07-25 03:06:35 | 0:20:39 | 0:13:41 | 0:06:58 | smithi | master | rhel | 8.4 | rados/singleton/{all/ec-lost-unfound mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason: Command failed on smithi001 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
dead | 6290678 | 2021-07-25 02:45:56 | 2021-07-25 02:45:57 | 2021-07-25 03:10:21 | 0:24:24 | | | smithi | master | centos | 8.3 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/connectivity} | 2 |
fail | 6290679 | 2021-07-25 02:45:57 | 2021-07-25 02:45:57 | 2021-07-25 03:09:02 | 0:23:05 | 0:09:59 | 0:13:06 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados tasks/mon_recovery validater/lockdep} | 2 | |
Failure Reason: Command failed on smithi106 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
dead | 6290680 | 2021-07-25 02:45:58 | 2021-07-25 02:45:59 | 2021-07-25 03:09:41 | 0:23:42 | | | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd-backfill} | 1 |
fail | 6290681 | 2021-07-25 02:45:59 | 2021-07-25 02:45:59 | 2021-07-25 03:08:32 | 0:22:33 | 0:09:13 | 0:13:20 | smithi | master | centos | 8.3 | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 | |
Failure Reason: Command failed on smithi065 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
pass | 6290682 | 2021-07-25 02:46:00 | 2021-07-25 02:46:01 | 2021-07-25 03:07:42 | 0:21:41 | 0:09:06 | 0:12:35 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_cephadm_repos} | 1 | |
dead | 6290683 | 2021-07-25 02:46:01 | 2021-07-25 02:46:01 | 2021-07-25 03:11:06 | 0:25:05 | | | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/snaps-few-objects-balanced} | 2 |
fail | 6290684 | 2021-07-25 02:46:02 | 2021-07-25 02:46:03 | 2021-07-25 03:07:43 | 0:21:40 | 0:10:30 | 0:11:10 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
Failure Reason: Command failed on smithi133 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290685 | 2021-07-25 02:46:03 | 2021-07-25 02:46:03 | 2021-07-25 03:09:01 | 0:22:58 | 0:08:50 | 0:14:08 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
Failure Reason: Command failed on smithi003 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290686 | 2021-07-25 02:46:03 | 2021-07-25 02:46:04 | 2021-07-25 03:07:37 | 0:21:33 | 0:13:37 | 0:07:56 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason: Command failed on smithi026 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290687 | 2021-07-25 02:46:04 | 2021-07-25 02:46:05 | 2021-07-25 03:10:20 | 0:24:15 | 0:13:47 | 0:10:28 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason: Command failed on smithi153 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:f7cd730b96380f0a18fbfd486fd4354c294b17eb -v bootstrap --fsid 5c45c348-ecf5-11eb-8c23-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.153 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 6290688 | 2021-07-25 02:46:05 | 2021-07-25 02:46:06 | 2021-07-25 03:05:39 | 0:19:33 | 0:12:25 | 0:07:08 | smithi | master | rhel | 8.4 | rados/singleton/{all/erasure-code-nonregression mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
Command failed on smithi099 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
dead | 6290689 | 2021-07-25 02:46:06 | 2021-07-25 02:46:07 | 2021-07-25 03:10:29 | 0:24:22 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} | 3 | |||
fail | 6290690 | 2021-07-25 02:46:07 | 2021-07-25 02:46:08 | 2021-07-25 03:10:09 | 0:24:01 | 0:10:16 | 0:13:45 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
Command failed on smithi063 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
dead | 6290691 | 2021-07-25 02:46:08 | 2021-07-25 02:46:08 | 2021-07-25 03:11:26 | 0:25:18 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/mirror 3-final} | 2 | |||
dead | 6290692 | 2021-07-25 02:46:09 | 2021-07-25 02:46:10 | 2021-07-25 03:11:22 | 0:25:12 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/snaps-few-objects-localized} | 2 | |||
fail | 6290693 | 2021-07-25 02:46:10 | 2021-07-25 02:46:10 | 2021-07-25 03:05:45 | 0:19:35 | 0:08:37 | 0:10:58 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |
Failure Reason:
Command failed on smithi096 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290694 | 2021-07-25 02:46:11 | 2021-07-25 02:46:12 | 2021-07-25 03:09:55 | 0:23:43 | 0:13:45 | 0:09:58 | smithi | master | rhel | 8.4 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-snappy supported-random-distro$/{rhel_8} tasks/module_selftest} | 2 | |
Failure Reason:
Command failed on smithi170 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
dead | 6290695 | 2021-07-25 02:46:12 | 2021-07-25 02:46:12 | 2021-07-25 03:10:43 | 0:24:31 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
dead | 6290696 | 2021-07-25 02:46:13 | 2021-07-25 02:46:14 | 2021-07-25 03:10:21 | 0:24:07 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |||
pass | 6290697 | 2021-07-25 02:46:14 | 2021-07-25 02:46:14 | 2021-07-25 03:08:11 | 0:21:57 | 0:11:36 | 0:10:21 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} | 1 | |
fail | 6290698 | 2021-07-25 02:46:15 | 2021-07-25 02:46:16 | 2021-07-25 03:05:02 | 0:18:46 | 0:08:32 | 0:10:14 | smithi | master | centos | 8.3 | rados/singleton/{all/lost-unfound-delete mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
Command failed on smithi114 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290699 | 2021-07-25 02:46:16 | 2021-07-25 02:46:16 | 2021-07-25 03:07:03 | 0:20:47 | 0:07:55 | 0:12:52 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi181 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'
dead | 6290700 | 2021-07-25 02:46:17 | 2021-07-25 02:46:18 | 2021-07-25 03:10:02 | 0:23:44 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |||
fail | 6290701 | 2021-07-25 02:46:18 | 2021-07-25 02:46:18 | 2021-07-25 03:05:49 | 0:19:31 | 0:09:31 | 0:10:00 | smithi | master | centos | 8.3 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-radosbench} | 2 | |
Failure Reason:
Command failed on smithi092 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6290702 | 2021-07-25 02:46:19 | 2021-07-25 02:46:20 | 2021-07-25 03:04:45 | 0:18:25 | 0:08:41 | 0:09:44 | smithi | master | centos | 8.stream | rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8.stream}} | 1 | |
Failure Reason:
Command failed on smithi022 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
pass | 6290703 | 2021-07-25 02:46:20 | 2021-07-25 02:46:20 | 2021-07-25 03:09:16 | 0:22:56 | 0:08:54 | 0:14:02 | smithi | master | ubuntu | 20.04 | rados/multimon/{clusters/3 mon_election/classic msgr-failures/many msgr/async no_pools objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} | 2 | |
dead | 6290704 | 2021-07-25 02:46:21 | 2021-07-25 02:46:21 | 2021-07-25 03:10:25 | 0:24:04 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_orch_cli} | 1 | |||
dead | 6290705 | 2021-07-25 02:46:22 | 2021-07-25 02:53:12 | 2021-07-25 03:10:27 | 0:17:15 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |||
dead | 6290706 | 2021-07-25 02:46:23 | 2021-07-25 02:53:12 | 2021-07-25 03:10:57 | 0:17:45 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/snaps-few-objects} | 2 | |||
dead | 6290707 | 2021-07-25 02:46:23 | 2021-07-25 03:03:04 | 2021-07-25 03:11:00 | 0:07:56 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/nfs-ingress 3-final} | 2 | |||
dead | 6290708 | 2021-07-25 02:46:24 | 2021-07-25 03:03:42 | 2021-07-25 03:10:56 | 0:07:14 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/lost-unfound mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |||
dead | 6290709 | 2021-07-25 02:46:25 | 2021-07-25 03:03:51 | 2021-07-25 03:11:19 | 0:07:28 | smithi | master | centos | 8.3 | rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/sync workloads/pool-create-delete} | 2 | |||
dead | 6290710 | 2021-07-25 02:46:26 | 2021-07-25 03:04:32 | 2021-07-25 03:24:01 | 0:19:29 | smithi | master | rhel | 8.3 | rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290711 | 2021-07-25 02:46:27 | 2021-07-25 03:04:53 | 2021-07-25 03:09:45 | 0:04:52 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_kubic_stable debug/mgr mon_election/classic random-objectstore$/{filestore-xfs} tasks/e2e} | 2 | |||
Failure Reason:
Error reimaging machines: SSH connection to smithi154 was lost: "while [ ! -e '/.cephlab_net_configured' ]; do sleep 5; done"
dead | 6290712 | 2021-07-25 02:46:28 | 2021-07-25 03:04:54 | 2021-07-25 03:24:13 | 0:19:19 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/rados_api_tests validater/valgrind} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290713 | 2021-07-25 02:46:29 | 2021-07-25 03:05:04 | 2021-07-25 03:09:47 | 0:04:43 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/nfs-ingress2 3-final} | 2 | |||
Failure Reason:
Error reimaging machines: SSH connection to smithi112 was lost: "while [ ! -e '/.cephlab_net_configured' ]; do sleep 5; done"
dead | 6290714 | 2021-07-25 02:46:30 | 2021-07-25 03:05:15 | 2021-07-25 03:09:47 | 0:04:32 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |||
Failure Reason:
Error reimaging machines: SSH connection to smithi168 was lost: "while [ ! -e '/.cephlab_net_configured' ]; do sleep 5; done"
dead | 6290715 | 2021-07-25 02:46:31 | 2021-07-25 03:05:15 | 2021-07-25 03:24:01 | 0:18:46 | smithi | master | rhel | 8.4 | rados/singleton/{all/max-pg-per-osd.from-mon mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290716 | 2021-07-25 02:46:32 | 2021-07-25 03:05:25 | 2021-07-25 03:23:45 | 0:18:20 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290717 | 2021-07-25 02:46:33 | 2021-07-25 03:05:35 | 2021-07-25 03:24:27 | 0:18:52 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/write_fadvise_dontneed} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290718 | 2021-07-25 02:46:33 | 2021-07-25 03:05:46 | 2021-07-25 03:23:48 | 0:18:02 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290719 | 2021-07-25 02:46:34 | 2021-07-25 03:05:46 | 2021-07-25 03:23:42 | 0:17:56 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290720 | 2021-07-25 02:46:35 | 2021-07-25 03:05:47 | 2021-07-25 03:24:54 | 0:19:07 | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} | 1 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290721 | 2021-07-25 02:46:36 | 2021-07-25 03:05:47 | 2021-07-25 03:24:33 | 0:18:46 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{distro/centos_8.3_kubic_stable fixed-2 mon_election/connectivity start} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290722 | 2021-07-25 02:46:37 | 2021-07-25 03:05:57 | 2021-07-25 03:24:12 | 0:18:15 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} | 1 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290723 | 2021-07-25 02:46:38 | 2021-07-25 03:05:58 | 2021-07-25 03:24:39 | 0:18:41 | smithi | master | centos | 8.stream | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream}} | 1 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290724 | 2021-07-25 02:46:39 | 2021-07-25 03:05:58 | 2021-07-25 03:24:02 | 0:18:04 | smithi | master | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_cls_all} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290725 | 2021-07-25 02:46:40 | 2021-07-25 03:06:09 | 2021-07-25 03:24:01 | 0:17:52 | smithi | master | centos | 8.3 | rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.3_kubic_stable} 1-start 2-services/rgw 3-final} | 1 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290726 | 2021-07-25 02:46:41 | 2021-07-25 03:06:10 | 2021-07-25 03:24:03 | 0:17:53 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290727 | 2021-07-25 02:46:42 | 2021-07-25 03:06:20 | 2021-07-25 03:24:29 | 0:18:09 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290728 | 2021-07-25 02:46:43 | 2021-07-25 03:06:20 | 2021-07-25 03:24:00 | 0:17:40 | smithi | master | centos | 8.3 | rados/singleton/{all/max-pg-per-osd.from-primary mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290729 | 2021-07-25 02:46:44 | 2021-07-25 03:06:21 | 2021-07-25 03:24:40 | 0:18:19 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/classic} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290730 | 2021-07-25 02:46:45 | 2021-07-25 03:06:31 | 2021-07-25 03:22:05 | 0:15:34 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290731 | 2021-07-25 02:46:45 | 2021-07-25 03:07:01 | 2021-07-25 03:24:40 | 0:17:39 | smithi | master | rhel | 8.4 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-zlib supported-random-distro$/{rhel_8} tasks/progress} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 6290732 | 2021-07-25 02:46:46 | 2021-07-25 03:07:12 | 2021-07-25 03:22:57 | 0:15:45 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290733 | 2021-07-25 02:46:47 | 2021-07-25 03:07:52 | 2021-07-25 03:22:58 | 0:15:06 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_adoption} | 1 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290734 | 2021-07-25 02:46:48 | 2021-07-25 03:07:52 | 2021-07-25 03:23:01 | 0:15:09 | smithi | master | centos | 8.stream | rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} | 1 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290735 | 2021-07-25 02:46:49 | 2021-07-25 03:07:52 | 2021-07-25 03:22:57 | 0:15:05 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-balanced} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290736 | 2021-07-25 02:46:50 | 2021-07-25 03:07:53 | 2021-07-25 03:22:57 | 0:15:04 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs 3-final} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290737 | 2021-07-25 02:46:51 | 2021-07-25 03:07:53 | 2021-07-25 03:23:02 | 0:15:09 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/max-pg-per-osd.from-replica mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} | 1 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290738 | 2021-07-25 02:46:52 | 2021-07-25 03:07:53 | 2021-07-25 03:23:02 | 0:15:09 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290739 | 2021-07-25 02:46:53 | 2021-07-25 03:07:53 | 2021-07-25 03:23:02 | 0:15:09 | smithi | master | centos | 8.3 | rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290740 | 2021-07-25 02:46:54 | 2021-07-25 03:07:54 | 2021-07-25 03:23:03 | 0:15:09 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls} | 3 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290741 | 2021-07-25 02:46:55 | 2021-07-25 03:07:55 | 2021-07-25 03:23:20 | 0:15:25 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/nfs2 3-final} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290742 | 2021-07-25 02:46:55 | 2021-07-25 03:08:16 | 2021-07-25 03:23:31 | 0:15:15 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-agent-big} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290743 | 2021-07-25 02:46:56 | 2021-07-25 03:08:26 | 2021-07-25 03:23:34 | 0:15:08 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_cephadm} | 1 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290744 | 2021-07-25 02:46:57 | 2021-07-25 03:08:26 | 2021-07-25 03:23:41 | 0:15:15 | smithi | master | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} thrashers/force-sync-many workloads/rados_5925} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290745 | 2021-07-25 02:46:58 | 2021-07-25 03:08:36 | 2021-07-25 03:24:05 | 0:15:29 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} | 1 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290746 | 2021-07-25 02:46:59 | 2021-07-25 03:08:57 | 2021-07-25 03:24:05 | 0:15:08 | smithi | master | centos | 8.3 | rados/singleton/{all/mon-auth-caps mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} | 1 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290747 | 2021-07-25 02:47:00 | 2021-07-25 03:08:57 | 2021-07-25 03:24:05 | 0:15:08 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290748 | 2021-07-25 02:47:01 | 2021-07-25 03:08:57 | 2021-07-25 03:24:11 | 0:15:14 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290749 | 2021-07-25 02:47:02 | 2021-07-25 03:09:07 | 2021-07-25 03:24:11 | 0:15:04 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados tasks/rados_cls_all validater/lockdep} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290750 | 2021-07-25 02:47:03 | 2021-07-25 03:09:08 | 2021-07-25 03:24:12 | 0:15:04 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/rgw-ingress 3-final} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290751 | 2021-07-25 02:47:04 | 2021-07-25 03:09:08 | 2021-07-25 03:24:30 | 0:15:22 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-agent-small} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290752 | 2021-07-25 02:47:05 | 2021-07-25 03:09:18 | 2021-07-25 03:24:22 | 0:15:04 | smithi | master | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream} tasks/rados_python} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6290753 | 2021-07-25 02:47:06 | 2021-07-25 03:09:19 | 2021-07-25 03:24:59 | 0:15:40 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
dead | 6290857 | 2021-07-25 02:48:42 | 2021-07-25 02:48:42 | smithi | master | centos | 8.stream | rados/singleton/{all/radostool mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream}} | — |