User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
lflores | 2022-12-15 16:40:54 | 2022-12-15 16:48:11 | 2022-12-15 20:09:49 | 3:21:38 | rados | wip-yuri3-testing-2022-12-14-0855-pacific | smithi | 2524708 | 28 | 32 | 51 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 7119749 | 2022-12-15 16:43:39 | 2022-12-15 16:46:49 | 2022-12-15 16:49:51 | 0:03:02 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: Error reimaging machines: Failed to power on smithi115
dead | 7119752 | 2022-12-15 16:43:55 | 2022-12-15 16:47:15 | 2022-12-15 16:58:08 | 0:10:53 | 0:03:42 | 0:07:11 | smithi | main | rhel | 8.4 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/sync workloads/rados_mon_osdmap_prune} | 2 | |
Failure Reason:
{'smithi203.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}, 'smithi012.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}} |
pass | 7119755 | 2022-12-15 16:44:06 | 2022-12-15 16:48:11 | 2022-12-15 17:16:31 | 0:28:20 | 0:15:53 | 0:12:27 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 7119756 | 2022-12-15 16:44:16 | 2022-12-15 16:50:33 | 2022-12-15 17:08:49 | 0:18:16 | 0:05:30 | 0:12:46 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi008 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
fail | 7119759 | 2022-12-15 16:44:32 | 2022-12-15 16:51:02 | 2022-12-15 17:12:43 | 0:21:41 | 0:07:59 | 0:13:42 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason: Command failed on smithi115 with status 1: 'sudo kubeadm init --node-name smithi115 --token abcdef.rb088tgj6wrvheaq --pod-network-cidr 10.251.144.0/21'
fail | 7119766 | 2022-12-15 16:44:47 | 2022-12-15 16:51:03 | 2022-12-15 17:12:59 | 0:21:56 | 0:07:56 | 0:14:00 | smithi | main | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason: Command failed on smithi140 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
fail | 7119767 | 2022-12-15 16:44:56 | 2022-12-15 16:53:59 | 2022-12-15 17:15:04 | 0:21:05 | 0:11:26 | 0:09:39 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi090 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
pass | 7119771 | 2022-12-15 16:45:12 | 2022-12-15 16:54:01 | 2022-12-15 17:23:41 | 0:29:40 | 0:18:10 | 0:11:30 | smithi | main | centos | 8.stream | rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 2 | |
fail | 7119774 | 2022-12-15 16:45:23 | 2022-12-15 16:56:50 | 2022-12-15 17:15:06 | 0:18:16 | 0:05:11 | 0:13:05 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason: Command failed on smithi038 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
fail | 7119778 | 2022-12-15 16:45:30 | 2022-12-15 16:58:43 | 2022-12-15 17:17:43 | 0:19:00 | 0:05:29 | 0:13:31 | smithi | main | ubuntu | 20.04 | rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 | |
Failure Reason: Command failed on smithi106 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
pass | 7119781 | 2022-12-15 16:45:41 | 2022-12-15 16:59:15 | 2022-12-15 20:09:49 | 3:10:34 | 2:58:00 | 0:12:34 | smithi | main | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/classic thrashosds-health ubuntu_18.04} | 4 | |
pass | 7119782 | 2022-12-15 16:45:46 | 2022-12-15 17:37:14 | 1666 | smithi | main | rhel | 8.4 | rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{rhel_8} workloads/ceph_iscsi} | 3 | ||||
pass | 7119787 | 2022-12-15 16:45:52 | 2022-12-15 16:59:46 | 2022-12-15 17:27:40 | 0:27:54 | 0:16:06 | 0:11:48 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
dead | 7119789 | 2022-12-15 16:45:56 | 2022-12-15 17:02:41 | 2022-12-15 17:10:06 | 0:07:25 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} | 2 | |||
Failure Reason: Error reimaging machines: Failed to power on smithi083
dead | 7119793 | 2022-12-15 16:46:13 | 2022-12-15 17:08:08 | 2022-12-15 17:10:45 | 0:02:37 | smithi | main | centos | 8.stream | rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |||
Failure Reason: Error reimaging machines: Failed to power on smithi039
pass | 7119794 | 2022-12-15 16:46:19 | 2022-12-15 17:08:08 | 2022-12-15 17:45:03 | 0:36:55 | 0:26:28 | 0:10:27 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/small-objects-balanced} | 2 | |
fail | 7119797 | 2022-12-15 16:46:25 | 2022-12-15 17:09:14 | 2022-12-15 17:27:40 | 0:18:26 | 0:04:56 | 0:13:30 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: Command failed on smithi137 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
pass | 7119802 | 2022-12-15 16:46:42 | 2022-12-15 17:10:29 | 2022-12-15 17:46:08 | 0:35:39 | 0:21:29 | 0:14:10 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 7119804 | 2022-12-15 16:46:58 | 2022-12-15 17:11:19 | 2022-12-15 17:38:25 | 0:27:06 | 0:18:41 | 0:08:25 | smithi | main | centos | 8.stream | rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7119806 | 2022-12-15 16:47:14 | 2022-12-15 17:11:20 | 2022-12-15 17:33:21 | 0:22:01 | 0:07:19 | 0:14:42 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason: Command failed on smithi039 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
fail | 7119809 | 2022-12-15 16:47:31 | 2022-12-15 17:12:14 | 2022-12-15 17:31:34 | 0:19:20 | 0:08:16 | 0:11:04 | smithi | main | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: Command failed on smithi115 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
fail | 7119812 | 2022-12-15 16:47:38 | 2022-12-15 17:13:09 | 2022-12-15 17:34:15 | 0:21:06 | 0:12:10 | 0:08:56 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi170 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=25247086556727088a4e5f94004449b27369ea05 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
pass | 7119817 | 2022-12-15 16:47:48 | 2022-12-15 17:13:09 | 2022-12-15 17:57:43 | 0:44:34 | 0:33:49 | 0:10:45 | smithi | main | rhel | 8.4 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_workunits} | 2 | |
pass | 7119818 | 2022-12-15 16:47:54 | 2022-12-15 17:13:50 | 2022-12-15 18:19:14 | 1:05:24 | 0:56:18 | 0:09:06 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
fail | 7119821 | 2022-12-15 16:48:11 | 2022-12-15 17:14:04 | 2022-12-15 17:36:39 | 0:22:35 | 0:13:11 | 0:09:24 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
Command failed (workunit test post-file.sh) on smithi139 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=25247086556727088a4e5f94004449b27369ea05 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh' |
pass | 7119823 | 2022-12-15 16:48:17 | 2022-12-15 17:14:04 | 2022-12-15 17:38:00 | 0:23:56 | 0:13:31 | 0:10:25 | smithi | main | centos | 8.stream | rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 2 | |
pass | 7119825 | 2022-12-15 16:48:28 | 2022-12-15 17:14:05 | 2022-12-15 17:30:41 | 0:16:36 | 0:08:43 | 0:07:53 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 7119827 | 2022-12-15 16:48:30 | 2022-12-15 17:14:35 | 2022-12-15 17:35:26 | 0:20:51 | 0:12:17 | 0:08:34 | smithi | main | rhel | 8.4 | rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7119829 | 2022-12-15 16:48:41 | 2022-12-15 17:14:56 | 2022-12-15 17:36:31 | 0:21:35 | 0:10:05 | 0:11:30 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7119832 | 2022-12-15 16:48:47 | 2022-12-15 17:14:56 | 2022-12-15 17:35:03 | 0:20:07 | 0:06:41 | 0:13:26 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: Command failed on smithi038 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
pass | 7119834 | 2022-12-15 16:48:53 | 2022-12-15 17:15:59 | 2022-12-15 18:00:32 | 0:44:33 | 0:34:12 | 0:10:21 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
fail | 7119836 | 2022-12-15 16:48:54 | 2022-12-15 17:16:10 | 2022-12-15 17:35:09 | 0:18:59 | 0:08:08 | 0:10:51 | smithi | main | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: Command failed on smithi055 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
fail | 7119837 | 2022-12-15 16:48:55 | 2022-12-15 17:16:10 | 2022-12-15 17:33:37 | 0:17:27 | 0:05:22 | 0:12:05 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi146 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
pass | 7119838 | 2022-12-15 16:49:06 | 2022-12-15 17:16:37 | 2022-12-15 17:51:46 | 0:35:09 | 0:27:33 | 0:07:36 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
pass | 7119839 | 2022-12-15 16:49:08 | 2022-12-15 17:16:37 | 2022-12-15 17:39:47 | 0:23:10 | 0:09:42 | 0:13:28 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7119840 | 2022-12-15 16:49:09 | 2022-12-15 17:18:31 | 2022-12-15 17:39:19 | 0:20:48 | 0:10:40 | 0:10:08 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi134 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
dead | 7119841 | 2022-12-15 16:49:11 | 2022-12-15 17:19:01 | 2022-12-15 17:23:01 | 0:04:00 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} | 2 | |||
Failure Reason: Error reimaging machines: Failed to power on smithi106
dead | 7119842 | 2022-12-15 16:49:17 | 2022-12-15 17:19:45 | 2022-12-15 18:11:07 | 0:51:22 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-hybrid supported-random-distro$/{centos_8} tasks/failover} | 2 | |||
Failure Reason: Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 7119843 | 2022-12-15 16:49:27 | 2022-12-15 17:20:20 | 2022-12-15 17:34:17 | 0:13:57 | 0:04:16 | 0:09:41 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/snaps-few-objects} | 2 | |
Failure Reason:
{'smithi026.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}, 'smithi006.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}} |
dead | 7119844 | 2022-12-15 16:49:34 | 2022-12-15 17:21:49 | 2022-12-15 17:37:03 | 0:15:14 | 0:03:38 | 0:11:36 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
Failure Reason:
{'smithi196.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}, 'smithi149.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}} |
pass | 7119845 | 2022-12-15 16:49:38 | 2022-12-15 17:21:49 | 2022-12-15 18:17:09 | 0:55:20 | 0:42:12 | 0:13:08 | smithi | main | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
pass | 7119846 | 2022-12-15 16:49:50 | 2022-12-15 17:23:49 | 2022-12-15 17:51:17 | 0:27:28 | 0:16:18 | 0:11:10 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7119847 | 2022-12-15 16:49:56 | 2022-12-15 17:25:45 | 2022-12-15 17:46:51 | 0:21:06 | 0:11:38 | 0:09:28 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_striper} | 2 | |
pass | 7119848 | 2022-12-15 16:50:07 | 2022-12-15 17:26:27 | 2022-12-15 18:08:56 | 0:42:29 | 0:35:20 | 0:07:09 | smithi | main | centos | 8.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
dead | 7119849 | 2022-12-15 16:50:24 | 2022-12-15 17:26:27 | 2022-12-15 17:30:22 | 0:03:55 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |||
Failure Reason: Error reimaging machines: Failed to power on smithi137
fail | 7119850 | 2022-12-15 16:50:32 | 2022-12-15 17:28:01 | 2022-12-15 17:47:38 | 0:19:37 | 0:08:05 | 0:11:32 | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} | 1 | |
Failure Reason: Command failed on smithi033 with status 1: 'sudo kubeadm init --node-name smithi033 --token abcdef.2104j6mljg4vsd28 --pod-network-cidr 10.249.0.0/21'
fail | 7119851 | 2022-12-15 16:50:38 | 2022-12-15 17:28:02 | 2022-12-15 17:54:12 | 0:26:10 | 0:07:23 | 0:18:47 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason: Command failed on smithi016 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
fail | 7119852 | 2022-12-15 16:50:55 | 2022-12-15 17:31:45 | 2022-12-15 17:54:12 | 0:22:27 | 0:05:39 | 0:16:48 | smithi | main | rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} | 2 | |||
Failure Reason: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=25247086556727088a4e5f94004449b27369ea05
dead | 7119853 | 2022-12-15 16:51:01 | 2022-12-15 17:32:10 | 2022-12-15 17:35:55 | 0:03:45 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache-agent-big} | 2 | |||
Failure Reason: Error reimaging machines: Failed to power on smithi183
dead | 7119854 | 2022-12-15 16:51:02 | 2022-12-15 17:32:31 | 2022-12-15 17:42:38 | 0:10:07 | 0:02:09 | 0:07:58 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
Failure Reason:
{'smithi156.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}, 'smithi136.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}}
fail | 7119855 | 2022-12-15 16:51:08 | 2022-12-15 17:34:23 | 2022-12-15 17:39:48 | 0:05:25 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: Stale jobs detected, aborting.
fail | 7119856 | 2022-12-15 16:51:20 | 2022-12-15 17:34:44 | 2022-12-15 17:40:21 | 0:05:37 | smithi | main | rhel | 8.4 | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 2 | |||
Failure Reason: Stale jobs detected, aborting.
fail | 7119857 | 2022-12-15 16:51:36 | 2022-12-15 17:34:44 | 2022-12-15 17:48:38 | 0:13:54 | 0:05:26 | 0:08:28 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi006 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
fail | 7119858 | 2022-12-15 16:51:52 | 2022-12-15 17:34:50 | 2022-12-15 17:52:23 | 0:17:33 | 0:04:44 | 0:12:49 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_18.04} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason: Command failed on smithi103 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
pass | 7119859 | 2022-12-15 16:52:03 | 2022-12-15 17:35:25 | 2022-12-15 17:58:27 | 0:23:02 | 0:14:45 | 0:08:17 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/mgr} | 1 | |
pass | 7119860 | 2022-12-15 16:52:10 | 2022-12-15 17:35:55 | 2022-12-15 18:19:14 | 0:43:19 | 0:34:27 | 0:08:52 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
pass | 7119861 | 2022-12-15 16:52:16 | 2022-12-15 17:36:15 | 2022-12-15 18:00:13 | 0:23:58 | 0:11:54 | 0:12:04 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-low-osd-mem-target supported-random-distro$/{ubuntu_latest} tasks/insights} | 2 | |
pass | 7119862 | 2022-12-15 16:52:19 | 2022-12-15 17:36:46 | 2022-12-15 18:06:51 | 0:30:05 | 0:20:50 | 0:09:15 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/sync-many workloads/pool-create-delete} | 2 | |
pass | 7119863 | 2022-12-15 16:52:23 | 2022-12-15 17:36:46 | 2022-12-15 18:06:12 | 0:29:26 | 0:15:59 | 0:13:27 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 7119864 | 2022-12-15 16:52:39 | 2022-12-15 17:37:43 | 2022-12-15 17:59:31 | 0:21:48 | 0:06:40 | 0:15:08 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason: Command failed on smithi089 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
fail | 7119865 | 2022-12-15 16:52:45 | 2022-12-15 17:37:43 | 2022-12-15 17:57:57 | 0:20:14 | 0:05:10 | 0:15:04 | smithi | main | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: Command failed on smithi088 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
dead | 7119866 | 2022-12-15 16:52:49 | 2022-12-15 17:38:33 | 2022-12-15 17:43:06 | 0:04:33 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |||
Failure Reason: Error reimaging machines: Failed to power on smithi164
dead | 7119867 | 2022-12-15 16:52:50 | 2022-12-15 17:38:49 | 2022-12-15 17:42:01 | 0:03:12 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |||
Failure Reason: Error reimaging machines: Failed to power on smithi134
dead | 7119868 | 2022-12-15 16:52:51 | 2022-12-15 17:39:48 | 2022-12-15 17:43:42 | 0:03:54 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/cache-snaps} | 3 | |||
Failure Reason: Error reimaging machines: SSH connection to smithi164 was lost: "sudo sed -i -e 's/smithi164/smithi164/g' /etc/hosts"
fail | 7119869 | 2022-12-15 16:52:52 | 2022-12-15 17:39:48 | 2022-12-15 17:47:44 | 0:07:56 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} | 2 | |||
Failure Reason: Stale jobs detected, aborting.
dead | 7119870 | 2022-12-15 16:52:59 | 2022-12-15 17:39:49 | 2022-12-15 17:45:39 | 0:05:50 | smithi | main | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |||
Failure Reason: SSH connection to smithi061 was lost: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y install linux-image-generic'
fail | 7119871 | 2022-12-15 16:53:15 | 2022-12-15 17:40:25 | 2022-12-15 17:47:38 | 0:07:13 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: Stale jobs detected, aborting.
fail | 7119872 | 2022-12-15 16:53:19 | 2022-12-15 17:41:00 | 2022-12-15 17:55:43 | 0:14:43 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} | 3 | |||
Failure Reason: Stale jobs detected, aborting.
fail | 7119873 | 2022-12-15 16:53:23 | 2022-12-15 17:41:47 | 2022-12-15 17:49:09 | 0:07:22 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 | |||
Failure Reason: Stale jobs detected, aborting.
fail | 7119874 | 2022-12-15 16:53:29 | 2022-12-15 17:42:14 | 2022-12-15 18:06:01 | 0:23:47 | 0:11:04 | 0:12:43 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi136 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
dead | 7119875 | 2022-12-15 16:53:45 | 2022-12-15 17:43:52 | 2022-12-15 18:00:39 | 0:16:47 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read} | 2 | |||
dead | 7119876 | 2022-12-15 16:53:50 | 2022-12-15 17:44:32 | 2022-12-15 17:57:29 | 0:12:57 | smithi | main | rhel | 8.4 | rados/singleton/{all/divergent_priors2 mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 1 | |||
fail | 7119877 | 2022-12-15 16:54:01 | 2022-12-15 17:44:37 | 2022-12-15 17:58:14 | 0:13:37 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/rbd_cls} | 3 | |||
Failure Reason: timed out
dead | 7119878 | 2022-12-15 16:54:07 | 2022-12-15 17:44:38 | 2022-12-15 18:02:37 | 0:17:59 | smithi | main | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_18.04} | 4 | |||
pass | 7119879 | 2022-12-15 16:54:14 | 2022-12-15 17:46:20 | 2022-12-15 18:24:57 | 0:38:37 | 0:28:38 | 0:09:59 | smithi | main | rhel | 8.4 | rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{rhel_8} workloads/ceph_iscsi} | 3 | |
fail | 7119880 | 2022-12-15 16:54:25 | 2022-12-15 17:47:12 | 2022-12-15 18:06:34 | 0:19:22 | 0:07:48 | 0:11:34 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi134 with status 2: 'git archive --remote=https://git.ceph.com/ceph-ci.git 25247086556727088a4e5f94004449b27369ea05 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm'
dead | 7119881 | 2022-12-15 16:54:28 | 2022-12-15 17:47:12 | 2022-12-15 19:42:14 | 1:55:02 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |||
Failure Reason: Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 7119882 | 2022-12-15 16:54:34 | 2022-12-15 17:48:27 | 2022-12-15 19:42:55 | 1:54:28 | smithi | main | ubuntu | 18.04 | rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |||
Failure Reason: Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
pass | 7119883 | 2022-12-15 16:54:46 | 2022-12-15 17:48:47 | 2022-12-15 18:31:17 | 0:42:30 | 0:32:23 | 0:10:07 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
dead | 7119884 | 2022-12-15 16:55:02 | 2022-12-15 17:48:48 | 2022-12-15 17:54:41 | 0:05:53 | smithi | main | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/mon} | 1 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/task/cancel (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f810768b100>: Failed to establish a new connection: [Errno 113] No route to host'))
fail | 7119885 | 2022-12-15 16:55:07 | 2022-12-15 17:49:13 | 2022-12-15 18:22:29 | 0:33:16 | 0:11:38 | 0:21:38 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi044 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=25247086556727088a4e5f94004449b27369ea05 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
dead | 7119886 | 2022-12-15 16:55:13 | 2022-12-15 17:49:43 | 2022-12-15 18:11:36 | 0:21:53 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/redirect_promote_tests} | 2 | |||
Failure Reason: Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 7119887 | 2022-12-15 16:55:14 | 2022-12-15 17:50:17 | 2022-12-15 18:11:23 | 0:21:06 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/many workloads/rados_mon_osdmap_prune} | 2 | |||
Failure Reason: Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 7119888 | 2022-12-15 16:55:15 | 2022-12-15 17:50:43 | 2022-12-15 18:23:59 | 0:33:16 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |||
Failure Reason: Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 7119889 | 2022-12-15 16:55:21 | 2022-12-15 17:51:04 | 2022-12-15 17:54:30 | 0:03:26 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} | 2 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/task/cancel (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2acf10b4f0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119890 | 2022-12-15 16:55:32 | 2022-12-15 17:51:46 | 2022-12-15 18:06:18 | 0:14:32 | 0:04:05 | 0:10:27 | smithi | main | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason: {'smithi080.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}, 'smithi047.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}}
dead | 7119891 | 2022-12-15 16:55:44 | 2022-12-15 17:51:46 | 2022-12-15 17:54:26 | 0:02:40 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fcb3686e6d0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119892 | 2022-12-15 16:55:54 | 2022-12-15 17:53:38 | 2022-12-15 17:55:09 | 0:01:31 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} | 1 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f78b2e61d90>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119893 | 2022-12-15 16:56:00 | 2022-12-15 17:54:45 | 2022-12-15 17:55:09 | 0:00:24 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/redirect_set_object} | 2 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1317a15a90>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119894 | 2022-12-15 16:56:16 | 2022-12-15 17:54:46 | 2022-12-15 17:55:09 | 0:00:23 | smithi | main | centos | 8.stream | rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f6e5a730fa0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119895 | 2022-12-15 16:56:27 | 2022-12-15 17:54:51 | 2022-12-15 17:55:52 | 0:01:01 | smithi | main | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f93bfb59ac0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119896 | 2022-12-15 16:56:44 | 2022-12-15 17:55:01 | 2022-12-15 17:56:00 | 0:00:59 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/cache-snaps} | 3 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa87f55e640>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119897 | 2022-12-15 16:56:50 | 2022-12-15 17:55:42 | 2022-12-15 17:56:36 | 0:00:54 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe159e0d8b0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119898 | 2022-12-15 16:57:00 | 2022-12-15 17:56:36 | 2022-12-15 17:57:22 | 0:00:46 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fbb050b2b20>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119899 | 2022-12-15 16:57:04 | 2022-12-15 17:56:56 | 2022-12-15 17:57:43 | 0:00:47 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f67a62b0910>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119900 | 2022-12-15 16:57:05 | 2022-12-15 17:57:14 | 2022-12-15 17:57:43 | 0:00:29 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f63577b2e80>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119901 | 2022-12-15 16:57:07 | 2022-12-15 17:57:30 | 2022-12-15 17:57:43 | 0:00:13 | smithi | main | rhel | 8.4 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/one workloads/rados_mon_workunits} | 2 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f74b416aca0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119902 | 2022-12-15 16:57:08 | 2022-12-15 17:57:37 | 2022-12-15 17:57:43 | 0:00:06 | smithi | main | centos | 8.stream | rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa32fb26be0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119903 | 2022-12-15 16:57:24 | 2022-12-15 17:57:43 | 2022-12-15 17:58:07 | 0:00:24 | smithi | main | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa4846f5a30>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119904 | 2022-12-15 16:57:30 | 2022-12-15 17:57:43 | 2022-12-15 17:58:27 | 0:00:44 | smithi | main | rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} | 2 | |||||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2de0e19be0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119905 | 2022-12-15 16:57:41 | 2022-12-15 17:57:59 | 2022-12-15 17:58:40 | 0:00:41 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4419a479d0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119906 | 2022-12-15 16:57:52 | 2022-12-15 17:57:59 | 2022-12-15 17:59:16 | 0:01:17 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f6ba41ff8e0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119907 | 2022-12-15 16:57:54 | 2022-12-15 17:58:32 | 2022-12-15 17:59:46 | 0:01:14 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} | 2 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f65bea06a60>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119908 | 2022-12-15 16:58:10 | 2022-12-15 17:59:08 | 2022-12-15 17:59:47 | 0:00:39 | smithi | main | rhel | 8.4 | rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{rhel_8}} | 1 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc878fcafa0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119909 | 2022-12-15 16:58:21 | 2022-12-15 17:59:26 | 2022-12-15 18:00:04 | 0:00:38 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} | 1 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2fb5d98fd0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119910 | 2022-12-15 16:58:37 | 2022-12-15 17:59:46 | 2022-12-15 18:00:26 | 0:00:40 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fea7a6e0cd0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119911 | 2022-12-15 16:58:47 | 2022-12-15 17:59:47 | 2022-12-15 18:00:04 | 0:00:17 | smithi | main | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1d264c89a0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119912 | 2022-12-15 16:58:58 | 2022-12-15 17:59:47 | 2022-12-15 18:00:44 | 0:00:57 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f04f6a8baf0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119913 | 2022-12-15 16:59:14 | 2022-12-15 17:59:47 | 2022-12-15 18:00:27 | 0:00:40 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9a20170d30>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119914 | 2022-12-15 16:59:25 | 2022-12-15 17:59:48 | 2022-12-15 18:00:27 | 0:00:39 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/scrub} | 1 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7faef69bbee0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7119915 | 2022-12-15 16:59:35 | 2022-12-15 17:59:53 | 2022-12-15 18:00:55 | 0:01:02 | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} | 3 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fee16871c10>: Failed to establish a new connection: [Errno 113] No route to host'))