Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7124779 2022-12-22 15:20:08 2022-12-22 15:20:49 2022-12-22 15:44:44 0:23:55 0:09:25 0:14:30 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

Command failed on smithi149 with status 1: 'sudo kubeadm init --node-name smithi149 --token abcdef.yu23pfi6eo7clg3f --pod-network-cidr 10.252.160.0/21'

pass 7124780 2022-12-22 15:20:10 2022-12-22 15:20:49 2022-12-22 15:58:32 0:37:43 0:24:26 0:13:17 smithi master ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7124781 2022-12-22 15:20:11 2022-12-22 15:20:50 2022-12-22 15:45:51 0:25:01 0:07:17 0:17:44 smithi master ubuntu 20.04 rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
Failure Reason:

Command failed on smithi067 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

pass 7124782 2022-12-22 15:20:12 2022-12-22 15:20:50 2022-12-22 18:34:08 3:13:18 3:01:17 0:12:01 smithi master ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/classic thrashosds-health ubuntu_18.04} 4
pass 7124783 2022-12-22 15:20:13 2022-12-22 15:20:51 2022-12-22 16:00:23 0:39:32 0:26:30 0:13:02 smithi master centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
pass 7124784 2022-12-22 15:20:14 2022-12-22 15:20:51 2022-12-22 16:02:17 0:41:26 0:33:08 0:08:18 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7124785 2022-12-22 15:20:15 2022-12-22 15:20:51 2022-12-22 16:05:37 0:44:46 0:33:06 0:11:40 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7124786 2022-12-22 15:20:17 2022-12-22 15:20:52 2022-12-22 15:44:37 0:23:45 0:12:37 0:11:08 smithi master centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi099 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=22be1b8fdf3b0a37691ef6abcf04f7c57402534b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7124787 2022-12-22 15:20:18 2022-12-22 15:20:52 2022-12-22 15:57:57 0:37:05 0:24:54 0:12:11 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/small-objects} 2
fail 7124788 2022-12-22 15:20:19 2022-12-22 15:20:52 2022-12-22 15:42:07 0:21:15 0:08:29 0:12:46 smithi master rhel 8.4 rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test post-file.sh) on smithi088 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=22be1b8fdf3b0a37691ef6abcf04f7c57402534b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'

pass 7124789 2022-12-22 15:20:20 2022-12-22 15:20:53 2022-12-22 16:00:54 0:40:01 0:29:15 0:10:46 smithi master rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 7124790 2022-12-22 15:20:22 2022-12-22 15:20:53 2022-12-22 15:39:25 0:18:32 0:07:50 0:10:42 smithi master ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
Failure Reason:

Command failed on smithi201 with status 1: 'sudo kubeadm init --node-name smithi201 --token abcdef.rm4kl72uct6fp5vm --pod-network-cidr 10.254.64.0/21'

dead 7124791 2022-12-22 15:20:23 2022-12-22 15:20:53 2022-12-22 15:36:43 0:15:50 0:05:14 0:10:36 smithi master rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Ansible task 'cpan Amazon::S3' returned a non-zero return code (rc 25) on smithi154.front.sepia.ceph.com and smithi093.front.sepia.ceph.com. The CPAN mirror http://apt-mirror.sepia.ceph.com/CPAN is stale (its index was generated on Fri, 12 Feb 2016: "This index file is 2505 days old"), and the checksum file /home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/CHECKSUMS does not contain the key 'cpan_path' for 'Amazon-S3-0.45.tar.gz', so cpan answered "Proceed nonetheless? [no] no" and aborted the install on both hosts.

pass 7124792 2022-12-22 15:20:24 2022-12-22 15:20:54 2022-12-22 15:44:13 0:23:19 0:14:45 0:08:34 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/cache-agent-small} 2
fail 7124793 2022-12-22 15:20:25 2022-12-22 15:49:50 1035 smithi master ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi142 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:22be1b8fdf3b0a37691ef6abcf04f7c57402534b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aa5167fe-820e-11ed-97ac-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
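For readability, the script embedded in the cephadm shell "bash -c" invocation above is, once the shell quoting is unescaped, roughly the following sketch (reconstructed from the escaped one-liner; comments added here, not part of the original test):

    set -e
    set -x
    ceph orch ps
    ceph orch device ls
    # find the device backing osd.1 and the host it lives on
    DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
    HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
    DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
    echo "host $HOST, dev $DEV, devid $DEVID"
    # remove osd.1 and wait for the removal to complete
    ceph orch osd rm 1
    while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
    # zap the freed device, re-deploy an OSD on it, and wait for it to come up
    ceph orch device zap $HOST $DEV --force
    ceph orch daemon add osd $HOST:$DEV
    while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done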

fail 7124794 2022-12-22 15:20:26 2022-12-22 15:20:54 2022-12-22 15:42:03 0:21:09 0:12:04 0:09:05 smithi master centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi072 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=22be1b8fdf3b0a37691ef6abcf04f7c57402534b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7124795 2022-12-22 15:20:27 2022-12-22 15:20:55 2022-12-22 15:49:30 0:28:35 0:12:32 0:16:03 smithi master ubuntu 20.04 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_5925} 2
pass 7124796 2022-12-22 15:20:28 2022-12-22 15:20:55 2022-12-22 15:44:46 0:23:51 0:10:35 0:13:16 smithi master ubuntu 20.04 rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} 1
pass 7124797 2022-12-22 15:20:30 2022-12-22 15:20:55 2022-12-22 16:00:53 0:39:58 0:26:10 0:13:48 smithi master centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
pass 7124798 2022-12-22 15:20:31 2022-12-22 15:20:56 2022-12-22 16:07:39 0:46:43 0:35:14 0:11:29 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7124799 2022-12-22 15:20:32 2022-12-22 15:20:56 2022-12-22 16:01:23 0:40:27 0:32:27 0:08:00 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7124800 2022-12-22 15:20:33 2022-12-22 15:20:56 2022-12-22 16:03:44 0:42:48 0:31:29 0:11:19 smithi master centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-many-deletes} 2
pass 7124801 2022-12-22 15:20:34 2022-12-22 15:20:57 2022-12-22 15:40:56 0:19:59 0:10:05 0:09:54 smithi master centos 8.stream rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
fail 7124802 2022-12-22 15:20:35 2022-12-22 15:20:57 2022-12-22 15:49:55 0:28:58 0:11:10 0:17:48 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi085 with status 1: 'sudo kubeadm init --node-name smithi085 --token abcdef.w135m5xmka1fayjo --pod-network-cidr 10.250.160.0/21'

fail 7124803 2022-12-22 15:20:37 2022-12-22 15:20:57 2022-12-22 15:45:24 0:24:27 0:12:49 0:11:38 smithi master centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi083 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=22be1b8fdf3b0a37691ef6abcf04f7c57402534b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7124804 2022-12-22 15:20:38 2022-12-22 15:20:58 2022-12-22 15:45:26 0:24:28 0:09:01 0:15:27 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test post-file.sh) on smithi064 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=22be1b8fdf3b0a37691ef6abcf04f7c57402534b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'

fail 7124805 2022-12-22 15:20:39 2022-12-22 15:20:58 2022-12-22 15:42:45 0:21:47 0:09:00 0:12:47 smithi master rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

Command failed on smithi078 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:22be1b8fdf3b0a37691ef6abcf04f7c57402534b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid afa83cd2-820e-11ed-97ac-001a4aab830c -- ceph mon dump -f json'

pass 7124806 2022-12-22 15:20:40 2022-12-22 15:20:58 2022-12-22 15:41:29 0:20:31 0:13:09 0:07:22 smithi master rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
dead 7124807 2022-12-22 15:20:41 2022-12-22 15:20:59 2022-12-22 15:35:43 0:14:44 0:04:44 0:10:00 smithi master rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Ansible task 'cpan Amazon::S3' returned a non-zero return code (rc 25) on smithi156.front.sepia.ceph.com and smithi162.front.sepia.ceph.com. The CPAN mirror http://apt-mirror.sepia.ceph.com/CPAN is stale (its index was generated on Fri, 12 Feb 2016: "This index file is 2505 days old"), and the checksum file /home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/CHECKSUMS does not contain the key 'cpan_path' for 'Amazon-S3-0.45.tar.gz', so cpan answered "Proceed nonetheless? [no] no" and aborted the install on both hosts.

fail 7124808 2022-12-22 15:20:42 2022-12-22 15:20:59 2022-12-22 15:45:38 0:24:39 0:12:52 0:11:47 smithi master centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi136 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=22be1b8fdf3b0a37691ef6abcf04f7c57402534b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7124809 2022-12-22 15:20:43 2022-12-22 15:20:59 2022-12-22 15:44:14 0:23:15 0:10:37 0:12:38 smithi master ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} 3
Failure Reason:

Command failed on smithi006 with status 1: 'sudo kubeadm init --node-name smithi006 --token abcdef.fip15aqckktt2oxk --pod-network-cidr 10.248.40.0/21'