Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7141518 2023-01-13 16:44:56 2023-01-13 16:49:04 2023-01-13 17:04:48 0:15:44 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/radosbench} 3
Failure Reason:

Cannot connect to remote host smithi161

pass 7141519 2023-01-13 16:44:57 2023-01-13 16:49:04 2023-01-13 17:32:46 0:43:42 0:30:18 0:13:24 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/connectivity} 2
pass 7141520 2023-01-13 16:44:59 2023-01-13 16:49:05 2023-01-13 17:52:50 1:03:45 0:27:13 0:36:32 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/rados_api_tests} 2
fail 7141521 2023-01-13 16:45:00 2023-01-13 16:49:05 2023-01-13 16:58:51 0:09:46 smithi main ubuntu 20.04 rados/singleton/{all/peer mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi186 with status 100: 'sudo apt-get clean'

dead 7141522 2023-01-13 16:45:01 2023-01-13 16:49:05 2023-01-13 16:56:42 0:07:37 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Error reimaging machines: 'ssh_keyscan smithi156.front.sepia.ceph.com' reached maximum tries (5) after waiting for 5 seconds

dead 7141523 2023-01-13 16:45:02 2023-01-13 16:49:06 2023-01-13 16:58:55 0:09:49 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Error reimaging machines: 'ssh_keyscan smithi143.front.sepia.ceph.com' reached maximum tries (5) after waiting for 5 seconds

dead 7141524 2023-01-13 16:45:04 2023-01-13 16:49:06 2023-01-13 16:58:50 0:09:44 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/redirect} 2
Failure Reason:

SSH connection to smithi186 was lost: 'sudo apt-get update'

fail 7141525 2023-01-13 16:45:05 2023-01-13 16:49:06 2023-01-13 17:06:43 0:17:37 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Cannot connect to remote host smithi086

dead 7141526 2023-01-13 16:45:06 2023-01-13 16:49:06 2023-01-13 17:03:14 0:14:08 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

SSH connection to smithi145 was lost: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y install linux-image-generic'

pass 7141527 2023-01-13 16:45:07 2023-01-13 16:49:07 2023-01-13 17:24:12 0:35:05 0:24:16 0:10:49 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
dead 7141528 2023-01-13 16:45:09 2023-01-13 16:49:07 2023-01-13 17:00:34 0:11:27 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} 1
Failure Reason:

SSH connection to smithi158 was lost: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y install linux-image-generic'

fail 7141529 2023-01-13 16:45:10 2023-01-13 16:49:07 2023-01-13 16:59:50 0:10:43 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi002 with status 100: 'sudo apt-get clean'

fail 7141530 2023-01-13 16:45:11 2023-01-13 16:49:08 2023-01-13 16:59:50 0:10:42 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi145 with status 100: 'sudo apt-get clean'

pass 7141531 2023-01-13 16:45:13 2023-01-13 16:49:18 2023-01-13 17:24:14 0:34:56 0:20:57 0:13:59 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
pass 7141532 2023-01-13 16:45:14 2023-01-13 16:49:28 2023-01-13 20:10:47 3:21:19 3:07:03 0:14:16 smithi main ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/classic thrashosds-health ubuntu_18.04} 4
pass 7141533 2023-01-13 16:45:15 2023-01-13 16:50:49 2023-01-13 17:24:22 0:33:33 0:24:24 0:09:09 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
pass 7141534 2023-01-13 16:45:16 2023-01-13 16:52:49 2023-01-13 17:15:36 0:22:47 0:09:28 0:13:19 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} tasks/workunits} 2
pass 7141535 2023-01-13 16:45:18 2023-01-13 16:53:40 2023-01-13 17:27:48 0:34:08 0:12:54 0:21:14 smithi main ubuntu 18.04 rados/cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_18.04} 2-node-mgr orchestrator_cli} 2
pass 7141536 2023-01-13 16:45:19 2023-01-13 16:54:10 2023-01-13 17:26:47 0:32:37 0:22:15 0:10:22 smithi main ubuntu 20.04 rados/singleton/{all/radostool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
pass 7141537 2023-01-13 16:45:20 2023-01-13 16:54:10 2023-01-13 17:16:36 0:22:26 0:11:15 0:11:11 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7141538 2023-01-13 16:45:21 2023-01-13 16:54:41 2023-01-13 17:20:40 0:25:59 0:18:17 0:07:42 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} 2
pass 7141539 2023-01-13 16:45:23 2023-01-13 16:55:41 2023-01-13 17:45:27 0:49:46 0:26:24 0:23:22 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
pass 7141540 2023-01-13 16:45:24 2023-01-13 16:55:52 2023-01-13 17:40:37 0:44:45 0:31:51 0:12:54 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
pass 7141541 2023-01-13 16:45:25 2023-01-13 16:56:12 2023-01-13 17:19:41 0:23:29 0:13:30 0:09:59 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
dead 7141542 2023-01-13 16:45:26 2023-01-13 16:56:12 2023-01-13 17:42:10 0:45:58 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Error reimaging machines: reached maximum tries (180) after waiting for 2700 seconds

pass 7141543 2023-01-13 16:45:28 2023-01-13 16:57:03 2023-01-13 17:16:49 0:19:46 0:09:20 0:10:26 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
fail 7141544 2023-01-13 16:45:29 2023-01-13 16:57:03 2023-01-13 17:16:21 0:19:18 0:11:54 0:07:24 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi080 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bcbf88bee4969f40f7fc319ee08e4d88e17faf44 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7141545 2023-01-13 16:45:30 2023-01-13 16:57:13 2023-01-13 17:32:04 0:34:51 0:23:06 0:11:45 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/small-objects} 2
pass 7141546 2023-01-13 16:45:32 2023-01-13 16:57:44 2023-01-13 17:17:48 0:20:04 0:08:50 0:11:14 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} 1
pass 7141547 2023-01-13 16:45:33 2023-01-13 16:57:44 2023-01-13 17:50:05 0:52:21 0:40:06 0:12:15 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/cache-snaps} 3
pass 7141548 2023-01-13 16:45:34 2023-01-13 16:57:54 2023-01-13 17:41:32 0:43:38 0:29:43 0:13:55 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/many workloads/rados_mon_workunits} 2
fail 7141549 2023-01-13 16:45:35 2023-01-13 16:58:25 2023-01-13 17:16:42 0:18:17 0:10:13 0:08:04 smithi main centos 8.stream rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test post-file.sh) on smithi151 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bcbf88bee4969f40f7fc319ee08e4d88e17faf44 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'

pass 7141550 2023-01-13 16:45:37 2023-01-13 16:58:25 2023-01-13 17:37:38 0:39:13 0:26:44 0:12:29 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-balanced} 2
pass 7141551 2023-01-13 16:45:38 2023-01-13 16:58:25 2023-01-13 17:42:02 0:43:37 0:29:03 0:14:34 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7141552 2023-01-13 16:45:39 2023-01-13 16:58:36 2023-01-13 17:18:43 0:20:07 0:07:44 0:12:23 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 7141553 2023-01-13 16:45:40 2023-01-13 16:58:56 2023-01-13 17:20:26 0:21:30 0:10:32 0:10:58 smithi main ubuntu 20.04 rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
pass 7141554 2023-01-13 16:45:42 2023-01-13 16:58:56 2023-01-13 17:36:58 0:38:02 0:26:25 0:11:37 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/snaps-few-objects-localized} 2
dead 7141555 2023-01-13 16:45:43 2023-01-13 16:58:57 2023-01-13 17:14:37 0:15:40 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

SSH connection to smithi161 was lost: 'sudo apt-get update'

pass 7141556 2023-01-13 16:45:44 2023-01-13 16:59:07 2023-01-13 17:23:47 0:24:40 0:11:07 0:13:33 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} 1
pass 7141557 2023-01-13 16:45:45 2023-01-13 16:59:17 2023-01-13 17:19:48 0:20:31 0:08:33 0:11:58 smithi main ubuntu 20.04 rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} 1
pass 7141558 2023-01-13 16:45:47 2023-01-13 16:59:18 2023-01-13 17:37:53 0:38:35 0:24:38 0:13:57 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
dead 7141559 2023-01-13 16:45:48 2023-01-13 16:59:58 2023-01-13 17:45:07 0:45:09 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Error reimaging machines: reached maximum tries (180) after waiting for 2700 seconds

fail 7141560 2023-01-13 16:45:49 2023-01-13 16:59:58 2023-01-13 17:46:21 0:46:23 0:33:07 0:13:16 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls} 3
Failure Reason:

SSH connection to smithi002 was lost: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:bcbf88bee4969f40f7fc319ee08e4d88e17faf44 shell --fsid 71848cee-9366-11ed-821d-001a4aab830c -- ceph tell osd.4 flush_pg_stats'

pass 7141561 2023-01-13 16:45:50 2023-01-13 17:00:19 2023-01-13 17:54:32 0:54:13 0:38:45 0:15:28 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/snaps-few-objects} 2
pass 7141562 2023-01-13 16:45:52 2023-01-13 17:00:59 2023-01-13 18:03:59 1:03:00 0:42:23 0:20:37 smithi main ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
pass 7141563 2023-01-13 16:45:53 2023-01-13 17:01:09 2023-01-13 17:38:07 0:36:58 0:24:20 0:12:38 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
pass 7141564 2023-01-13 16:45:55 2023-01-13 17:01:50 2023-01-13 17:45:37 0:43:47 0:23:58 0:19:49 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} 2
pass 7141565 2023-01-13 16:45:56 2023-01-13 17:02:20 2023-01-13 17:26:19 0:23:59 0:08:11 0:15:48 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} 1
fail 7141566 2023-01-13 16:45:57 2023-01-13 17:02:21 2023-01-13 17:27:11 0:24:50 0:10:09 0:14:41 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
Failure Reason:

Command failed on smithi179 with status 1: 'sudo kubeadm init --node-name smithi179 --token abcdef.2hrdvp5jqoverg8r --pod-network-cidr 10.253.144.0/21'

pass 7141567 2023-01-13 16:45:58 2023-01-13 17:02:31 2023-01-13 18:21:29 1:18:58 0:58:52 0:20:06 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
fail 7141568 2023-01-13 16:46:00 2023-01-13 17:04:42 2023-01-13 17:24:25 0:19:43 0:07:07 0:12:36 smithi main rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=bcbf88bee4969f40f7fc319ee08e4d88e17faf44

fail 7141569 2023-01-13 16:46:01 2023-01-13 17:04:42 2023-01-13 17:15:05 0:10:23 smithi main ubuntu 20.04 rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 2
Failure Reason:

Command failed on smithi161 with status 100: 'sudo apt-get clean'

pass 7141570 2023-01-13 16:46:02 2023-01-13 17:04:52 2023-01-13 17:35:36 0:30:44 0:16:56 0:13:48 smithi main ubuntu 20.04 rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} 3
pass 7141571 2023-01-13 16:46:03 2023-01-13 17:06:53 2023-01-13 17:25:28 0:18:35 0:08:28 0:10:07 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} 1
pass 7141572 2023-01-13 16:46:05 2023-01-13 17:06:53 2023-01-13 17:50:08 0:43:15 0:15:38 0:27:37 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/pool-create-delete} 2
pass 7141573 2023-01-13 16:46:06 2023-01-13 17:07:13 2023-01-13 17:54:36 0:47:23 0:36:04 0:11:19 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
pass 7141574 2023-01-13 16:46:07 2023-01-13 17:08:34 2023-01-13 17:41:28 0:32:54 0:21:15 0:11:39 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-pool-snaps-readproxy} 2
fail 7141575 2023-01-13 16:46:08 2023-01-13 17:09:44 2023-01-13 17:41:55 0:32:11 0:18:18 0:13:53 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi029 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:bcbf88bee4969f40f7fc319ee08e4d88e17faf44 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b87b25e4-9367-11ed-821d-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''

fail 7141576 2023-01-13 16:46:10 2023-01-13 17:09:55 2023-01-13 17:28:53 0:18:58 0:11:55 0:07:03 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi073 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bcbf88bee4969f40f7fc319ee08e4d88e17faf44 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7141577 2023-01-13 16:46:11 2023-01-13 17:09:55 2023-01-13 17:43:30 0:33:35 0:21:04 0:12:31 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-balanced} 2
pass 7141578 2023-01-13 16:46:12 2023-01-13 17:11:36 2023-01-13 17:54:53 0:43:17 0:28:31 0:14:46 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 2
pass 7141579 2023-01-13 16:46:13 2023-01-13 17:12:36 2023-01-13 18:03:10 0:50:34 0:37:02 0:13:32 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/cache-snaps} 3
pass 7141580 2023-01-13 16:46:14 2023-01-13 17:15:37 2023-01-13 18:04:25 0:48:48 0:36:29 0:12:19 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7141581 2023-01-13 16:46:16 2023-01-13 17:16:27 2023-01-13 17:48:51 0:32:24 0:21:38 0:10:46 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_mix} 2
pass 7141582 2023-01-13 16:46:17 2023-01-13 17:16:38 2023-01-13 17:50:32 0:33:54 0:23:49 0:10:05 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} 1
pass 7141583 2023-01-13 16:46:18 2023-01-13 17:16:38 2023-01-13 18:02:45 0:46:07 0:35:30 0:10:37 smithi main ubuntu 20.04 rados/singleton/{all/backfill-toofull mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
pass 7141584 2023-01-13 16:46:19 2023-01-13 17:16:48 2023-01-13 18:01:58 0:45:10 0:27:01 0:18:09 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache-snaps-balanced} 2
pass 7141585 2023-01-13 16:46:21 2023-01-13 17:17:49 2023-01-13 17:47:45 0:29:56 0:18:00 0:11:56 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 7141586 2023-01-13 16:46:22 2023-01-13 17:18:59 2023-01-13 17:39:44 0:20:45 0:08:48 0:11:57 smithi main ubuntu 20.04 rados/singleton/{all/deduptool mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} 1
pass 7141587 2023-01-13 16:46:23 2023-01-13 17:19:50 2023-01-13 17:59:19 0:39:29 0:15:51 0:23:38 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 7141588 2023-01-13 16:46:24 2023-01-13 17:20:30 2023-01-13 17:39:57 0:19:27 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
Failure Reason:

Cannot connect to remote host smithi035

fail 7141589 2023-01-13 16:46:26 2023-01-13 17:22:11 2023-01-13 17:42:14 0:20:03 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Cannot connect to remote host smithi195

pass 7141590 2023-01-13 16:46:27 2023-01-13 17:24:21 2023-01-13 20:08:25 2:44:04 2:34:32 0:09:32 smithi main ubuntu 20.04 rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{ubuntu_latest}} 1
pass 7141591 2023-01-13 16:46:28 2023-01-13 17:24:21 2023-01-13 17:47:35 0:23:14 0:09:55 0:13:19 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} 1
pass 7141592 2023-01-13 16:46:29 2023-01-13 17:24:22 2023-01-13 17:59:18 0:34:56 0:23:40 0:11:16 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
pass 7141593 2023-01-13 16:46:30 2023-01-13 17:24:32 2023-01-13 18:15:06 0:50:34 0:27:14 0:23:20 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7141594 2023-01-13 16:46:32 2023-01-13 17:24:32 2023-01-13 18:05:52 0:41:20 0:24:42 0:16:38 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
pass 7141595 2023-01-13 16:46:33 2023-01-13 17:25:33 2023-01-13 18:55:22 1:29:49 0:18:58 1:10:51 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs 3-final} 2
pass 7141596 2023-01-13 16:46:34 2023-01-13 17:26:53 2023-01-13 20:28:47 3:01:54 2:50:15 0:11:39 smithi main ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_18.04} 4
pass 7141597 2023-01-13 16:46:35 2023-01-13 17:29:04 2023-01-13 18:36:45 1:07:41 0:53:05 0:14:36 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/snaps-few-objects} 3
pass 7141598 2023-01-13 16:46:36 2023-01-13 17:32:55 2023-01-13 18:06:49 0:33:54 0:24:11 0:09:43 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
pass 7141599 2023-01-13 16:46:38 2023-01-13 17:35:46 2023-01-13 18:01:12 0:25:26 0:18:10 0:07:16 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} 2
fail 7141600 2023-01-13 16:46:39 2023-01-13 17:35:46 2023-01-13 18:01:56 0:26:10 0:08:57 0:17:13 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi142 with status 1: 'sudo kubeadm init --node-name smithi142 --token abcdef.b4p6frbpdq1hx14y --pod-network-cidr 10.252.104.0/21'

fail 7141601 2023-01-13 16:46:40 2023-01-13 17:37:47 2023-01-13 17:56:54 0:19:07 0:11:55 0:07:12 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bcbf88bee4969f40f7fc319ee08e4d88e17faf44 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7141602 2023-01-13 16:46:42 2023-01-13 17:37:47 2023-01-13 17:55:54 0:18:07 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Cannot connect to remote host smithi047

fail 7141603 2023-01-13 16:46:43 2023-01-13 17:37:57 2023-01-13 17:57:56 0:19:59 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/radosbench} 3
Failure Reason:

Cannot connect to remote host smithi032

pass 7141604 2023-01-13 16:46:44 2023-01-13 17:39:48 2023-01-13 18:17:50 0:38:02 0:23:46 0:14:16 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
fail 7141605 2023-01-13 16:46:45 2023-01-13 17:40:08 2023-01-13 17:58:39 0:18:31 0:06:59 0:11:32 smithi main rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=bcbf88bee4969f40f7fc319ee08e4d88e17faf44

pass 7141606 2023-01-13 16:46:46 2023-01-13 17:40:38 2023-01-13 18:03:31 0:22:53 0:09:05 0:13:48 smithi main ubuntu 20.04 rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} 3
pass 7141607 2023-01-13 16:46:48 2023-01-13 17:41:29 2023-01-13 18:13:42 0:32:13 0:19:54 0:12:19 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
pass 7141608 2023-01-13 16:46:49 2023-01-13 17:41:39 2023-01-13 18:04:31 0:22:52 0:10:34 0:12:18 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
fail 7141609 2023-01-13 16:46:50 2023-01-13 17:42:10 2023-01-13 18:00:59 0:18:49 0:11:50 0:06:59 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi083 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bcbf88bee4969f40f7fc319ee08e4d88e17faf44 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7141610 2023-01-13 16:46:51 2023-01-13 17:42:10 2023-01-13 18:15:02 0:32:52 0:22:11 0:10:41 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
pass 7141611 2023-01-13 16:46:53 2023-01-13 17:42:20 2023-01-13 18:19:11 0:36:51 0:22:19 0:14:32 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 2
pass 7141612 2023-01-13 16:46:54 2023-01-13 17:42:21 2023-01-13 18:22:29 0:40:08 0:26:11 0:13:57 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7141613 2023-01-13 16:46:55 2023-01-13 17:42:21 2023-01-13 18:03:44 0:21:23 0:08:58 0:12:25 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} 3
Failure Reason:

Command failed on smithi046 with status 1: 'sudo kubeadm init --node-name smithi046 --token abcdef.osvfgt6tbnkee17j --pod-network-cidr 10.249.104.0/21'