Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7134303 2023-01-23 21:38:24 2023-01-24 01:07:06 2023-01-24 01:27:11 0:20:05 0:10:01 0:10:04 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} 1
pass 7134304 2023-01-23 21:38:25 2023-01-24 01:07:06 2023-01-24 01:35:21 0:28:15 0:17:28 0:10:47 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 7134305 2023-01-23 21:38:26 2023-01-24 01:07:07 2023-01-24 01:41:34 0:34:27 0:23:30 0:10:57 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
fail 7134306 2023-01-23 21:38:28 2023-01-24 01:07:07 2023-01-24 01:25:55 0:18:48 0:05:30 0:13:18 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi023 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7134307 2023-01-23 21:38:29 2023-01-24 01:07:07 2023-01-24 01:32:51 0:25:44 0:19:04 0:06:40 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_cephadm} 1
pass 7134308 2023-01-23 21:38:30 2023-01-24 01:07:08 2023-01-24 01:26:42 0:19:34 0:08:16 0:11:18 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} 1
pass 7134309 2023-01-23 21:38:31 2023-01-24 01:07:58 2023-01-24 01:23:58 0:16:00 0:06:00 0:10:00 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm_repos} 1
pass 7134310 2023-01-23 21:38:32 2023-01-24 01:07:58 2023-01-24 01:25:52 0:17:54 0:08:12 0:09:42 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} 1
pass 7134311 2023-01-23 21:38:33 2023-01-24 01:07:58 2023-01-24 01:48:28 0:40:30 0:31:18 0:09:12 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_nfs} 1
pass 7134312 2023-01-23 21:38:34 2023-01-24 01:07:59 2023-01-24 01:32:21 0:24:22 0:11:38 0:12:44 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7134313 2023-01-23 21:38:35 2023-01-24 01:10:19 2023-01-24 01:52:52 0:42:33 0:31:32 0:11:01 smithi main ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
pass 7134314 2023-01-23 21:38:37 2023-01-24 01:10:50 2023-01-24 01:33:06 0:22:16 0:14:48 0:07:28 smithi main rhel 8.6 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7134315 2023-01-23 21:38:38 2023-01-24 01:12:10 2023-01-24 01:31:24 0:19:14 0:08:18 0:10:56 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} 1
pass 7134316 2023-01-23 21:38:39 2023-01-24 01:13:31 2023-01-24 01:39:04 0:25:33 0:16:52 0:08:41 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
fail 7134317 2023-01-23 21:38:40 2023-01-24 01:16:01 2023-01-24 01:45:38 0:29:37 0:19:20 0:10:17 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6985fac6e5b350c1d421063ad9cac9068a3467d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7134318 2023-01-23 21:38:41 2023-01-24 01:16:31 2023-01-24 01:33:53 0:17:22 0:06:08 0:11:14 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

Command failed on smithi008 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7134319 2023-01-23 21:38:42 2023-01-24 01:17:42 2023-01-24 03:11:15 1:53:33 1:41:41 0:11:52 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-radosbench} 2
pass 7134320 2023-01-23 21:38:44 2023-01-24 01:19:02 2023-01-24 01:39:18 0:20:16 0:14:57 0:05:19 smithi main rhel 8.6 rados/cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 1-start 2-services/rgw 3-final} 1
pass 7134321 2023-01-23 21:38:45 2023-01-24 01:19:03 2023-01-24 01:42:07 0:23:04 0:13:09 0:09:55 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 7134322 2023-01-23 21:38:46 2023-01-24 01:19:03 2023-01-24 01:36:31 0:17:28 0:08:22 0:09:06 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} 1
pass 7134323 2023-01-23 21:38:47 2023-01-24 01:19:03 2023-01-24 01:44:33 0:25:30 0:16:03 0:09:27 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7134324 2023-01-23 21:38:48 2023-01-24 01:21:24 2023-01-24 01:49:14 0:27:50 0:17:37 0:10:13 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 7134325 2023-01-23 21:38:49 2023-01-24 01:21:24 2023-01-24 01:55:32 0:34:08 0:23:43 0:10:25 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 7134326 2023-01-23 21:38:51 2023-01-24 01:21:34 2023-01-24 01:54:59 0:33:25 0:23:37 0:09:48 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} 1
pass 7134327 2023-01-23 21:38:52 2023-01-24 01:21:35 2023-01-24 01:49:46 0:28:11 0:18:06 0:10:05 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 7134328 2023-01-23 21:38:53 2023-01-24 01:21:45 2023-01-24 01:51:06 0:29:21 0:17:50 0:11:31 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 7134329 2023-01-23 21:38:54 2023-01-24 01:23:26 2023-01-24 02:42:11 1:18:45 1:07:39 0:11:06 smithi main ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/misc} 1
fail 7134330 2023-01-23 21:38:55 2023-01-24 01:24:06 2023-01-24 01:40:15 0:16:09 0:05:24 0:10:45 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi174 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7134331 2023-01-23 21:38:56 2023-01-24 01:24:26 2023-01-24 01:40:11 0:15:45 0:06:30 0:09:15 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} 1
Failure Reason:

Command failed on smithi175 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

pass 7134332 2023-01-23 21:38:57 2023-01-24 01:25:06 2023-01-24 02:03:23 0:38:17 0:31:33 0:06:44 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_nfs} 1
pass 7134333 2023-01-23 21:38:58 2023-01-24 01:25:07 2023-01-24 01:47:43 0:22:36 0:16:26 0:06:10 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 7134334 2023-01-23 21:39:00 2023-01-24 01:25:17 2023-01-24 01:49:21 0:24:04 0:16:52 0:07:12 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7134335 2023-01-23 21:39:01 2023-01-24 01:26:07 2023-01-24 04:06:58 2:40:51 2:18:10 0:22:41 smithi main ubuntu 20.04 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{ubuntu_latest}} 1
pass 7134336 2023-01-23 21:39:02 2023-01-24 01:26:48 2023-01-24 01:58:14 0:31:26 0:25:08 0:06:18 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/small-objects-localized} 2
fail 7134337 2023-01-23 21:39:03 2023-01-24 01:27:08 2023-01-24 01:57:43 0:30:35 0:18:54 0:11:41 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi115 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6985fac6e5b350c1d421063ad9cac9068a3467d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7134338 2023-01-23 21:39:04 2023-01-24 01:28:29 2023-01-24 01:44:34 0:16:05 0:06:06 0:09:59 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi089 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7134339 2023-01-23 21:39:05 2023-01-24 01:28:29 2023-01-24 01:52:49 0:24:20 0:11:48 0:12:32 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7134340 2023-01-23 21:39:06 2023-01-24 01:30:40 2023-01-24 01:54:59 0:24:19 0:14:22 0:09:57 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/write_fadvise_dontneed} 2
dead 7134341 2023-01-24 01:30:41 2023-01-24 01:30:41 smithi main
Failure Reason:

'8e1bba12e584b7ae912e92e94afe9c075ab51fa3' not found in repo: https://git.ceph.com/teuthology.git!