Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7214575 2023-03-20 21:02:34 2023-03-22 12:43:07 2023-03-22 13:07:03 0:23:56 0:11:48 0:12:08 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/classic random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{ubuntu_latest} tasks/prometheus} 2
pass 7214576 2023-03-20 21:02:35 2023-03-22 12:45:08 2023-03-22 13:42:17 0:57:09 0:44:50 0:12:19 smithi main centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_mon_osdmap_prune} 2
pass 7214577 2023-03-20 21:02:36 2023-03-22 12:47:09 2023-03-22 13:26:39 0:39:30 0:24:55 0:14:35 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_orch_cli_mon} 5
pass 7214578 2023-03-20 21:02:37 2023-03-22 12:50:10 2023-03-22 13:32:11 0:42:01 0:35:06 0:06:55 smithi main rhel 8.6 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_workunits} 2
pass 7214579 2023-03-20 21:02:37 2023-03-22 12:50:40 2023-03-22 14:05:56 1:15:16 1:06:57 0:08:19 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-bitmap} tasks/dashboard} 2
fail 7214580 2023-03-20 21:02:38 2023-03-22 12:50:50 2023-03-22 13:06:51 0:16:01 0:06:10 0:09:51 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi082 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

dead 7214581 2023-03-20 21:02:39 2023-03-22 12:51:01 2023-03-23 01:04:25 12:13:24 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

pass 7214582 2023-03-20 21:02:40 2023-03-22 12:51:01 2023-03-22 14:26:41 1:35:40 1:27:16 0:08:24 smithi main rhel 8.6 rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
pass 7214583 2023-03-20 21:02:40 2023-03-22 12:53:22 2023-03-22 13:19:27 0:26:05 0:15:45 0:10:20 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{centos_8} tasks/failover} 2
pass 7214584 2023-03-20 21:02:41 2023-03-22 13:37:53 2021 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
pass 7214585 2023-03-20 21:02:42 2023-03-22 12:54:13 2023-03-22 13:21:19 0:27:06 0:15:46 0:11:20 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-hybrid} supported-random-distro$/{centos_8} tasks/insights} 2
pass 7214586 2023-03-20 21:02:43 2023-03-22 12:56:13 2023-03-22 13:25:04 0:28:51 0:20:07 0:08:44 smithi main centos 8.stream rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 1
pass 7214587 2023-03-20 21:02:44 2023-03-22 12:56:14 2023-03-22 13:19:43 0:23:29 0:16:12 0:07:17 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} 2
fail 7214588 2023-03-20 21:02:44 2023-03-22 12:56:14 2023-03-22 13:34:12 0:37:58 0:26:29 0:11:29 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi116 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3b570e3a08e3641e3955386646062dfd64e2c411 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7214589 2023-03-20 21:02:45 2023-03-22 12:57:05 2023-03-22 13:14:37 0:17:32 0:06:20 0:11:12 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

Command failed on smithi003 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7214590 2023-03-20 21:02:46 2023-03-22 12:58:25 2023-03-22 13:35:58 0:37:33 0:30:50 0:06:43 smithi main rhel 8.6 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-hybrid} supported-random-distro$/{rhel_8} tasks/module_selftest} 2
pass 7214591 2023-03-20 21:02:47 2023-03-22 12:58:46 2023-03-22 13:33:56 0:35:10 0:27:34 0:07:36 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_orch_cli_mon} 5
pass 7214592 2023-03-20 21:02:47 2023-03-22 12:59:36 2023-03-22 13:32:56 0:33:20 0:21:39 0:11:41 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/rbd_cls} 3
pass 7214593 2023-03-20 21:02:48 2023-03-22 13:01:47 2023-03-22 13:29:07 0:27:20 0:17:33 0:09:47 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} 1
pass 7214594 2023-03-20 21:02:49 2023-03-22 13:01:57 2023-03-22 14:08:06 1:06:09 0:55:00 0:11:09 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/many workloads/rados_mon_osdmap_prune} 2
pass 7214595 2023-03-20 21:02:50 2023-03-22 13:01:58 2023-03-22 14:20:05 1:18:07 1:06:53 0:11:14 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} tasks/dashboard} 2
fail 7214596 2023-03-20 21:02:51 2023-03-22 13:01:58 2023-03-22 13:18:13 0:16:15 0:06:12 0:10:03 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi146 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

dead 7214597 2023-03-20 21:02:51 2023-03-22 13:02:08 2023-03-23 01:14:04 12:11:56 smithi main centos 8.stream rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
Failure Reason:

hit max job timeout

pass 7214598 2023-03-20 21:02:52 2023-03-22 13:02:09 2023-03-22 14:40:09 1:38:00 1:24:36 0:13:24 smithi main centos 8.stream rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
fail 7214599 2023-03-20 21:02:53 2023-03-22 13:06:20 2023-03-22 13:25:50 0:19:30 0:13:00 0:06:30 smithi main rhel 8.6 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi082 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3b570e3a08e3641e3955386646062dfd64e2c411 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

pass 7214600 2023-03-20 21:02:54 2023-03-22 13:07:00 2023-03-22 13:29:15 0:22:15 0:11:37 0:10:38 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{ubuntu_latest} tasks/prometheus} 2
pass 7214601 2023-03-20 21:02:54 2023-03-22 13:07:11 2023-03-22 14:01:46 0:54:35 0:38:35 0:16:00 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} 3
pass 7214602 2023-03-20 21:02:55 2023-03-22 13:33:57 2023-03-22 15:01:50 1:27:53 1:17:12 0:10:41 smithi main ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mon} 1
pass 7214603 2023-03-20 21:02:56 2023-03-22 13:33:57 2023-03-22 14:17:39 0:43:42 0:30:50 0:12:52 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/rados_mon_workunits} 2
fail 7214604 2023-03-20 21:02:57 2023-03-22 13:33:58 2023-03-22 13:52:56 0:18:58 0:13:00 0:05:58 smithi main rhel 8.6 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi142 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3b570e3a08e3641e3955386646062dfd64e2c411 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

pass 7214605 2023-03-20 21:02:58 2023-03-22 13:33:58 2023-03-22 14:10:18 0:36:20 0:25:18 0:11:02 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} 5
fail 7214606 2023-03-20 21:02:58 2023-03-22 13:35:19 2023-03-22 14:13:02 0:37:43 0:27:14 0:10:29 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi053 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3b570e3a08e3641e3955386646062dfd64e2c411 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7214607 2023-03-20 21:02:59 2023-03-22 13:35:19 2023-03-22 13:54:01 0:18:42 0:06:29 0:12:13 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi026 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
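In the rows above, the Runtime column is the wall-clock total of the Duration (time actually running) and In Waiting (time queued on the machine) columns; for example, job 7214575 shows 0:23:56 = 0:11:48 + 0:12:08. A minimal sketch (not part of the original report; the parser name is illustrative) that checks this relationship:

```python
from datetime import timedelta

def parse_hms(s: str) -> timedelta:
    """Parse an H:MM:SS duration string, as shown in the table, into a timedelta."""
    h, m, sec = (int(part) for part in s.split(":"))
    return timedelta(hours=h, minutes=m, seconds=sec)

# Values taken from job 7214575 in the table above.
runtime = parse_hms("0:23:56")
duration = parse_hms("0:11:48")
in_waiting = parse_hms("0:12:08")

# Runtime is the wall-clock total: running time plus queued time.
assert runtime == duration + in_waiting
```

The same identity holds for the other completed rows (e.g. job 7214579: 1:15:16 = 1:06:57 + 0:08:19); dead jobs such as 7214581 report only the elapsed total, since they hit the max job timeout before the per-phase breakdown was recorded.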