Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
dead 6631787 2022-01-21 10:01:50 2022-01-24 02:54:13 2022-01-24 09:33:36 6:39:23 smithi master centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason: hit max job timeout

dead 6631788 2022-01-21 10:01:51 2022-01-24 02:54:43 2022-01-24 09:33:43 6:39:00 smithi master centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason: hit max job timeout

fail 6631789 2022-01-21 10:01:52 2022-01-24 02:55:13 2022-01-24 03:23:07 0:27:54 0:17:32 0:10:22 smithi master ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/erasure-code} 1
Failure Reason: Command failed (workunit test erasure-code/test-erasure-code.sh) on smithi204 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dc1128d5e692cbbee2fab855567dc913b704a736 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-code.sh'

dead 6631790 2022-01-21 10:01:53 2022-01-24 02:55:24 2022-01-24 09:35:41 6:40:17 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason: hit max job timeout

pass 6631791 2022-01-21 10:01:54 2022-01-24 02:55:34 2022-01-24 03:52:32 0:56:58 0:47:38 0:09:20 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
pass 6631792 2022-01-21 10:01:55 2022-01-24 02:55:44 2022-01-24 03:22:48 0:27:04 0:20:08 0:06:56 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
dead 6631793 2022-01-21 10:01:56 2022-01-24 02:55:45 2022-01-24 09:34:33 6:38:48 smithi master centos 8.2 rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason: hit max job timeout

dead 6631794 2022-01-21 10:01:57 2022-01-24 02:56:15 2022-01-24 09:35:48 6:39:33 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason: hit max job timeout

pass 6631795 2022-01-21 10:01:58 2022-01-24 02:56:56 2022-01-24 03:42:24 0:45:28 0:37:18 0:08:10 smithi master rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
dead 6631796 2022-01-21 10:01:59 2022-01-24 02:58:16 2022-01-24 09:38:19 6:40:03 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason: hit max job timeout

pass 6631797 2022-01-21 10:02:00 2022-01-24 02:58:36 2022-01-24 04:00:14 1:01:38 0:52:02 0:09:36 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/connectivity msgr-failures/few msgr/async objectstore/filestore-xfs rados tasks/rados_api_tests validater/valgrind} 2
fail 6631798 2022-01-21 10:02:01 2022-01-24 02:58:47 2022-01-24 03:24:10 0:25:23 0:15:45 0:09:38 smithi master centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-low-osd-mem-target} tasks/e2e} 2
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi003 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dc1128d5e692cbbee2fab855567dc913b704a736 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 6631799 2022-01-21 10:02:02 2022-01-24 02:59:28 2022-01-24 03:37:12 0:37:44 0:28:02 0:09:42 smithi master rados/cephadm/workunits/{agent/on mon_election/connectivity task/test_nfs} 1
pass 6631800 2022-01-21 10:02:03 2022-01-24 02:59:28 2022-01-24 05:34:16 2:34:48 2:28:13 0:06:35 smithi master rhel 8.4 rados/standalone/{supported-random-distro$/{rhel_8} workloads/scrub} 1
dead 6631801 2022-01-21 10:02:04 2022-01-24 02:59:28 2022-01-24 09:38:25 6:38:57 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason: hit max job timeout