Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 6781408 2022-04-07 18:47:15 2022-04-07 18:48:51 2022-04-07 19:11:55 0:23:04 0:15:47 0:07:17 smithi master rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
fail 6781409 2022-04-07 18:47:16 2022-04-07 18:48:51 2022-04-07 19:00:25 0:11:34 0:05:02 0:06:32 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi135.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

fail 6781410 2022-04-07 18:47:17 2022-04-07 18:48:51 2022-04-07 19:06:16 0:17:25 0:09:58 0:07:27 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

Command failed on smithi186 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7b5fc948beccc43b46ebd2c97f9ec1a5bfc4854f shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24787e5e-b6a5-11ec-8c36-001a4aab830c -- ceph mon dump -f json'

dead 6781411 2022-04-07 18:47:18 2022-04-07 18:48:52 2022-04-08 01:28:42 6:39:50 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

pass 6781412 2022-04-07 18:47:19 2022-04-07 18:48:52 2022-04-07 19:03:58 0:15:06 0:09:17 0:05:49 smithi master centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} 1
pass 6781413 2022-04-07 18:47:20 2022-04-07 18:48:52 2022-04-07 19:09:53 0:21:01 0:13:34 0:07:27 smithi master centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 6781414 2022-04-07 18:47:21 2022-04-07 18:48:53 2022-04-07 19:00:33 0:11:40 0:05:24 0:06:16 smithi master centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} 1
pass 6781415 2022-04-07 18:47:22 2022-04-07 18:49:43 2022-04-07 19:26:36 0:36:53 0:30:22 0:06:31 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6781416 2022-04-07 18:47:24 2022-04-07 18:49:43 2022-04-07 19:09:24 0:19:41 0:13:09 0:06:32 smithi master ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi085 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7b5fc948beccc43b46ebd2c97f9ec1a5bfc4854f shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b6251ff2-b6a4-11ec-8c36-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
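
For readability, the bash -c payload in the failing command above unescapes to roughly the following (a best-effort de-escaping of the quoted one-liner, with comments added; the command as actually run is the escaped form above):

    set -e
    set -x
    ceph orch ps
    ceph orch device ls
    # look up the device id, host, and device path backing osd.1
    DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
    HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
    DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
    echo "host $HOST, dev $DEV, devid $DEVID"
    # remove osd.1 and wait for the removal to finish draining
    ceph orch osd rm 1
    while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
    # zap the freed device, re-add it as an OSD, and wait for it to come back up
    ceph orch device zap $HOST $DEV --force
    ceph orch daemon add osd $HOST:$DEV
    while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done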

pass 6781417 2022-04-07 18:47:25 2022-04-07 18:49:54 2022-04-07 19:09:16 0:19:22 0:13:12 0:06:10 smithi master centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
pass 6781418 2022-04-07 18:47:26 2022-04-07 18:50:24 2022-04-07 19:37:04 0:46:40 0:39:10 0:07:30 smithi master rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 6781419 2022-04-07 18:47:27 2022-04-07 18:51:25 2022-04-07 19:16:25 0:25:00 0:17:42 0:07:18 smithi master rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} 2
dead 6781420 2022-04-07 18:47:28 2022-04-07 18:53:25 2022-04-08 01:33:46 6:40:21 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

pass 6781421 2022-04-07 18:47:29 2022-04-07 18:54:06 2022-04-07 19:33:29 0:39:23 0:32:25 0:06:58 smithi master ubuntu 20.04 rados/singleton/{all/backfill-toofull mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
pass 6781422 2022-04-07 18:47:30 2022-04-07 18:54:06 2022-04-07 19:17:16 0:23:10 0:16:13 0:06:57 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 6781423 2022-04-07 18:47:31 2022-04-07 18:54:06 2022-04-07 19:18:42 0:24:36 0:16:07 0:08:29 smithi master rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
fail 6781424 2022-04-07 18:47:32 2022-04-07 18:55:27 2022-04-07 19:05:29 0:10:02 0:03:07 0:06:55 smithi master ubuntu 20.04 rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
Failure Reason:

Command failed on smithi190 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

dead 6781425 2022-04-07 18:47:33 2022-04-07 18:56:17 2022-04-08 01:38:08 6:41:51 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6781426 2022-04-07 18:47:34 2022-04-07 18:58:18 2022-04-07 19:12:51 0:14:33 0:05:26 0:09:07 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi005.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

pass 6781427 2022-04-07 18:47:35 2022-04-07 18:59:29 2022-04-07 19:14:56 0:15:27 0:08:33 0:06:54 smithi master centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} 1
fail 6781428 2022-04-07 18:47:36 2022-04-07 18:59:59 2022-04-07 19:18:22 0:18:23 0:13:14 0:05:09 smithi master centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi050.front.sepia.ceph.com: ['type=AVC msg=audit(1649358989.382:7592): avc: denied { ioctl } for pid=49637 comm="iptables" path="/var/lib/containers/storage/overlay/b9b1b9de5986f53aa13dde2bf6e5814cf108662ae692e78d0341e642483b8094/merged" dev="overlay" ino=3279185 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

pass 6781429 2022-04-07 18:47:37 2022-04-07 18:59:59 2022-04-07 19:11:21 0:11:22 0:05:26 0:05:56 smithi master centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} 1
fail 6781430 2022-04-07 18:47:38 2022-04-07 19:00:30 2022-04-07 19:21:06 0:20:36 0:11:12 0:09:24 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi184 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3ef0c7ee-b6a7-11ec-8c36-001a4aab830c -- ceph orch daemon add osd smithi184:vg_nvme/lv_4'

dead 6781431 2022-04-07 18:47:40 2022-04-07 19:04:00 2022-04-08 01:43:43 6:39:43 smithi master ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

hit max job timeout

pass 6781432 2022-04-07 18:47:41 2022-04-07 19:05:31 2022-04-07 19:25:14 0:19:43 0:13:23 0:06:20 smithi master centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
pass 6781433 2022-04-07 18:47:42 2022-04-07 19:06:11 2022-04-07 19:29:33 0:23:22 0:16:56 0:06:26 smithi master rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} 2
dead 6781434 2022-04-07 18:47:43 2022-04-07 19:06:22 2022-04-08 01:49:32 6:43:10 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout