User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-03-17 15:54:37 | 2023-03-21 21:57:40 | 2023-03-21 23:35:38 | 1:37:58 | rados | quincy-release | smithi | 714f8ff | 5 | 10 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7211640 | 2023-03-17 15:55:54 | 2023-03-21 21:55:59 | 2023-03-21 22:19:57 | 0:23:58 | 0:09:05 | 0:14:53 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7211641 | 2023-03-17 15:55:54 | 2023-03-21 22:13:32 | | | 363 | | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.7.2} | 1 |
Failure Reason:
Command failed on smithi117 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull' |
fail | 7211642 | 2023-03-17 15:55:55 | 2023-03-21 21:57:40 | 2023-03-21 22:35:20 | 0:37:40 | 0:28:45 | 0:08:55 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_nfs} | 1 | |
Failure Reason:
Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) |
fail | 7211643 | 2023-03-17 15:55:56 | 2023-03-21 21:58:00 | 2023-03-21 22:17:27 | 0:19:27 | 0:12:38 | 0:06:49 | smithi | main | rhel | 8.4 | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi159 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=714f8ff94ab1a8a5b10ea54247535614e53b7234 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh' |
pass | 7211644 | 2023-03-17 15:55:57 | 2023-03-21 21:58:01 | 2023-03-21 22:26:39 | 0:28:38 | 0:15:28 | 0:13:10 | smithi | main | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
fail | 7211645 | 2023-03-17 15:55:57 | 2023-03-21 22:05:12 | 2023-03-21 22:34:55 | 0:29:43 | 0:18:01 | 0:11:42 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi148 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=714f8ff94ab1a8a5b10ea54247535614e53b7234 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' |
fail | 7211646 | 2023-03-17 15:55:58 | 2023-03-21 22:07:53 | 2023-03-21 22:29:30 | 0:21:37 | 0:06:13 | 0:15:24 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/flannel rook/master} | 3 | |
Failure Reason:
Command failed on smithi018 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull' |
pass | 7211647 | 2023-03-17 15:55:59 | 2023-03-21 22:13:34 | 2023-03-21 22:45:18 | 0:31:44 | 0:25:07 | 0:06:37 | smithi | main | rhel | 8.4 | rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} | 2 | |
fail | 7211648 | 2023-03-17 15:56:00 | 2023-03-21 22:13:55 | 2023-03-21 22:41:40 | 0:27:45 | 0:17:06 | 0:10:39 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
SELinux denials found on ubuntu@smithi165.front.sepia.ceph.com: ['type=AVC msg=audit(1679438322.406:19604): avc: denied { ioctl } for pid=125375 comm="iptables" path="/var/lib/containers/storage/overlay/0a6326f716a65e5219dca766eeb18949f114290e7c945b96bdc781743c8c7daf/merged" dev="overlay" ino=3934140 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1', 'type=AVC msg=audit(1679438322.452:19605): avc: denied { ioctl } for pid=125378 comm="iptables" path="/var/lib/containers/storage/overlay/0a6326f716a65e5219dca766eeb18949f114290e7c945b96bdc781743c8c7daf/merged" dev="overlay" ino=3934140 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1'] |
pass | 7211649 | 2023-03-17 15:56:00 | 2023-03-21 22:14:25 | 2023-03-21 23:35:38 | 1:21:13 | 1:08:25 | 0:12:48 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/misc} | 1 | |
pass | 7211650 | 2023-03-17 15:56:01 | 2023-03-21 22:17:37 | 2023-03-21 22:53:15 | 0:35:38 | 0:27:45 | 0:07:53 | smithi | main | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mostlyread} | 2 | |
fail | 7211651 | 2023-03-17 15:56:02 | 2023-03-21 22:20:17 | 2023-03-21 22:40:16 | 0:19:59 | 0:06:11 | 0:13:48 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/1.7.2} | 1 | |
Failure Reason:
Command failed on smithi049 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull' |
fail | 7211652 | 2023-03-17 15:56:03 | 2023-03-21 22:55:44 | | | 1494 | | smithi | main | rhel | 8.4 | rados/cephadm/workunits/{0-distro/rhel_8.4_container_tools_rhel8 agent/off mon_election/classic task/test_nfs} | 1 |
Failure Reason:
Test failure: test_non_existent_cluster (tasks.cephfs.test_nfs.TestNFS) |
fail | 7211653 | 2023-03-17 15:56:03 | 2023-03-21 22:24:39 | 2023-03-21 22:52:20 | 0:27:41 | 0:18:10 | 0:09:31 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi046 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=714f8ff94ab1a8a5b10ea54247535614e53b7234 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' |
fail | 7211654 | 2023-03-17 15:56:04 | 2023-03-21 22:24:39 | 2023-03-21 22:42:44 | 0:18:05 | 0:06:28 | 0:11:37 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason:
Command failed on smithi035 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull' |
dead | 7211655 | | 2023-03-21 22:24:40 | 2023-03-21 22:24:40 | | | | smithi | main | — | | | |
Failure Reason:
'73b89aac0a5bd1ebc44f009c17952fa6438cc002' not found in repo: https://git.ceph.com/teuthology.git! |