User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-06-23 21:35:44 | 2022-06-23 23:22:18 | 2022-06-24 06:12:43 | 6:50:25 | rados | wip-yuri3-testing-2022-06-22-1121-pacific | smithi | 0e94459 | 13 | 18 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6895422 | 2022-06-23 21:37:59 | 2022-06-23 23:22:18 | 2022-06-23 23:45:42 | 0:23:24 | 0:17:06 | 0:06:18 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6895425 | 2022-06-23 21:38:00 | 2022-06-23 23:25:50 | 2022-06-23 23:58:42 | 0:32:52 | 0:26:54 | 0:05:58 | smithi | main | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} | 2 | |
fail | 6895428 | 2022-06-23 21:38:02 | 2022-06-23 23:28:31 | 2022-06-23 23:44:05 | 0:15:34 | 0:04:40 | 0:10:54 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi146.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
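Both rook/master smoke failures in this run (this job and 6895476) die on the same missing manifest. This is consistent with the Rook project having relocated its example manifests (the `cluster/examples/kubernetes/ceph/` tree became `deploy/examples/` around Rook v1.8), which would leave the QA task's hard-coded path pointing at nothing on current `rook/master`.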
fail | 6895431 | 2022-06-23 21:38:03 | 2022-06-23 23:29:13 | 2022-06-23 23:43:21 | 0:14:08 | 0:02:56 | 0:11:12 | smithi | main | ubuntu | 20.04 | rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 | |
Failure Reason: Command failed on smithi165 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
fail | 6895434 | 2022-06-23 21:38:05 | 2022-06-23 23:30:24 | 2022-06-23 23:55:51 | 0:25:27 | 0:18:11 | 0:07:16 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi088 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e94459fbe80a110270e9df67c5aa03e7847ef44 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 6895437 | 2022-06-23 21:38:06 | 2022-06-23 23:33:36 | 2022-06-23 23:59:10 | 0:25:34 | 0:17:38 | 0:07:56 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi061 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3519a400-f34e-11ec-842b-001a4aab830c -e sha1=0e94459fbe80a110270e9df67c5aa03e7847ef44 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
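All four mgr-nfs-upgrade failures in this run (6895437, 6895457, 6895469, 6895487) trip the same post-upgrade assertion. Stripping teuthology's nested shell quoting, the check executed inside the cephadm shell is just:

```bash
# Succeeds only when every daemon reports the same Ceph version after the
# upgrade; `jq -e` exits non-zero when the filter evaluates to false, which
# is what surfaces as "Command failed ... with status 1" above.
ceph versions | jq -e '.overall | length == 1'
```

In other words, the cluster still had daemons on more than one version when the test declared the upgrade finished.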
pass | 6895440 | 2022-06-23 21:38:07 | 2022-06-23 23:34:28 | 2022-06-23 23:53:13 | 0:18:45 | 0:12:38 | 0:06:07 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
fail | 6895443 | 2022-06-23 21:38:09 | 2022-06-23 23:34:29 | 2022-06-24 06:12:43 | 6:38:14 | 6:28:30 | 0:09:44 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi002 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e94459fbe80a110270e9df67c5aa03e7847ef44 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
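Exit status 124 is what GNU `timeout` returns when the wrapped command outlives its limit, so this valgrind run hit the `timeout 6h` cap on rados/test.sh (a hang or slowdown) rather than a failed test assertion.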
pass | 6895445 | 2022-06-23 21:38:11 | 2022-06-23 23:38:00 | 2022-06-24 00:15:35 | 0:37:35 | 0:27:20 | 0:10:15 | smithi | main | ubuntu | 18.04 | rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
pass | 6895447 | 2022-06-23 21:38:12 | 2022-06-23 23:38:21 | 2022-06-24 00:05:27 | 0:27:06 | 0:19:24 | 0:07:42 | smithi | main | rhel | 8.4 | rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6895449 | 2022-06-23 21:38:14 | 2022-06-23 23:39:13 | 2022-06-23 23:58:55 | 0:19:42 | | | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: Cannot connect to remote host smithi099
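This and the other "Cannot connect to remote host" failures in this run (6895465, 6895467, 6895489) are most likely infrastructure problems, i.e. the smithi node became unreachable before or during provisioning, rather than regressions in the branch under test.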
fail | 6895451 | 2022-06-23 21:38:15 | 2022-06-23 23:41:24 | 2022-06-24 00:10:49 | 0:29:25 | 0:19:39 | 0:09:46 | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} | 1 | |
Failure Reason: 'wait for operator' reached maximum tries (90) after waiting for 900 seconds
fail | 6895453 | 2022-06-23 21:38:17 | 2022-06-23 23:41:44 | 2022-06-24 00:05:50 | 0:24:06 | 0:14:22 | 0:09:44 | smithi | main | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: Command failed on smithi104 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:0e94459fbe80a110270e9df67c5aa03e7847ef44 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a9a64098-f34f-11ec-842b-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
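The failing command in 6895453 is hard to read through the nested `'"'"'` quoting layers; decoded, the 2-ops/rm-zap-add workload is the following shell sequence (the status-22 exit came from one of these ceph commands):

```bash
set -e
set -x
ceph orch ps
ceph orch device ls
# Locate the device backing osd.1 and the host it lives on.
DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
echo "host $HOST, dev $DEV, devid $DEVID"
# Remove osd.1, wait for the removal to drain, zap the old device,
# then re-add it and wait for the replacement OSD to come up.
ceph orch osd rm 1
while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
ceph orch device zap $HOST $DEV --force
ceph orch daemon add osd $HOST:$DEV
while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done
```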
fail | 6895455 | 2022-06-23 21:38:18 | 2022-06-23 23:43:05 | 2022-06-24 00:02:19 | 0:19:14 | 0:12:56 | 0:06:18 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi202.front.sepia.ceph.com: ['type=AVC msg=audit(1656028799.640:6412): avc: denied { ioctl } for pid=58537 comm="iptables" path="/var/lib/containers/storage/overlay/fba4f2d1d5fba66852888ad2c7022824ef5f5d9fd60343d22c5994812fc34b5f/merged/etc" dev="overlay" ino=3803089 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1', 'type=AVC msg=audit(1656028799.640:6412): avc: denied { ioctl } for pid=58537 comm="iptables" path="/var/lib/containers/storage/overlay/fba4f2d1d5fba66852888ad2c7022824ef5f5d9fd60343d22c5994812fc34b5f/merged" dev="overlay" ino=3803093 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
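Both test_cephadm SELinux failures (this job and 6895483 below) carry `permissive=1` in the AVC records, meaning SELinux only logged the `iptables` ioctl on the container overlay directory and did not actually block it; the jobs fail because teuthology treats any unexpected AVC denial found on a test node as a failure.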
fail | 6895457 | 2022-06-23 21:38:20 | 2022-06-23 23:43:16 | 2022-06-24 00:08:28 | 0:25:12 | 0:18:51 | 0:06:21 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi130 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 80ccfc66-f34f-11ec-842b-001a4aab830c -e sha1=0e94459fbe80a110270e9df67c5aa03e7847ef44 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 6895459 | 2022-06-23 21:38:22 | 2022-06-23 23:43:47 | 2022-06-24 00:13:26 | 0:29:39 | 0:17:57 | 0:11:42 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
pass | 6895461 | 2022-06-23 21:38:23 | 2022-06-23 23:45:38 | 2022-06-24 00:28:31 | 0:42:53 | 0:37:17 | 0:05:36 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6895463 | 2022-06-23 21:38:25 | 2022-06-23 23:45:39 | 2022-06-24 01:04:52 | 1:19:13 | 1:12:59 | 0:06:14 | smithi | main | centos | 8.stream | rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{centos_8} tasks/dashboard} | 2 | |
fail | 6895465 | 2022-06-23 21:38:26 | 2022-06-23 23:45:50 | 2022-06-24 00:03:56 | 0:18:06 | | | smithi | main | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_18.04} | 4 | |
Failure Reason: Cannot connect to remote host smithi190
fail | 6895467 | 2022-06-23 21:38:28 | 2022-06-23 23:46:31 | 2022-06-24 00:04:44 | 0:18:13 | | | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason: Cannot connect to remote host smithi055
fail | 6895469 | 2022-06-23 21:38:29 | 2022-06-23 23:47:42 | 2022-06-24 00:13:23 | 0:25:41 | 0:18:27 | 0:07:14 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi101 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e94459fbe80a110270e9df67c5aa03e7847ef44 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 6895471 | 2022-06-23 21:38:31 | 2022-06-23 23:48:23 | 2022-06-24 00:13:19 | 0:24:56 | 0:17:54 | 0:07:02 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi044 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3201be22-f350-11ec-842b-001a4aab830c -e sha1=0e94459fbe80a110270e9df67c5aa03e7847ef44 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 6895473 | 2022-06-23 21:38:33 | 2022-06-23 23:48:24 | 2022-06-24 00:23:49 | 0:35:25 | 0:26:38 | 0:08:47 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
fail | 6895476 | 2022-06-23 21:38:35 | 2022-06-23 23:51:45 | 2022-06-24 00:08:33 | 0:16:48 | 0:05:34 | 0:11:14 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi001.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
pass | 6895477 | 2022-06-23 21:38:36 | 2022-06-23 23:52:56 | 2022-06-24 00:11:21 | 0:18:25 | 0:13:06 | 0:05:19 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
pass | 6895479 | 2022-06-23 21:38:38 | 2022-06-23 23:53:17 | 2022-06-24 00:23:25 | 0:30:08 | 0:22:01 | 0:08:07 | smithi | main | rhel | 8.4 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-bitmap supported-random-distro$/{rhel_8} tasks/prometheus} | 2 | |
pass | 6895481 | 2022-06-23 21:38:40 | 2022-06-23 23:55:58 | 2022-06-24 00:19:38 | 0:23:40 | 0:15:04 | 0:08:36 | smithi | main | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
fail | 6895483 | 2022-06-23 21:38:41 | 2022-06-23 23:56:09 | 2022-06-24 00:16:59 | 0:20:50 | 0:13:13 | 0:07:37 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi049.front.sepia.ceph.com: ['type=AVC msg=audit(1656029697.090:6420): avc: denied { ioctl } for pid=58622 comm="iptables" path="/var/lib/containers/storage/overlay/4dd880ea12e7076618092d8317824fd0aedb5d5707b694cc022e692b8375b108/merged" dev="overlay" ino=3803129 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
pass | 6895485 | 2022-06-23 21:38:43 | 2022-06-23 23:56:50 | 2022-06-24 00:22:29 | 0:25:39 | 0:19:00 | 0:06:39 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_8}} | 2 | |
fail | 6895487 | 2022-06-23 21:38:45 | 2022-06-23 23:57:11 | 2022-06-24 00:23:50 | 0:26:39 | 0:18:31 | 0:08:08 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi041 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aa30a0f6-f351-11ec-842b-001a4aab830c -e sha1=0e94459fbe80a110270e9df67c5aa03e7847ef44 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 6895489 | 2022-06-23 21:38:48 | 2022-06-23 23:59:02 | 2022-06-24 00:16:39 | 0:17:37 | | | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} | 3 | |
Failure Reason: Cannot connect to remote host smithi046