User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-04-27 13:45:54 | 2022-04-27 16:18:59 | 2022-04-27 23:25:55 | 7:06:56 | rados | pacific | smithi | 4fa079b | 20 | 13 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6808733 | 2022-04-27 13:47:41 | 2022-04-27 16:18:48 | 2022-04-27 16:56:23 | 0:37:35 | 0:30:15 | 0:07:20 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6808734 | 2022-04-27 13:47:42 | 2022-04-27 16:18:48 | 2022-04-27 17:02:38 | 0:43:50 | 0:37:17 | 0:06:33 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
fail | 6808735 | 2022-04-27 13:47:43 | 2022-04-27 16:18:59 | 2022-04-27 16:31:54 | 0:12:55 | 0:06:13 | 0:06:42 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi022.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
pass | 6808736 | 2022-04-27 13:47:44 | 2022-04-27 16:18:59 | 2022-04-27 16:51:59 | 0:33:00 | 0:19:55 | 0:13:05 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
fail | 6808737 | 2022-04-27 13:47:45 | 2022-04-27 16:22:50 | 2022-04-27 16:46:34 | 0:23:44 | 0:16:09 | 0:07:35 | smithi | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/classic thrashosds-health ubuntu_18.04} | 4 | |
Failure Reason: Command failed (workunit test cls/test_cls_rbd.sh) on smithi191 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=nautilus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'
pass | 6808738 | 2022-04-27 13:47:46 | 2022-04-27 16:23:21 | 2022-04-27 16:59:52 | 0:36:31 | 0:29:22 | 0:07:09 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6808739 | 2022-04-27 13:47:47 | 2022-04-27 16:24:51 | 2022-04-27 16:51:57 | 0:27:06 | 0:20:31 | 0:06:35 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi078 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1f50e47c-c648-11ec-8c39-001a4aab830c -e sha1=4fa079ba14503defa8dc257d7c2d506ebefcfe6d -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 6808740 | 2022-04-27 13:47:48 | 2022-04-27 16:24:52 | 2022-04-27 16:48:04 | 0:23:12 | 0:14:03 | 0:09:09 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} | 2 | |
fail | 6808741 | 2022-04-27 13:47:49 | 2022-04-27 16:26:42 | 2022-04-27 16:47:40 | 0:20:58 | 0:14:40 | 0:06:18 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi146.front.sepia.ceph.com: ['type=AVC msg=audit(1651077910.150:17922): avc: denied { ioctl } for pid=106689 comm="iptables" path="/var/lib/containers/storage/overlay/47d927b158da383251582ed17772f7e4ab279aa3b544b53240386c80d67fe7b8/merged" dev="overlay" ino=3412404 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1', 'type=AVC msg=audit(1651077910.066:17920): avc: denied { ioctl } for pid=106673 comm="iptables" path="/var/lib/containers/storage/overlay/47d927b158da383251582ed17772f7e4ab279aa3b544b53240386c80d67fe7b8/merged" dev="overlay" ino=3412404 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
pass | 6808742 | 2022-04-27 13:47:50 | 2022-04-27 16:26:42 | 2022-04-27 17:03:48 | 0:37:06 | 0:30:26 | 0:06:40 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6808743 | 2022-04-27 13:47:51 | 2022-04-27 16:26:53 | 2022-04-27 16:49:40 | 0:22:47 | 0:17:04 | 0:05:43 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
pass | 6808744 | 2022-04-27 13:47:52 | 2022-04-27 16:26:53 | 2022-04-27 16:58:25 | 0:31:32 | 0:24:30 | 0:07:02 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
fail | 6808745 | 2022-04-27 13:47:53 | 2022-04-27 16:26:54 | 2022-04-27 16:53:47 | 0:26:53 | 0:21:03 | 0:05:50 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c31baa1a-c648-11ec-8c39-001a4aab830c -e sha1=4fa079ba14503defa8dc257d7c2d506ebefcfe6d -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 6808746 | 2022-04-27 13:47:54 | 2022-04-27 16:26:54 | 2022-04-27 16:59:01 | 0:32:07 | 0:20:10 | 0:11:57 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
pass | 6808747 | 2022-04-27 13:47:55 | 2022-04-27 14:21:27 | 2022-04-27 14:44:12 | 0:22:45 | 0:15:18 | 0:07:27 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
fail | 6808748 | 2022-04-27 13:47:56 | 2022-04-27 16:29:45 | 2022-04-27 16:51:29 | 0:21:44 | 0:14:22 | 0:07:22 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: Command failed on smithi187 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4fa079ba14503defa8dc257d7c2d506ebefcfe6d shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a358dc34-c648-11ec-8c39-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
pass | 6808749 | 2022-04-27 13:47:57 | 2022-04-27 16:30:05 | 2022-04-27 16:51:05 | 0:21:00 | 0:14:43 | 0:06:17 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
fail | 6808750 | 2022-04-27 13:47:58 | 2022-04-27 16:30:05 | 2022-04-27 17:01:03 | 0:30:58 | 0:21:22 | 0:09:36 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi022 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1e827898-c649-11ec-8c39-001a4aab830c -e sha1=4fa079ba14503defa8dc257d7c2d506ebefcfe6d -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 6808751 | 2022-04-27 13:47:59 | 2022-04-27 16:31:56 | 2022-04-27 17:16:59 | 0:45:03 | 0:38:41 | 0:06:22 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6808752 | 2022-04-27 13:48:00 | 2022-04-27 16:31:56 | 2022-04-27 17:06:58 | 0:35:02 | 0:27:52 | 0:07:10 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6808753 | 2022-04-27 13:48:01 | 2022-04-27 16:31:57 | 2022-04-27 17:01:19 | 0:29:22 | 0:19:42 | 0:09:40 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
fail | 6808754 | 2022-04-27 13:48:02 | 2022-04-27 16:31:57 | 2022-04-27 16:55:50 | 0:23:53 | 0:15:58 | 0:07:55 | smithi | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_18.04} | 4 | |
Failure Reason: Command failed (workunit test cls/test_cls_rbd.sh) on smithi188 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=nautilus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'
fail | 6808755 | 2022-04-27 13:48:03 | 2022-04-27 16:32:28 | 2022-04-27 16:56:53 | 0:24:25 | 0:12:02 | 0:12:23 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: Command failed on smithi185 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4c2a0ad0-c64a-11ec-8c39-001a4aab830c -- ceph orch daemon add osd smithi185:vg_nvme/lv_4'
fail | 6808756 | 2022-04-27 13:48:04 | 2022-04-27 16:37:39 | 2022-04-27 17:04:34 | 0:26:55 | 0:20:18 | 0:06:37 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi047 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e359ce28-c649-11ec-8c39-001a4aab830c -e sha1=4fa079ba14503defa8dc257d7c2d506ebefcfe6d -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 6808757 | 2022-04-27 13:48:05 | 2022-04-27 16:37:39 | 2022-04-27 16:52:22 | 0:14:43 | 0:06:50 | 0:07:53 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi019.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
pass | 6808758 | 2022-04-27 13:48:06 | 2022-04-27 16:38:50 | 2022-04-27 17:00:00 | 0:21:10 | 0:14:32 | 0:06:38 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
pass | 6808759 | 2022-04-27 13:48:07 | 2022-04-27 16:38:50 | 2022-04-27 17:14:56 | 0:36:06 | 0:28:34 | 0:07:32 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6808760 | 2022-04-27 13:48:08 | 2022-04-27 16:39:30 | 2022-04-27 17:04:34 | 0:25:04 | 0:17:15 | 0:07:49 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
pass | 6808761 | 2022-04-27 13:48:09 | 2022-04-27 16:41:41 | 2022-04-27 17:18:55 | 0:37:14 | 0:30:40 | 0:06:34 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6808762 | 2022-04-27 13:48:10 | 2022-04-27 16:41:51 | 2022-04-27 17:37:37 | 0:55:46 | 0:44:49 | 0:10:57 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6808763 | 2022-04-27 13:48:11 | 2022-04-27 16:46:43 | 2022-04-27 17:16:05 | 0:29:22 | 0:20:14 | 0:09:08 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
dead | 6808764 | 2022-04-27 13:48:12 | 2022-04-27 16:46:43 | 2022-04-27 23:25:55 | 6:39:12 | | | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: hit max job timeout
fail | 6808765 | 2022-04-27 13:48:13 | 2022-04-27 16:47:33 | 2022-04-27 17:10:26 | 0:22:53 | 0:16:28 | 0:06:25 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi099.front.sepia.ceph.com: ['type=AVC msg=audit(1651079266.635:17923): avc: denied { ioctl } for pid=106785 comm="iptables" path="/var/lib/containers/storage/overlay/36f1d5a79b10eea6e3fd6fd7dc7d769b46736c8e600876699d9a6fb3d6a9bde2/merged" dev="overlay" ino=3412415 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
fail | 6808766 | 2022-04-27 13:48:14 | 2022-04-27 16:47:34 | 2022-04-27 17:14:46 | 0:27:12 | 0:20:30 | 0:06:42 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi136 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 499e680a-c64b-11ec-8c39-001a4aab830c -e sha1=4fa079ba14503defa8dc257d7c2d506ebefcfe6d -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''