User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
sage | 2021-10-06 22:20:40 | 2021-10-06 22:21:58 | 2021-10-06 22:48:49 | 0:26:51 | orch:cephadm:smoke-roleless | master | smithi | eb9c0f5 | 2 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6424924 | 2021-10-06 22:21:10 | 2021-10-06 22:21:57 | 2021-10-06 22:43:10 | 0:21:13 | 0:11:20 | 0:09:53 | smithi | kernel-hwe2 | centos | 8.3 | orch:cephadm:smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/nfs-ingress 3-final} | 2 |
Failure Reason:
Command failed on smithi023 with status 1: `sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:eb9c0f59826e045e3c956160ee71fb7f29782851 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid eec4bb04-26f5-11ec-8c25-001a4aab830c -- ceph orch ls -f json`
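The failing step is the suite's post-deploy check: teuthology runs `ceph orch ls -f json` inside a `cephadm shell` container and fails the job on any nonzero exit. A minimal sketch of reassembling that invocation from the log above (image and fsid are copied verbatim; the script only prints the command rather than executing it, since running it needs the live test cluster):

```shell
#!/bin/sh
# Rebuild the cephadm shell invocation from the failure reason above.
# IMAGE and FSID are copied verbatim from the log; this sketch only
# prints the command, because executing it requires the test cluster.
IMAGE=quay.ceph.io/ceph-ci/ceph:eb9c0f59826e045e3c956160ee71fb7f29782851
FSID=eec4bb04-26f5-11ec-8c25-001a4aab830c

CMD="sudo /home/ubuntu/cephtest/cephadm --image $IMAGE \
shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
--fsid $FSID -- ceph orch ls -f json"

echo "$CMD"
```

Status 1 here comes from the `ceph orch ls` invocation itself, not from cephadm failing to start the container.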
pass | 6424925 | 2021-10-06 22:21:11 | 2021-10-06 22:21:57 | 2021-10-06 22:48:49 | 0:26:52 | 0:21:38 | 0:05:14 | smithi | kernel-hwe2 | rhel | 8.4 | orch:cephadm:smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 1-start 2-services/nfs-ingress2 3-final} | 2 |
pass | 6424926 | 2021-10-06 22:21:12 | 2021-10-06 22:21:58 | 2021-10-06 22:45:01 | 0:23:03 | 0:17:24 | 0:05:39 | smithi | kernel-hwe2 | rhel | 8.4 | orch:cephadm:smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 1-start 2-services/nfs 3-final} | 2 |
fail | 6424927 | 2021-10-06 22:21:13 | 2021-10-06 22:21:58 | 2021-10-06 22:31:15 | 0:09:17 | 0:03:20 | 0:05:57 | smithi | kernel-hwe2 | ubuntu | 20.04 | orch:cephadm:smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs2 3-final} | 2 |
Failure Reason:
Command failed on smithi053 with status 251: `sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn`
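The command that failed with status 251 is the configfs sequence teuthology uses to expose a logical volume as an NVMe-over-Fabrics loopback target during node setup. The same steps, broken out with comments; the `run` helper only prints each command, so this sketch can be dry-run without root or the nvmet kernel module (paths are copied from the log):

```shell
#!/bin/sh
# The nvmet loopback setup from the failure reason above, step by step.
# run() only prints, so the sketch is safe to execute anywhere.
SUBSYS=/sys/kernel/config/nvmet/subsystems/lv_1
run() { echo "+ $*"; }

run "mkdir -p $SUBSYS"                                 # create target subsystem lv_1
run "echo 1 > $SUBSYS/attr_allow_any_host"             # let any host NQN connect
run "mkdir -p $SUBSYS/namespaces/1"                    # add namespace 1
run "echo /dev/vg_nvme/lv_1 > $SUBSYS/namespaces/1/device_path"  # back it with the LV
run "echo 1 > $SUBSYS/namespaces/1/enable"             # enable the namespace
run "ln -s $SUBSYS /sys/kernel/config/nvmet/ports/1/subsystems/lv_1"  # expose on port 1
run "nvme connect -t loop -n lv_1 -q hostnqn"          # attach via the loop transport
```

Because the original is a single `&&` chain, the reported status 251 belongs to the first step that failed, not necessarily to the final `nvme connect`; the log does not identify which step that was.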