User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
benhanokh | 2022-01-27 10:00:31 | 2022-01-27 10:04:21 | 2022-01-27 12:33:21 | 2:29:00 | rados | WIP_GBH_NCB_new_alloc_map_A6 | smithi | b958480 | 3 | 6 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6643312 | 2022-01-27 10:01:30 | 2022-01-27 10:04:21 | 2022-01-27 12:33:21 | 2:29:00 | 2:20:17 | 0:08:43 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/rados_cls_all validater/valgrind} | 2 |
pass | 6643313 | 2022-01-27 10:01:31 | 2022-01-27 10:04:21 | 2022-01-27 10:40:54 | 0:36:33 | 0:25:42 | 0:10:51 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/rados_api_tests} | 2 |
pass | 6643314 | 2022-01-27 10:01:32 | 2022-01-27 10:05:22 | 2022-01-27 10:30:26 | 0:25:04 | 0:18:34 | 0:06:30 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 |
fail | 6643315 | 2022-01-27 10:01:33 | 2022-01-27 10:05:22 | 2022-01-27 10:20:47 | 0:15:25 | 0:05:11 | 0:10:14 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/flannel rook/1.7.2} | 1 |
Failure Reason:
Command failed on smithi133 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
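The command that fails in each of these rook/smoke jobs is teuthology's nvme-loop setup: it builds an NVMe-over-Fabrics target in configfs backed by an LVM volume, exposes it on a port, and connects the local initiator over the loop transport. A dry-run sketch of those steps, with the names (`lv_1`, `/dev/vg_nvme/lv_1`, port `1`) taken from the log; the port directory itself is assumed to have been created earlier in the task:

```shell
#!/bin/sh
# Dry-run sketch: print, rather than execute, the configfs steps from the
# failed command above. The printed commands would need to run as root.
CFG=/sys/kernel/config/nvmet

nvmet_loop_cmds() {
    subsys="$1"   # NVMe subsystem name, e.g. lv_1
    dev="$2"      # backing block device, e.g. /dev/vg_nvme/lv_1
    cat <<EOF
# create the subsystem and let any host NQN connect to it
mkdir -p $CFG/subsystems/$subsys
echo 1 > $CFG/subsystems/$subsys/attr_allow_any_host
# namespace 1, backed by the LVM volume
mkdir -p $CFG/subsystems/$subsys/namespaces/1
echo -n $dev > $CFG/subsystems/$subsys/namespaces/1/device_path
echo 1 > $CFG/subsystems/$subsys/namespaces/1/enable
# expose the subsystem on port 1 (assumed already configured as a loop port)
ln -s $CFG/subsystems/$subsys $CFG/ports/1/subsystems/$subsys
# connect the local initiator over the loop transport
nvme connect -t loop -n $subsys -q hostnqn
EOF
}

nvmet_loop_cmds lv_1 /dev/vg_nvme/lv_1
```

Since the original is a single `&&` chain, status 251 is the exit code of whichever step failed first; the log alone does not say which one.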
fail | 6643316 | 2022-01-27 10:01:34 | 2022-01-27 10:05:22 | 2022-01-27 10:29:32 | 0:24:10 | 0:13:07 | 0:11:03 | smithi | master | | | rados/cephadm/workunits/{agent/on mon_election/classic task/test_orch_cli} | 1 |
Failure Reason:
Test failure: test_cephfs_mirror (tasks.cephadm_cases.test_cli.TestCephadmCLI)
fail | 6643317 | 2022-01-27 10:01:35 | 2022-01-27 10:05:52 | 2022-01-27 10:23:25 | 0:17:33 | 0:06:42 | 0:10:51 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/host rook/master} | 3 |
Failure Reason:
Command failed on smithi006 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 6643318 | 2022-01-27 10:01:36 | 2022-01-27 10:05:53 | 2022-01-27 10:24:07 | 0:18:14 | 0:06:27 | 0:11:47 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.2} | 3 |
Failure Reason:
Command failed on smithi118 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 6643319 | 2022-01-27 10:01:37 | 2022-01-27 10:05:53 | 2022-01-27 10:27:08 | 0:21:15 | 0:10:55 | 0:10:20 | smithi | master | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream} tasks/scrub_test} | 2 |
Failure Reason:
"2022-01-27T10:25:04.637081+0000 osd.6 (osd.6) 43 : cluster [ERR] 2.7 shard 6 soid 2:e00ba09a:::benchmark_data_smithi084_41139_object1771:head : " in cluster log
fail | 6643320 | 2022-01-27 10:01:38 | 2022-01-27 10:06:03 | 2022-01-27 10:24:44 | 0:18:41 | 0:05:11 | 0:13:30 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/1-node k8s/1.21 net/flannel rook/master} | 1 |
Failure Reason:
Command failed on smithi196 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'