User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
gabrioux | 2021-11-19 08:00:30 | 2021-11-19 08:00:47 | 2021-11-19 20:47:09 | 12:46:22 | rados:cephadm | wip-guits-testing-2021-11-18-1333-pacific | smithi | e77c7de | 57 | 23 | 10 |
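For quick triage, the Pass/Fail/Dead tallies in the summary row reduce to a pass rate; a minimal sketch using the counts copied from the row above:

```python
# Tallies copied from the run-summary row above (Pass / Fail / Dead).
passed, failed, dead = 57, 23, 10

total = passed + failed + dead
pass_rate = passed / total
print(f"{total} jobs, {pass_rate:.0%} passed")  # 90 jobs, 63% passed
```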
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6513260 | 2021-11-19 08:00:38 | 2021-11-19 08:00:47 | 2021-11-19 08:24:10 | 0:23:23 | 0:17:44 | 0:05:39 | smithi | master | rhel | 8.4 | rados:cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
fail | 6513261 | 2021-11-19 08:00:39 | 2021-11-19 08:00:47 | 2021-11-19 08:21:37 | 0:20:50 | 0:09:00 | 0:11:50 | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason:
Command failed on smithi018 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e77c7dee6c987c6680b57de9907bbc4d4962f2b1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e7b59b84-4910-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi018:vg_nvme/lv_4'
pass | 6513262 | 2021-11-19 08:00:40 | 2021-11-19 08:01:48 | 2021-11-19 08:47:37 | 0:45:49 | 0:34:49 | 0:11:00 | smithi | master | centos | 8.3 | rados:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
pass | 6513263 | 2021-11-19 08:00:42 | 2021-11-19 08:02:08 | 2021-11-19 08:43:42 | 0:41:34 | 0:32:41 | 0:08:53 | smithi | master | centos | 8.2 | rados:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} | 2 | |
fail | 6513265 | 2021-11-19 08:00:43 | 2021-11-19 08:02:29 | 2021-11-19 08:34:07 | 0:31:38 | 0:21:15 | 0:10:23 | smithi | master | centos | 8.3 | rados:cephadm/dashboard/{0-distro/centos_8.3_container_tools_3.0 task/test_e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi029 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e77c7dee6c987c6680b57de9907bbc4d4962f2b1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
dead | 6513267 | 2021-11-19 08:00:44 | 2021-11-19 08:02:39 | 2021-11-19 20:12:22 | 12:09:43 | | | smithi | master | centos | 8.stream | rados:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
hit max job timeout
fail | 6513270 | 2021-11-19 08:00:45 | 2021-11-19 08:03:30 | 2021-11-19 08:21:07 | 0:17:37 | 0:06:26 | 0:11:11 | smithi | master | centos | 8.stream | rados:cephadm/mgr-nfs-upgrade/{0-centos_8.3_container_tools_3.0 0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Command failed on smithi146 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
pass | 6513272 | 2021-11-19 08:00:46 | 2021-11-19 08:03:40 | 2021-11-19 08:27:17 | 0:23:37 | 0:14:19 | 0:09:18 | smithi | master | centos | 8.2 | rados:cephadm/orchestrator_cli/{0-random-distro$/{centos_8.2_container_tools_3.0} 2-node-mgr orchestrator_cli} | 2 | |
pass | 6513274 | 2021-11-19 08:00:47 | 2021-11-19 08:29:27 | 888 | smithi | master | centos | 8.2 | rados:cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | ||||
fail | 6513276 | 2021-11-19 08:00:48 | 2021-11-19 08:04:11 | 2021-11-19 08:21:49 | 0:17:38 | 0:06:44 | 0:10:54 | smithi | master | ubuntu | 18.04 | rados:cephadm/smoke-singlehost/{0-distro$/{ubuntu_18.04} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason:
Command failed on smithi140 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e77c7dee6c987c6680b57de9907bbc4d4962f2b1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 077c918e-4911-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi140:vg_nvme/lv_4'
pass | 6513279 | 2021-11-19 08:00:49 | 2021-11-19 08:04:11 | 2021-11-19 08:45:00 | 0:40:49 | 0:29:42 | 0:11:07 | smithi | master | ubuntu | 20.04 | rados:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 | |
pass | 6513281 | 2021-11-19 08:00:50 | 2021-11-19 08:04:12 | 2021-11-19 08:28:39 | 0:24:27 | 0:14:23 | 0:10:04 | smithi | master | ubuntu | 20.04 | rados:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
fail | 6513283 | 2021-11-19 08:00:51 | 2021-11-19 08:04:42 | 2021-11-19 08:25:19 | 0:20:37 | 0:09:09 | 0:11:28 | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason:
Command failed on smithi038 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e77c7dee6c987c6680b57de9907bbc4d4962f2b1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 66cc7744-4911-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi038:vg_nvme/lv_4'
pass | 6513286 | 2021-11-19 08:00:53 | 2021-11-19 08:05:12 | 2021-11-19 08:30:27 | 0:25:15 | 0:18:07 | 0:07:08 | smithi | master | rhel | 8.4 | rados:cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 6513288 | 2021-11-19 08:00:54 | 2021-11-19 08:05:13 | 2021-11-19 08:22:34 | 0:17:21 | 0:08:12 | 0:09:09 | smithi | master | centos | 8.stream | rados:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos} | 1 | |
pass | 6513290 | 2021-11-19 08:00:55 | 2021-11-19 08:05:23 | 2021-11-19 08:31:03 | 0:25:40 | 0:15:11 | 0:10:29 | smithi | master | centos | 8.3 | rados:cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
pass | 6513293 | 2021-11-19 08:00:56 | 2021-11-19 08:06:03 | 2021-11-19 08:29:33 | 0:23:30 | 0:14:01 | 0:09:29 | smithi | master | centos | 8.2 | rados:cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
fail | 6513295 | 2021-11-19 08:00:57 | 2021-11-19 08:06:04 | 2021-11-19 08:27:41 | 0:21:37 | 0:09:11 | 0:12:26 | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason:
Command failed on smithi036 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e77c7dee6c987c6680b57de9907bbc4d4962f2b1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b7baafe0-4911-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi036:vg_nvme/lv_4'
pass | 6513297 | 2021-11-19 08:00:58 | 2021-11-19 08:07:14 | 2021-11-19 08:42:42 | 0:35:28 | 0:23:02 | 0:12:26 | smithi | master | centos | 8.stream | rados:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
fail | 6513299 | 2021-11-19 08:00:59 | 2021-11-19 08:09:15 | 2021-11-19 08:24:35 | 0:15:20 | 0:03:21 | 0:11:59 | smithi | master | ubuntu | 18.04 | rados:cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
Command failed on smithi105 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
pass | 6513302 | 2021-11-19 08:01:00 | 2021-11-19 08:10:55 | 2021-11-19 08:34:57 | 0:24:02 | 0:14:08 | 0:09:54 | smithi | master | centos | 8.stream | rados:cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} | 2 | |
fail | 6513304 | 2021-11-19 08:01:02 | 2021-11-19 08:11:16 | 2021-11-19 08:31:51 | 0:20:35 | 0:09:23 | 0:11:12 | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason:
Command failed on smithi043 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e77c7dee6c987c6680b57de9907bbc4d4962f2b1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6176fd54-4912-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi043:vg_nvme/lv_4'
pass | 6513306 | 2021-11-19 08:01:03 | 2021-11-19 08:12:06 | 2021-11-19 08:38:12 | 0:26:06 | 0:14:39 | 0:11:27 | smithi | master | centos | 8.3 | rados:cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} | 2 | |
pass | 6513309 | 2021-11-19 08:01:04 | 2021-11-19 08:12:27 | 2021-11-19 08:57:53 | 0:45:26 | 0:35:27 | 0:09:59 | smithi | master | centos | 8.stream | rados:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} | 2 | |
pass | 6513310 | 2021-11-19 08:01:05 | 2021-11-19 08:12:47 | 2021-11-19 08:45:29 | 0:32:42 | 0:22:26 | 0:10:16 | smithi | master | centos | 8.2 | rados:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_nfs} | 1 | |
fail | 6513311 | 2021-11-19 08:01:06 | 2021-11-19 08:14:17 | 2021-11-19 08:33:59 | 0:19:42 | 0:09:27 | 0:10:15 | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/radosbench} | 3 | |
Failure Reason:
Command failed on smithi078 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e77c7dee6c987c6680b57de9907bbc4d4962f2b1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b04b1faa-4912-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi078:vg_nvme/lv_4'
pass | 6513312 | 2021-11-19 08:01:07 | 2021-11-19 08:14:18 | 2021-11-19 08:40:44 | 0:26:26 | 0:19:06 | 0:07:20 | smithi | master | rhel | 8.4 | rados:cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
pass | 6513313 | 2021-11-19 08:01:08 | 2021-11-19 08:15:18 | 2021-11-19 08:39:25 | 0:24:07 | 0:13:59 | 0:10:08 | smithi | master | ubuntu | 20.04 | rados:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 6513314 | 2021-11-19 08:01:09 | 2021-11-19 08:15:29 | 2021-11-19 09:09:05 | 0:53:36 | 0:46:53 | 0:06:43 | smithi | master | rhel | 8.4 | rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
pass | 6513315 | 2021-11-19 08:01:11 | 2021-11-19 08:15:39 | 2021-11-19 08:41:11 | 0:25:32 | 0:15:21 | 0:10:11 | smithi | master | centos | 8.stream | rados:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
dead | 6513316 | 2021-11-19 08:01:12 | 2021-11-19 08:15:59 | 2021-11-19 20:25:03 | 12:09:04 | | | smithi | master | centos | 8.stream | rados:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
hit max job timeout
pass | 6513317 | 2021-11-19 08:01:13 | 2021-11-19 08:16:10 | 2021-11-19 08:56:15 | 0:40:05 | 0:28:34 | 0:11:31 | smithi | master | ubuntu | 20.04 | rados:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |
fail | 6513318 | 2021-11-19 08:01:14 | 2021-11-19 08:16:40 | 2021-11-19 08:37:26 | 0:20:46 | 0:09:43 | 0:11:03 | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason:
Command failed on smithi027 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e77c7dee6c987c6680b57de9907bbc4d4962f2b1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2eedf120-4913-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi027:vg_nvme/lv_4'
pass | 6513319 | 2021-11-19 08:01:15 | 2021-11-19 08:17:41 | 2021-11-19 08:45:31 | 0:27:50 | 0:20:09 | 0:07:41 | smithi | master | rhel | 8.4 | rados:cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
pass | 6513320 | 2021-11-19 08:01:16 | 2021-11-19 08:18:21 | 2021-11-19 08:47:47 | 0:29:26 | 0:19:50 | 0:09:36 | smithi | master | rhel | 8.4 | rados:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
fail | 6513321 | 2021-11-19 08:01:17 | 2021-11-19 08:20:22 | 2021-11-19 08:41:23 | 0:21:01 | 0:09:40 | 0:11:21 | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason:
Command failed on smithi018 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e77c7dee6c987c6680b57de9907bbc4d4962f2b1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ba96230a-4913-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi018:vg_nvme/lv_4'
pass | 6513322 | 2021-11-19 08:01:18 | 2021-11-19 08:21:42 | 2021-11-19 08:44:39 | 0:22:57 | 0:13:08 | 0:09:49 | smithi | master | centos | 8.2 | rados:cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 6513323 | 2021-11-19 08:01:19 | 2021-11-19 08:21:42 | 2021-11-19 09:05:46 | 0:44:04 | 0:34:28 | 0:09:36 | smithi | master | centos | 8.2 | rados:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
fail | 6513324 | 2021-11-19 08:01:20 | 2021-11-19 08:22:03 | 2021-11-19 08:39:50 | 0:17:47 | 0:06:28 | 0:11:19 | smithi | master | centos | 8.stream | rados:cephadm/mgr-nfs-upgrade/{0-centos_8.3_container_tools_3.0 0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Command failed on smithi103 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
pass | 6513325 | 2021-11-19 08:01:21 | 2021-11-19 08:22:23 | 2021-11-19 08:45:41 | 0:23:18 | 0:14:02 | 0:09:16 | smithi | master | centos | 8.stream | rados:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} | 1 | |
fail | 6513326 | 2021-11-19 08:01:23 | 2021-11-19 08:22:23 | 2021-11-19 08:37:24 | 0:15:01 | 0:03:22 | 0:11:39 | smithi | master | ubuntu | 18.04 | rados:cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi117 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
pass | 6513327 | 2021-11-19 08:01:24 | 2021-11-19 08:23:44 | 2021-11-19 09:05:21 | 0:41:37 | 0:34:50 | 0:06:47 | smithi | master | rhel | 8.4 | rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
dead | 6513328 | 2021-11-19 08:01:25 | 2021-11-19 08:23:54 | 2021-11-19 08:42:45 | 0:18:51 | | | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason:
SSH connection to smithi184 was lost: 'uname -r'
pass | 6513329 | 2021-11-19 08:01:26 | 2021-11-19 08:24:15 | 2021-11-19 08:49:52 | 0:25:37 | 0:17:52 | 0:07:45 | smithi | master | rhel | 8.4 | rados:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
pass | 6513330 | 2021-11-19 08:01:27 | 2021-11-19 08:24:25 | 2021-11-19 08:47:23 | 0:22:58 | 0:13:34 | 0:09:24 | smithi | master | centos | 8.3 | rados:cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
pass | 6513331 | 2021-11-19 08:01:28 | 2021-11-19 08:24:25 | 2021-11-19 08:52:34 | 0:28:09 | 0:17:41 | 0:10:28 | smithi | master | ubuntu | 20.04 | rados:cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
fail | 6513332 | 2021-11-19 08:01:29 | 2021-11-19 08:24:46 | 2021-11-19 08:45:05 | 0:20:19 | 0:08:57 | 0:11:22 | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason:
Command failed on smithi084 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e77c7dee6c987c6680b57de9907bbc4d4962f2b1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3235e1ac-4914-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi084:vg_nvme/lv_4'
fail | 6513333 | 2021-11-19 08:01:30 | 2021-11-19 08:25:16 | 2021-11-19 08:39:17 | 0:14:01 | 0:03:24 | 0:10:37 | smithi | master | ubuntu | 18.04 | rados:cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason:
Command failed on smithi163 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 6513334 | 2021-11-19 08:01:31 | 2021-11-19 08:25:27 | 2021-11-19 08:46:48 | 0:21:21 | 0:09:15 | 0:12:06 | smithi | master | ubuntu | 18.04 | rados:cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi050 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e77c7dee6c987c6680b57de9907bbc4d4962f2b1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8bf7b616-4914-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi050:vg_nvme/lv_4'
pass | 6513335 | 2021-11-19 08:01:32 | 2021-11-19 08:27:07 | 2021-11-19 09:28:32 | 1:01:25 | 0:52:06 | 0:09:19 | smithi | master | centos | 8.stream | rados:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
dead | 6513336 | 2021-11-19 08:01:33 | 2021-11-19 08:27:17 | 2021-11-19 20:36:32 | 12:09:15 | | | smithi | master | centos | 8.stream | rados:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
hit max job timeout
pass | 6513337 | 2021-11-19 08:01:34 | 2021-11-19 08:27:48 | 2021-11-19 08:53:04 | 0:25:16 | 0:15:19 | 0:09:57 | smithi | master | centos | 8.2 | rados:cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
pass | 6513338 | 2021-11-19 08:01:35 | 2021-11-19 08:27:48 | 2021-11-19 08:50:56 | 0:23:08 | 0:17:31 | 0:05:37 | smithi | master | rhel | 8.4 | rados:cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_rhel8} 1-start 2-services/rgw 3-final} | 1 | |
pass | 6513339 | 2021-11-19 08:01:36 | 2021-11-19 08:27:58 | 2021-11-19 09:04:15 | 0:36:17 | 0:25:53 | 0:10:24 | smithi | master | centos | 8.stream | rados:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 | |
pass | 6513340 | 2021-11-19 08:01:37 | 2021-11-19 08:28:19 | 2021-11-19 08:49:49 | 0:21:30 | 0:11:57 | 0:09:33 | smithi | master | centos | 8.stream | rados:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} | 1 | |
dead | 6513341 | 2021-11-19 08:01:38 | 2021-11-19 08:28:19 | 2021-11-19 08:48:58 | 0:20:39 | | | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} | 3 | |
Failure Reason:
SSH connection to smithi196 was lost: 'uname -r'
pass | 6513342 | 2021-11-19 08:01:39 | 2021-11-19 08:28:40 | 2021-11-19 08:54:42 | 0:26:02 | 0:12:57 | 0:13:05 | smithi | master | centos | 8.stream | rados:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 6513343 | 2021-11-19 08:01:41 | 2021-11-19 08:29:30 | 2021-11-19 08:57:53 | 0:28:23 | 0:18:02 | 0:10:21 | smithi | master | ubuntu | 20.04 | rados:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
fail | 6513344 | 2021-11-19 08:01:42 | 2021-11-19 08:29:40 | 2021-11-19 08:50:02 | 0:20:22 | 0:09:14 | 0:11:08 | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason:
Command failed on smithi033 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e77c7dee6c987c6680b57de9907bbc4d4962f2b1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d4bc316a-4914-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi033:vg_nvme/lv_4'
fail | 6513345 | 2021-11-19 08:01:43 | 2021-11-19 08:30:21 | 2021-11-19 08:49:42 | 0:19:21 | 0:09:16 | 0:10:05 | smithi | master | centos | 8.3 | rados:cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi170 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e77c7dee6c987c6680b57de9907bbc4d4962f2b1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 025b4b60-4915-11ec-8c2c-001a4aab830c -- ceph mon dump -f json'
pass | 6513346 | 2021-11-19 08:01:44 | 2021-11-19 08:30:31 | 2021-11-19 09:06:58 | 0:36:27 | 0:26:07 | 0:10:20 | smithi | master | ubuntu | 20.04 | rados:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
pass | 6513347 | 2021-11-19 08:01:45 | 2021-11-19 08:31:12 | 2021-11-19 08:55:05 | 0:23:53 | 0:17:25 | 0:06:28 | smithi | master | rhel | 8.4 | rados:cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
fail | 6513348 | 2021-11-19 08:01:46 | 2021-11-19 08:32:02 | 2021-11-19 08:53:04 | 0:21:02 | 0:09:33 | 0:11:29 | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason:
Command failed on smithi043 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e77c7dee6c987c6680b57de9907bbc4d4962f2b1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4dcae3c6-4915-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi043:vg_nvme/lv_4'
pass | 6513349 | 2021-11-19 08:01:47 | 2021-11-19 08:33:23 | 2021-11-19 08:59:07 | 0:25:44 | 0:15:01 | 0:10:43 | smithi | master | centos | 8.2 | rados:cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
pass | 6513350 | 2021-11-19 08:01:48 | 2021-11-19 08:59:43 | 918 | smithi | master | centos | 8.stream | rados:cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} | 2 | ||||
pass | 6513351 | 2021-11-19 08:01:49 | 2021-11-19 08:34:13 | 2021-11-19 08:59:08 | 0:24:55 | 0:16:40 | 0:08:15 | smithi | master | centos | 8.2 | rados:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_cephadm} | 1 | |
pass | 6513352 | 2021-11-19 08:01:50 | 2021-11-19 08:34:14 | 2021-11-19 09:15:27 | 0:41:13 | 0:31:20 | 0:09:53 | smithi | master | centos | 8.2 | rados:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
fail | 6513353 | 2021-11-19 08:01:51 | 2021-11-19 08:34:14 | 2021-11-19 08:52:13 | 0:17:59 | 0:06:56 | 0:11:03 | smithi | master | centos | 8.stream | rados:cephadm/mgr-nfs-upgrade/{0-centos_8.3_container_tools_3.0 0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Command failed on smithi063 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
dead | 6513354 | 2021-11-19 08:01:52 | 2021-11-19 08:34:54 | 2021-11-19 08:55:43 | 0:20:49 | | | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason:
SSH connection to smithi157 was lost: 'uname -r'
pass | 6513355 | 2021-11-19 08:01:53 | 2021-11-19 08:35:05 | 2021-11-19 08:58:27 | 0:23:22 | 0:13:33 | 0:09:49 | smithi | master | centos | 8.3 | rados:cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
pass | 6513356 | 2021-11-19 08:01:54 | 2021-11-19 08:35:15 | 2021-11-19 09:01:16 | 0:26:01 | 0:17:49 | 0:08:12 | smithi | master | rhel | 8.4 | rados:cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
pass | 6513357 | 2021-11-19 08:01:55 | 2021-11-19 09:02:42 | 1145 | smithi | master | rhel | 8.4 | rados:cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | ||||
dead | 6513358 | 2021-11-19 08:01:57 | 2021-11-19 08:37:26 | 2021-11-19 08:57:45 | 0:20:19 | | | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason:
SSH connection to smithi073 was lost: 'uname -r'
pass | 6513359 | 2021-11-19 08:01:58 | 2021-11-19 08:37:27 | 2021-11-19 09:22:42 | 0:45:15 | 0:35:04 | 0:10:11 | smithi | master | centos | 8.2 | rados:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
dead | 6513360 | 2021-11-19 08:01:59 | 2021-11-19 08:37:37 | 2021-11-19 20:47:09 | 12:09:32 | | | smithi | master | centos | 8.stream | rados:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: hit max job timeout
pass | 6513361 | 2021-11-19 08:02:00 | 2021-11-19 08:38:17 | 2021-11-19 09:19:20 | 0:41:03 | 0:29:06 | 0:11:57 | smithi | master | ubuntu | 20.04 | rados:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |
pass | 6513362 | 2021-11-19 08:02:01 | 2021-11-19 08:39:18 | 2021-11-19 09:04:54 | 0:25:36 | 0:19:10 | 0:06:26 | smithi | master | rhel | 8.4 | rados:cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} | 2 | |
pass | 6513363 | 2021-11-19 08:02:02 | 2021-11-19 08:39:28 | 2021-11-19 09:03:17 | 0:23:49 | 0:12:54 | 0:10:55 | smithi | master | centos | 8.stream | rados:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
fail | 6513364 | 2021-11-19 08:02:03 | 2021-11-19 08:39:59 | 2021-11-19 09:00:10 | 0:20:11 | 0:09:55 | 0:10:16 | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: Command failed on smithi039 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e77c7dee6c987c6680b57de9907bbc4d4962f2b1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6323c5ca-4916-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi039:vg_nvme/lv_4'
pass | 6513365 | 2021-11-19 08:02:04 | 2021-11-19 08:40:09 | 2021-11-19 08:58:15 | 0:18:06 | 0:08:06 | 0:10:00 | smithi | master | centos | 8.stream | rados:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} | 1 | |
dead | 6513366 | 2021-11-19 08:02:05 | 2021-11-19 08:40:49 | 2021-11-19 09:01:37 | 0:20:48 | smithi | master | ubuntu | 18.04 | rados:cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |||
Failure Reason: SSH connection to smithi123 was lost: 'uname -r'
pass | 6513367 | 2021-11-19 08:02:06 | 2021-11-19 08:41:20 | 2021-11-19 09:30:43 | 0:49:23 | 0:38:43 | 0:10:40 | smithi | master | centos | 8.stream | rados:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
dead | 6513368 | 2021-11-19 08:02:07 | 2021-11-19 08:41:30 | 2021-11-19 09:01:13 | 0:19:43 | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls} | 3 | |||
Failure Reason: SSH connection to smithi184 was lost: 'uname -r'
fail | 6513369 | 2021-11-19 08:02:08 | 2021-11-19 08:42:51 | 2021-11-19 08:56:32 | 0:13:41 | 0:03:22 | 0:10:19 | smithi | master | ubuntu | 18.04 | rados:cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi008 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
pass | 6513370 | 2021-11-19 08:02:09 | 2021-11-19 08:42:51 | 2021-11-19 09:16:06 | 0:33:15 | 0:24:04 | 0:09:11 | smithi | master | centos | 8.3 | rados:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
pass | 6513371 | 2021-11-19 08:02:10 | 2021-11-19 08:42:51 | 2021-11-19 09:09:07 | 0:26:16 | 0:18:05 | 0:08:11 | smithi | master | rhel | 8.4 | rados:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
pass | 6513372 | 2021-11-19 08:02:11 | 2021-11-19 08:43:52 | 2021-11-19 09:11:45 | 0:27:53 | 0:14:42 | 0:13:11 | smithi | master | ubuntu | 20.04 | rados:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
fail | 6513373 | 2021-11-19 08:02:13 | 2021-11-19 08:44:42 | 2021-11-19 09:05:02 | 0:20:20 | 0:09:21 | 0:10:59 | smithi | master | ubuntu | 18.04 | rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason: Command failed on smithi084 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e77c7dee6c987c6680b57de9907bbc4d4962f2b1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 05e878c8-4917-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi084:vg_nvme/lv_4'
pass | 6513374 | 2021-11-19 08:02:14 | 2021-11-19 08:45:13 | 2021-11-19 09:13:56 | 0:28:43 | 0:18:20 | 0:10:23 | smithi | master | ubuntu | 20.04 | rados:cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
pass | 6513375 | 2021-11-19 08:02:15 | 2021-11-19 08:45:13 | 2021-11-19 09:10:44 | 0:25:31 | 0:19:00 | 0:06:31 | smithi | master | rhel | 8.4 | rados:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 |