Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 6600945 2022-01-07 12:27:04 2022-01-07 12:28:37 2022-01-07 12:50:46 0:22:09 0:11:16 0:10:53 smithi master centos 8.stream rados:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
fail 6600946 2022-01-07 12:27:05 2022-01-07 12:29:28 2022-01-10 09:31:08 2 days, 21:01:40 2 days, 19:06:35 1:55:05 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi104 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e6d739d3acbd78e5fb4ee5f072efc21d7c0af8c0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 335a7786-6fc6-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi104:vg_nvme/lv_4'

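Every failed (as opposed to dead) thrash-old-clients job in this run shares the signature above: 'ceph orch daemon add osd <host>:vg_nvme/lv_4' exiting with status 22, i.e. EINVAL from the ceph CLI. A minimal diagnostic sketch (editorial, not taken from the run; the wrapper path and hostname are copied from the failure, and the image/fsid flags the run passes explicitly are omitted here):

    # Ask the orchestrator what devices it still sees on the failing host.
    sudo /home/ubuntu/cephtest/cephadm shell -- ceph orch device ls smithi104 --refresh
    # Exit 22 from 'ceph orch daemon add osd' means the argument was rejected
    # (EINVAL), e.g. vg_nvme/lv_4 missing or already consumed on that host.
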
pass 6600947 2022-01-07 12:27:06 2022-01-07 12:29:48 2022-01-07 13:15:04 0:45:16 0:35:01 0:10:15 smithi master centos 8.3 rados:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
pass 6600948 2022-01-07 12:27:07 2022-01-07 12:29:48 2022-01-07 12:53:33 0:23:45 0:13:29 0:10:16 smithi master centos 8.2 rados:cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
pass 6600949 2022-01-07 12:27:08 2022-01-07 12:30:09 2022-01-07 13:12:11 0:42:02 0:30:47 0:11:15 smithi master centos 8.2 rados:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
fail 6600950 2022-01-07 12:27:09 2022-01-07 12:31:09 2022-01-07 13:00:32 0:29:23 0:19:37 0:09:46 smithi master centos 8.3 rados:cephadm/dashboard/{0-distro/centos_8.3_container_tools_3.0 task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi003 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6d739d3acbd78e5fb4ee5f072efc21d7c0af8c0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

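For orientation, everything before the script path in this failure is generic teuthology workunit scaffolding; exit status 1 belongs to qa/workunits/cephadm/test_dashboard_e2e.sh itself. An editorial breakdown of the key pieces, with values copied from the log:

    CEPH_REF=e6d739d3acbd78e5fb4ee5f072efc21d7c0af8c0   # ceph-ci commit under test
    TESTDIR=/home/ubuntu/cephtest                       # per-run scratch area on the node
    CEPH_CLI_TEST_DUP_COMMAND=1                         # qa flag exercising duplicated CLI commands
    adjust-ulimits ceph-coverage "$TESTDIR/archive/coverage" \
      timeout 3h "$TESTDIR/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh"
    # adjust-ulimits and ceph-coverage are teuthology wrappers; 'timeout 3h'
    # bounds the script well below the overall job timeout.
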
pass 6600951 2022-01-07 12:27:10 2022-01-07 12:31:19 2022-01-07 13:08:55 0:37:36 0:28:00 0:09:36 smithi master centos 8.stream rados:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
dead 6600952 2022-01-07 12:27:11 2022-01-07 12:31:50 2022-01-08 00:43:56 12:12:06 smithi master centos 8.3 rados:cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

pass 6600953 2022-01-07 12:27:12 2022-01-07 12:32:00 2022-01-07 12:58:47 0:26:47 0:15:05 0:11:42 smithi master centos 8.2 rados:cephadm/orchestrator_cli/{0-random-distro$/{centos_8.2_container_tools_3.0} 2-node-mgr orchestrator_cli} 2
pass 6600954 2022-01-07 12:27:13 2022-01-07 12:33:51 2022-01-07 12:57:15 0:23:24 0:14:48 0:08:36 smithi master centos 8.2 rados:cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} 2
pass 6600955 2022-01-07 12:27:14 2022-01-07 12:34:11 2022-01-07 12:55:43 0:21:32 0:12:11 0:09:21 smithi master centos 8.2 rados:cephadm/smoke-singlehost/{0-distro$/{centos_8.2_container_tools_3.0} 1-start 2-services/basic 3-final} 1
pass 6600956 2022-01-07 12:27:15 2022-01-07 12:34:32 2022-01-07 13:14:15 0:39:43 0:30:05 0:09:38 smithi master ubuntu 20.04 rados:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} 2
fail 6600957 2022-01-07 12:27:16 2022-01-07 12:34:32 2022-01-07 12:55:21 0:20:49 0:08:51 0:11:58 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi098 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e6d739d3acbd78e5fb4ee5f072efc21d7c0af8c0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 43319a94-6fb8-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi098:vg_nvme/lv_4'

pass 6600958 2022-01-07 12:27:17 2022-01-07 12:35:12 2022-01-07 12:50:21 0:15:09 0:06:18 0:08:51 smithi master centos 8.stream rados:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos} 1
pass 6600959 2022-01-07 12:27:18 2022-01-07 12:35:13 2022-01-07 12:58:41 0:23:28 0:14:17 0:09:11 smithi master centos 8.3 rados:cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} 2
pass 6600960 2022-01-07 12:27:19 2022-01-07 12:35:53 2022-01-07 12:59:57 0:24:04 0:17:41 0:06:23 smithi master rhel 8.4 rados:cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 6600961 2022-01-07 12:27:20 2022-01-07 12:36:44 2022-01-07 13:03:17 0:26:33 0:16:06 0:10:27 smithi master centos 8.3 rados:cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
fail 6600962 2022-01-07 12:27:21 2022-01-07 12:38:14 2022-01-07 12:58:55 0:20:41 0:08:52 0:11:49 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi006 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e6d739d3acbd78e5fb4ee5f072efc21d7c0af8c0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c25080b0-6fb8-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi006:vg_nvme/lv_4'

pass 6600963 2022-01-07 12:27:22 2022-01-07 12:39:25 2022-01-07 13:10:55 0:31:30 0:21:17 0:10:13 smithi master centos 8.stream rados:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
pass 6600964 2022-01-07 12:27:23 2022-01-07 12:39:25 2022-01-07 13:03:14 0:23:49 0:12:56 0:10:53 smithi master centos 8.stream rados:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
pass 6600965 2022-01-07 12:27:24 2022-01-07 12:39:35 2022-01-07 13:03:14 0:23:39 0:12:24 0:11:15 smithi master centos 8.stream rados:cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} 2
fail 6600966 2022-01-07 12:27:25 2022-01-07 12:39:56 2022-01-07 13:00:00 0:20:04 0:09:27 0:10:37 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi026 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e6d739d3acbd78e5fb4ee5f072efc21d7c0af8c0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0ec832e-6fb8-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi026:vg_nvme/lv_4'

pass 6600967 2022-01-07 12:27:26 2022-01-07 12:40:26 2022-01-07 13:28:00 0:47:34 0:37:42 0:09:52 smithi master centos 8.stream rados:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
pass 6600968 2022-01-07 12:27:28 2022-01-07 12:40:46 2022-01-07 13:13:52 0:33:06 0:22:41 0:10:25 smithi master centos 8.stream rados:cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6600969 2022-01-07 12:27:29 2022-01-07 12:40:57 2022-01-07 13:04:20 0:23:23 0:17:34 0:05:49 smithi master rhel 8.4 rados:cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
pass 6600970 2022-01-07 12:27:30 2022-01-07 12:41:17 2022-01-07 13:14:53 0:33:36 0:23:33 0:10:03 smithi master centos 8.2 rados:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_nfs} 1
pass 6600971 2022-01-07 12:27:31 2022-01-07 12:41:18 2022-01-07 13:08:32 0:27:14 0:19:48 0:07:26 smithi master rhel 8.4 rados:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
fail 6600972 2022-01-07 12:27:32 2022-01-07 12:41:18 2022-01-07 13:02:28 0:21:10 0:09:40 0:11:30 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi016 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e6d739d3acbd78e5fb4ee5f072efc21d7c0af8c0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 435c8f8c-6fb9-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi016:vg_nvme/lv_4'

pass 6600973 2022-01-07 12:27:33 2022-01-07 12:42:48 2022-01-07 13:08:22 0:25:34 0:19:20 0:06:14 smithi master rhel 8.4 rados:cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
pass 6600974 2022-01-07 12:27:34 2022-01-07 12:43:19 2022-01-07 13:37:53 0:54:34 0:46:53 0:07:41 smithi master rhel 8.4 rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
fail 6600975 2022-01-07 12:27:35 2022-01-07 12:44:39 2022-01-07 13:09:58 0:25:19 0:13:57 0:11:22 smithi master centos 8.stream rados:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi064 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f6a7d286-6fb9-11ec-8c32-001a4aab830c -- bash -c \'ceph --format=json mds versions | jq -e ". | add == 4"\''

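The assertion that failed is the upgrade's final MDS version check: 'ceph --format=json mds versions' returns a JSON object mapping each running version string to a daemon count, and the jq filter sums those counts. A standalone sketch of the jq logic, with hypothetical values:

    # jq 'add' sums an object's values; '-e' turns the boolean into an exit status.
    echo '{"v16.2.4": 3, "v17.0.0": 1}' | jq -e '. | add == 4'   # true,  exit 0
    echo '{"v16.2.4": 3}'               | jq -e '. | add == 4'   # false, exit 1
    # So the check fails whenever the 4 expected MDS daemons are not all
    # reporting in, regardless of which versions they run.
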
pass 6600976 2022-01-07 12:27:36 2022-01-07 12:44:40 2022-01-07 13:27:03 0:42:23 0:29:42 0:12:41 smithi master ubuntu 20.04 rados:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} 2
fail 6600977 2022-01-07 12:27:37 2022-01-07 12:45:40 2022-01-07 13:05:41 0:20:01 0:09:23 0:10:38 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi002 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e6d739d3acbd78e5fb4ee5f072efc21d7c0af8c0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c01e2274-6fb9-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi002:vg_nvme/lv_4'

pass 6600978 2022-01-07 12:27:38 2022-01-07 12:46:00 2022-01-07 13:14:06 0:28:06 0:21:03 0:07:03 smithi master rhel 8.4 rados:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
dead 6600979 2022-01-07 12:27:39 2022-01-07 12:46:21 2022-01-07 13:07:32 0:21:11 smithi master ubuntu 18.04 rados:cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

SSH connection to smithi185 was lost: 'uname -r'

pass 6600980 2022-01-07 12:27:40 2022-01-07 12:47:11 2022-01-07 13:13:17 0:26:06 0:19:44 0:06:22 smithi master rhel 8.4 rados:cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} 2
fail 6600981 2022-01-07 12:27:41 2022-01-07 12:48:02 2022-01-07 13:08:34 0:20:32 0:09:42 0:10:50 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi038 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e6d739d3acbd78e5fb4ee5f072efc21d7c0af8c0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b7ca324-6fba-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi038:vg_nvme/lv_4'

pass 6600982 2022-01-07 12:27:42 2022-01-07 12:48:52 2022-01-07 13:34:08 0:45:16 0:34:48 0:10:28 smithi master centos 8.2 rados:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
dead 6600983 2022-01-07 12:27:43 2022-01-07 12:49:02 2022-01-08 01:01:47 12:12:45 smithi master centos 8.3 rados:cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

pass 6600984 2022-01-07 12:27:44 2022-01-07 12:50:13 2022-01-07 13:11:27 0:21:14 0:12:01 0:09:13 smithi master centos 8.stream rados:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
fail 6600985 2022-01-07 12:27:45 2022-01-07 12:50:13 2022-01-07 13:04:25 0:14:12 0:03:20 0:10:52 smithi master ubuntu 18.04 rados:cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Command failed on smithi050 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'

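The four ubuntu 18.04 jobs that reach the 0-nvme-loop step (6600985, 6600986, 6601031, 6601032) all fail here with status 251, while the same fragment passes on ubuntu 20.04, CentOS, and RHEL; a plausible but unconfirmed cause is missing nvmet loop support in the 18.04 kernel images. The failing one-liner drives the kernel nvmet configfs tree; the same commands, broken out with editorial comments:

    sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1      # create an NVMe target subsystem
    echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host   # accept any host NQN
    sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1              # add namespace 1
    echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path   # back it with the test LV
    echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable   # enable the namespace
    sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1   # expose it on port 1
    sudo nvme connect -t loop -n lv_1 -q hostnqn                # attach it locally over the loop transport
    # Any step failing (e.g. the configfs path being absent because nvmet is not
    # loaded) aborts the '&&' chain and surfaces as the non-zero status above.
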
fail 6600986 2022-01-07 12:27:46 2022-01-07 12:50:14 2022-01-07 13:04:09 0:13:55 0:03:22 0:10:33 smithi master ubuntu 18.04 rados:cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi089 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'

pass 6600987 2022-01-07 12:27:47 2022-01-07 12:50:24 2022-01-07 13:32:07 0:41:43 0:35:13 0:06:30 smithi master rhel 8.4 rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
dead 6600988 2022-01-07 12:27:48 2022-01-07 12:50:54 2022-01-07 13:11:21 0:20:27 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

SSH connection to smithi150 was lost: 'uname -r'

pass 6600989 2022-01-07 12:27:49 2022-01-07 12:52:55 2022-01-07 13:18:38 0:25:43 0:13:47 0:11:56 smithi master ubuntu 20.04 rados:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 6600990 2022-01-07 12:27:51 2022-01-07 12:53:35 2022-01-07 13:19:57 0:26:22 0:14:21 0:12:01 smithi master ubuntu 20.04 rados:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs 3-final} 2
pass 6600991 2022-01-07 12:27:52 2022-01-07 12:54:16 2022-01-07 13:21:58 0:27:42 0:17:42 0:10:00 smithi master ubuntu 20.04 rados:cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
dead 6600992 2022-01-07 12:27:53 2022-01-07 12:54:16 2022-01-07 13:16:17 0:22:01 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
Failure Reason:

SSH connection to smithi154 was lost: 'uname -r'

dead 6600993 2022-01-07 12:27:54 2022-01-07 12:55:47 2022-01-07 13:15:34 0:19:47 smithi master ubuntu 18.04 rados:cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

SSH connection to smithi170 was lost: 'uname -r'

pass 6600994 2022-01-07 12:27:55 2022-01-07 12:57:17 2022-01-07 13:21:30 0:24:13 0:13:26 0:10:47 smithi master centos 8.2 rados:cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
pass 6600995 2022-01-07 12:27:56 2022-01-07 12:58:18 2022-01-07 13:55:33 0:57:15 0:47:29 0:09:46 smithi master centos 8.stream rados:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
pass 6600996 2022-01-07 12:27:57 2022-01-07 12:58:18 2022-01-07 13:36:05 0:37:47 0:28:18 0:09:29 smithi master centos 8.stream rados:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
dead 6600997 2022-01-07 12:27:58 2022-01-07 12:58:48 2022-01-08 01:10:46 12:11:58 smithi master centos 8.stream rados:cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

pass 6600998 2022-01-07 12:27:59 2022-01-07 12:58:49 2022-01-07 13:22:04 0:23:15 0:13:17 0:09:58 smithi master centos 8.2 rados:cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 6600999 2022-01-07 12:28:00 2022-01-07 12:58:59 2022-01-07 13:24:30 0:25:31 0:15:30 0:10:01 smithi master centos 8.2 rados:cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
pass 6601000 2022-01-07 12:28:01 2022-01-07 12:58:59 2022-01-07 13:24:10 0:25:11 0:17:54 0:07:17 smithi master rhel 8.4 rados:cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_rhel8} 1-start 2-services/rgw 3-final} 1
pass 6601001 2022-01-07 12:28:02 2022-01-07 12:59:10 2022-01-07 13:32:52 0:33:42 0:23:54 0:09:48 smithi master centos 8.stream rados:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} 2
pass 6601002 2022-01-07 12:28:03 2022-01-07 12:59:40 2022-01-07 13:18:43 0:19:03 0:09:53 0:09:10 smithi master centos 8.stream rados:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} 1
fail 6601003 2022-01-07 12:28:04 2022-01-07 12:59:40 2022-01-07 13:19:36 0:19:56 0:09:04 0:10:52 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi026 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e6d739d3acbd78e5fb4ee5f072efc21d7c0af8c0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9c2eac88-6fbb-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi026:vg_nvme/lv_4'

dead 6601004 2022-01-07 12:28:05 2022-01-07 13:00:01 2022-01-07 13:19:00 0:18:59 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/rbd_cls} 3
Failure Reason:

SSH connection to smithi198 was lost: 'uname -r'

pass 6601005 2022-01-07 12:28:06 2022-01-07 13:00:41 2022-01-07 13:28:39 0:27:58 0:16:49 0:11:09 smithi master centos 8.3 rados:cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
pass 6601006 2022-01-07 12:28:07 2022-01-07 13:01:12 2022-01-07 13:26:25 0:25:13 0:15:11 0:10:02 smithi master centos 8.3 rados:cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} 2
pass 6601007 2022-01-07 12:28:08 2022-01-07 13:01:12 2022-01-07 13:25:47 0:24:35 0:13:56 0:10:39 smithi master centos 8.3 rados:cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
pass 6601008 2022-01-07 12:28:09 2022-01-07 13:02:32 2022-01-07 13:38:16 0:35:44 0:25:50 0:09:54 smithi master ubuntu 20.04 rados:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
dead 6601009 2022-01-07 12:28:10 2022-01-07 13:02:33 2022-01-07 13:21:45 0:19:12 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

SSH connection to smithi084 was lost: 'uname -r'

pass 6601010 2022-01-07 12:28:11 2022-01-07 13:03:23 2022-01-07 13:27:02 0:23:39 0:13:14 0:10:25 smithi master centos 8.stream rados:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw 3-final} 2
pass 6601011 2022-01-07 12:28:12 2022-01-07 13:03:23 2022-01-07 13:24:54 0:21:31 0:12:30 0:09:01 smithi master centos 8.stream rados:cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} 2
pass 6601012 2022-01-07 12:28:13 2022-01-07 13:03:24 2022-01-07 13:29:14 0:25:50 0:16:43 0:09:07 smithi master centos 8.2 rados:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_cephadm} 1
pass 6601013 2022-01-07 12:28:14 2022-01-07 13:04:14 2022-01-07 13:45:36 0:41:22 0:31:28 0:09:54 smithi master centos 8.2 rados:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
pass 6601014 2022-01-07 12:28:15 2022-01-07 13:04:24 2022-01-07 13:41:54 0:37:30 0:27:29 0:10:01 smithi master centos 8.3 rados:cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6601015 2022-01-07 12:28:16 2022-01-07 13:04:35 2022-01-07 13:25:47 0:21:12 0:09:27 0:11:45 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi002 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e6d739d3acbd78e5fb4ee5f072efc21d7c0af8c0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8b1a2c0a-6fbc-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi002:vg_nvme/lv_4'

pass 6601016 2022-01-07 12:28:17 2022-01-07 13:05:45 2022-01-07 13:30:30 0:24:45 0:17:46 0:06:59 smithi master rhel 8.4 rados:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} 2
pass 6601017 2022-01-07 12:28:18 2022-01-07 13:07:36 2022-01-07 13:29:38 0:22:02 0:10:56 0:11:06 smithi master centos 8.stream rados:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/repave-all} 2
pass 6601018 2022-01-07 12:28:19 2022-01-07 13:08:26 2022-01-07 13:35:50 0:27:24 0:19:59 0:07:25 smithi master rhel 8.4 rados:cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} 2
dead 6601019 2022-01-07 12:28:20 2022-01-07 13:08:37 2022-01-07 13:26:54 0:18:17 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/cache-snaps} 3
Failure Reason:

SSH connection to smithi063 was lost: 'uname -r'

pass 6601020 2022-01-07 12:28:21 2022-01-07 13:08:37 2022-01-07 13:54:15 0:45:38 0:35:41 0:09:57 smithi master centos 8.2 rados:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
pass 6601021 2022-01-07 12:28:22 2022-01-07 13:08:57 2022-01-07 13:47:05 0:38:08 0:28:23 0:09:45 smithi master centos 8.stream rados:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6601022 2022-01-07 12:28:23 2022-01-07 13:10:08 2022-01-07 13:50:11 0:40:03 0:29:05 0:10:58 smithi master ubuntu 20.04 rados:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} 2
pass 6601023 2022-01-07 12:28:24 2022-01-07 13:10:08 2022-01-07 13:36:37 0:26:29 0:18:13 0:08:16 smithi master rhel 8.4 rados:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
pass 6601024 2022-01-07 12:28:25 2022-01-07 13:10:59 2022-01-07 13:36:42 0:25:43 0:19:19 0:06:24 smithi master rhel 8.4 rados:cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
dead 6601025 2022-01-07 12:28:26 2022-01-07 13:11:29 2022-01-07 13:29:51 0:18:22 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} 3
Failure Reason:

SSH connection to smithi150 was lost: 'uname -r'

pass 6601026 2022-01-07 12:28:27 2022-01-07 13:11:29 2022-01-07 13:27:30 0:16:01 0:06:27 0:09:34 smithi master centos 8.stream rados:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} 1
pass 6601027 2022-01-07 12:28:28 2022-01-07 13:12:20 2022-01-07 13:36:32 0:24:12 0:17:34 0:06:38 smithi master rhel 8.4 rados:cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 6601028 2022-01-07 12:28:29 2022-01-07 13:13:20 2022-01-07 13:59:29 0:46:09 0:36:13 0:09:56 smithi master centos 8.stream rados:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
dead 6601029 2022-01-07 12:28:30 2022-01-07 13:14:01 2022-01-08 01:26:03 12:12:02 smithi master centos 8.stream rados:cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6601030 2022-01-07 12:28:31 2022-01-07 13:14:11 2022-01-07 13:33:53 0:19:42 0:08:58 0:10:44 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi008 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e6d739d3acbd78e5fb4ee5f072efc21d7c0af8c0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a33611c2-6fbd-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi008:vg_nvme/lv_4'

fail 6601031 2022-01-07 12:28:32 2022-01-07 13:14:21 2022-01-07 13:29:08 0:14:47 0:03:25 0:11:22 smithi master ubuntu 18.04 rados:cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

Command failed on smithi012 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'

fail 6601032 2022-01-07 12:28:33 2022-01-07 13:14:42 2022-01-07 13:29:21 0:14:39 0:03:23 0:11:16 smithi master ubuntu 18.04 rados:cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi053 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'

pass 6601033 2022-01-07 12:28:34 2022-01-07 13:15:12 2022-01-07 13:48:47 0:33:35 0:23:50 0:09:45 smithi master centos 8.3 rados:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
dead 6601034 2022-01-07 12:28:35 2022-01-07 13:15:43 2022-01-07 13:34:38 0:18:55 smithi master ubuntu 18.04 rados:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

SSH connection to smithi135 was lost: 'uname -r'

pass 6601035 2022-01-07 12:28:36 2022-01-07 13:16:23 2022-01-07 13:45:24 0:29:01 0:15:10 0:13:51 smithi master ubuntu 20.04 rados:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/mirror 3-final} 2
pass 6601036 2022-01-07 12:28:37 2022-01-07 13:18:44 2022-01-07 13:44:25 0:25:41 0:18:07 0:07:34 smithi master rhel 8.4 rados:cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 6601037 2022-01-07 12:28:38 2022-01-07 13:18:44 2022-01-07 13:46:49 0:28:05 0:17:29 0:10:36 smithi master ubuntu 20.04 rados:cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2