User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-01-12 15:28:42 | 2022-01-12 17:07:51 | 2022-01-13 05:53:17 | 12:45:26 | rados | wip-yuri-testing-2022-01-07-0928-pacific | smithi | 5a85908 | 8 | 33 | 10 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6610385 | 2022-01-12 15:30:34 | 2022-01-12 17:07:51 | 2022-01-12 17:27:32 | 0:19:41 | 0:09:05 | 0:10:36 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/radosbench} | 3 | |
Failure Reason:
Command failed on smithi058 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16f9eed2-73cc-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi058:vg_nvme/lv_4' |
fail | 6610386 | 2022-01-12 15:30:35 | 2022-01-12 17:07:51 | 2022-01-12 17:28:47 | 0:20:56 | 0:09:39 | 0:11:17 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason:
Command failed on smithi006 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5b85dfac-73cc-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi006:vg_nvme/lv_4' |
fail | 6610387 | 2022-01-12 15:30:36 | 2022-01-12 17:08:41 | 2022-01-12 17:22:42 | 0:14:01 | 0:03:21 | 0:10:40 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
Command failed on smithi124 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 6610388 | 2022-01-12 15:30:37 | 2022-01-12 17:09:02 | 2022-01-12 17:25:25 | 0:16:23 | 0:05:51 | 0:10:32 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason:
[Errno 2] Cannot find file on the remote 'ubuntu@smithi149.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml' |
dead | 6610389 | 2022-01-12 15:30:38 | 2022-01-12 17:09:12 | 2022-01-13 05:20:10 | 12:10:58 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
hit max job timeout |
fail | 6610390 | 2022-01-12 15:30:39 | 2022-01-12 17:10:13 | 2022-01-12 17:24:34 | 0:14:21 | 0:03:22 | 0:10:59 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi046 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 6610391 | 2022-01-12 15:30:41 | 2022-01-12 17:10:53 | 2022-01-12 17:25:09 | 0:14:16 | 0:03:25 | 0:10:51 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason:
Command failed on smithi003 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 6610392 | 2022-01-12 15:30:42 | 2022-01-12 17:11:13 | 2022-01-12 17:32:15 | 0:21:02 | 0:09:51 | 0:11:11 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason:
Command failed on smithi078 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e5c14152-73cc-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi078:vg_nvme/lv_4' |
fail | 6610393 | 2022-01-12 15:30:43 | 2022-01-12 17:12:34 | 2022-01-12 17:33:16 | 0:20:42 | 0:09:13 | 0:11:29 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi066 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0b21f464-73cd-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi066:vg_nvme/lv_4' |
dead | 6610394 | 2022-01-12 15:30:44 | 2022-01-12 17:12:54 | 2022-01-13 05:23:03 | 12:10:09 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
hit max job timeout |
fail | 6610395 | 2022-01-12 15:30:45 | 2022-01-12 17:13:05 | 2022-01-12 17:30:47 | 0:17:42 | 0:06:41 | 0:11:01 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_18.04} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason:
Command failed on smithi168 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aa4f06f4-73cc-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi168:vg_nvme/lv_4' |
fail | 6610396 | 2022-01-12 15:30:46 | 2022-01-12 17:13:15 | 2022-01-12 17:34:42 | 0:21:27 | 0:09:13 | 0:12:14 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason:
Command failed on smithi037 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 28e181a4-73cd-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi037:vg_nvme/lv_4' |
fail | 6610397 | 2022-01-12 15:30:47 | 2022-01-12 17:14:45 | 2022-01-12 17:34:27 | 0:19:42 | 0:09:49 | 0:09:53 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason:
Command failed on smithi057 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 36e99a20-73cd-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi057:vg_nvme/lv_4' |
pass | 6610398 | 2022-01-12 15:30:48 | 2022-01-12 17:14:46 | 2022-01-12 18:12:03 | 0:57:17 | 0:48:26 | 0:08:51 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
pass | 6610399 | 2022-01-12 15:30:49 | 2022-01-12 17:14:56 | 2022-01-12 17:50:29 | 0:35:33 | 0:27:03 | 0:08:30 | smithi | master | centos | 8.2 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
fail | 6610400 | 2022-01-12 15:30:50 | 2022-01-12 17:15:17 | 2022-01-12 17:36:14 | 0:20:57 | 0:08:59 | 0:11:58 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} | 3 | |
Failure Reason:
Command failed on smithi012 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 554c21fe-73cd-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi012:vg_nvme/lv_4' |
dead | 6610401 | 2022-01-12 15:30:51 | 2022-01-12 17:15:47 | 2022-01-13 05:25:38 | 12:09:51 | | | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
hit max job timeout |
pass | 6610402 | 2022-01-12 15:30:52 | 2022-01-12 17:17:24 | 2022-01-12 18:01:43 | 0:44:19 | 0:30:07 | 0:14:12 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |
pass | 6610403 | 2022-01-12 15:30:53 | 2022-01-12 17:18:24 | 2022-01-12 17:40:06 | 0:21:42 | 0:10:24 | 0:11:18 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} | 1 | |
fail | 6610404 | 2022-01-12 15:30:54 | 2022-01-12 17:18:25 | 2022-01-12 17:40:47 | 0:22:22 | 0:08:57 | 0:13:25 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason:
Command failed on smithi042 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e00b7ee8-73cd-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi042:vg_nvme/lv_4' |
dead | 6610405 | 2022-01-12 15:30:55 | 2022-01-12 17:21:05 | 2022-01-13 05:31:10 | 12:10:05 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
hit max job timeout |
pass | 6610406 | 2022-01-12 15:30:56 | 2022-01-12 17:21:36 | 2022-01-12 18:01:33 | 0:39:57 | 0:29:03 | 0:10:54 | smithi | master | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
fail | 6610407 | 2022-01-12 15:30:57 | 2022-01-12 17:22:06 | 2022-01-12 17:36:15 | 0:14:09 | 0:03:21 | 0:10:48 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi124 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 6610408 | 2022-01-12 15:30:58 | 2022-01-12 17:22:46 | 2022-01-12 17:38:57 | 0:16:11 | 0:03:20 | 0:12:51 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason:
Command failed on smithi045 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 6610409 | 2022-01-12 15:30:59 | 2022-01-12 17:23:17 | 2022-01-12 17:44:02 | 0:20:45 | 0:08:55 | 0:11:50 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason:
Command failed on smithi129 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5fc37622-73ce-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi129:vg_nvme/lv_4' |
pass | 6610410 | 2022-01-12 15:31:00 | 2022-01-12 17:24:17 | 2022-01-12 18:07:41 | 0:43:24 | 0:33:38 | 0:09:46 | smithi | master | centos | 8.2 | rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 2 | |
fail | 6610411 | 2022-01-12 15:31:01 | 2022-01-12 17:24:28 | 2022-01-12 17:38:10 | 0:13:42 | 0:03:25 | 0:10:17 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
Command failed on smithi046 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
dead | 6610412 | 2022-01-12 15:31:02 | 2022-01-12 17:24:38 | 2022-01-13 05:35:14 | 12:10:36 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
hit max job timeout |
fail | 6610413 | 2022-01-12 15:31:03 | 2022-01-12 17:25:18 | 2022-01-12 17:46:29 | 0:21:11 | 0:08:45 | 0:12:26 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason:
Command failed on smithi063 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b5ebfb82-73ce-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi063:vg_nvme/lv_4' |
pass | 6610414 | 2022-01-12 15:31:04 | 2022-01-12 17:26:19 | 2022-01-12 18:08:50 | 0:42:31 | 0:30:47 | 0:11:44 | smithi | master | centos | 8.2 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
fail | 6610415 | 2022-01-12 15:31:06 | 2022-01-12 17:27:09 | 2022-01-12 17:47:13 | 0:20:04 | 0:09:09 | 0:10:55 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason:
Command failed on smithi058 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid da4ba9e6-73ce-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi058:vg_nvme/lv_4' |
fail | 6610416 | 2022-01-12 15:31:07 | 2022-01-12 17:27:40 | 2022-01-12 17:48:19 | 0:20:39 | 0:09:39 | 0:11:00 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} | 3 | |
Failure Reason:
Command failed on smithi006 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 142dcc98-73cf-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi006:vg_nvme/lv_4' |
dead | 6610417 | 2022-01-12 15:31:08 | 2022-01-12 17:28:50 | 2022-01-13 05:38:38 | 12:09:48 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
hit max job timeout |
dead | 6610418 | 2022-01-12 15:31:09 | 2022-01-12 17:29:21 | 2022-01-12 17:50:44 | 0:21:23 | | | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason:
SSH connection to smithi027 was lost: 'uname -r' |
fail | 6610419 | 2022-01-12 15:31:10 | 2022-01-12 17:29:41 | 2022-01-12 17:43:39 | 0:13:58 | 0:03:30 | 0:10:28 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi132 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 6610420 | 2022-01-12 15:31:11 | 2022-01-12 17:30:12 | 2022-01-12 17:45:02 | 0:14:50 | 0:03:28 | 0:11:22 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason:
Command failed on smithi156 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 6610421 | 2022-01-12 15:31:12 | 2022-01-12 17:31:22 | 2022-01-12 17:51:56 | 0:20:34 | 0:09:06 | 0:11:28 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason:
Command failed on smithi078 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 83c6b060-73cf-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi078:vg_nvme/lv_4' |
fail | 6610422 | 2022-01-12 15:31:13 | 2022-01-12 17:32:22 | 2022-01-12 17:53:13 | 0:20:51 | 0:09:47 | 0:11:04 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason:
Command failed on smithi161 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c58dd9d8-73cf-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi161:vg_nvme/lv_4' |
dead | 6610423 | 2022-01-12 15:31:14 | 2022-01-12 17:33:03 | 2022-01-13 05:43:25 | 12:10:22 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
hit max job timeout |
fail | 6610424 | 2022-01-12 15:31:15 | 2022-01-12 17:33:23 | 2022-01-12 17:52:22 | 0:18:59 | 0:06:23 | 0:12:36 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason:
[Errno 2] Cannot find file on the remote 'ubuntu@smithi057.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml' |
fail | 6610425 | 2022-01-12 15:31:16 | 2022-01-12 17:34:34 | 2022-01-12 17:54:32 | 0:19:58 | 0:09:02 | 0:10:56 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason:
Command failed on smithi037 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid dbf02032-73cf-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi037:vg_nvme/lv_4' |
fail | 6610426 | 2022-01-12 15:31:17 | 2022-01-12 17:34:44 | 2022-01-12 17:48:25 | 0:13:41 | 0:03:26 | 0:10:15 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
Command failed on smithi005 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 6610427 | 2022-01-12 15:31:18 | 2022-01-12 17:34:44 | 2022-01-12 17:56:33 | 0:21:49 | 0:08:58 | 0:12:51 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason:
Command failed on smithi012 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 209916a8-73d0-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi012:vg_nvme/lv_4' |
pass | 6610428 | 2022-01-12 15:31:19 | 2022-01-12 17:36:15 | 2022-01-12 18:13:40 | 0:37:25 | 0:26:59 | 0:10:26 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6610429 | 2022-01-12 15:31:20 | 2022-01-12 17:36:15 | 2022-01-12 17:57:49 | 0:21:34 | 0:09:03 | 0:12:31 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/radosbench} | 3 | |
Failure Reason:
Command failed on smithi046 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 52d0f3fc-73d0-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi046:vg_nvme/lv_4' |
dead | 6610430 | 2022-01-12 15:31:21 | 2022-01-12 17:38:16 | 2022-01-13 05:48:13 | 12:09:57 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
hit max job timeout |
fail | 6610431 | 2022-01-12 15:31:22 | 2022-01-12 17:39:07 | 2022-01-12 18:50:06 | 1:10:59 | 1:02:15 | 0:08:44 | smithi | master | centos | 8.2 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi040 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5a8590844d15aca67da79d282c8b3560052b3033 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
fail | 6610432 | 2022-01-12 15:31:23 | 2022-01-12 17:39:07 | 2022-01-12 17:53:06 | 0:13:59 | 0:03:22 | 0:10:37 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi045 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 6610433 | 2022-01-12 15:31:24 | 2022-01-12 17:39:07 | 2022-01-12 17:52:45 | 0:13:38 | 0:03:24 | 0:10:14 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason:
Command failed on smithi043 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 6610434 | 2022-01-12 15:31:26 | 2022-01-12 17:39:18 | 2022-01-12 18:00:18 | 0:21:00 | 0:08:45 | 0:12:15 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason:
Command failed on smithi042 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5a8590844d15aca67da79d282c8b3560052b3033 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a3f7cb2a-73d0-11ec-8c32-001a4aab830c -- ceph orch daemon add osd smithi042:vg_nvme/lv_4' |
dead | 6610435 | 2022-01-12 15:31:27 | 2022-01-12 17:40:48 | 2022-01-13 05:53:17 | 12:12:29 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
hit max job timeout |
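Most of the status-251 failures in this run are the same nvme-loop setup command. Split at each `&&` for readability, the logged command line is the sequence below; paths and names are taken verbatim from the failure reasons above. It configures the kernel `nvmet` target over configfs and so needs root and the nvmet modules, meaning it is shown here for reference rather than as a runnable test:

```shell
# nvmet loop-device setup that fails with status 251 in the jobs above,
# reformatted from the single logged command line.
sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host
sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1
echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable
sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1
sudo nvme connect -t loop -n lv_1 -q hostnqn
```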