User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-01-21 17:35:35 | 2022-01-21 17:41:20 | 2022-01-22 00:40:24 | 6:59:04 | rados | wip-yuri5-testing-2022-01-20-0652-pacific | smithi | b2828da | 11 | 3 | 13 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 6632580 | 2022-01-21 17:37:27 | 2022-01-21 17:41:20 | 2022-01-22 00:22:02 | 6:40:42 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6632581 | 2022-01-21 17:37:28 | 2022-01-21 17:41:20 | 2022-01-21 18:21:22 | 0:40:02 | 0:30:08 | 0:09:54 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/cache-snaps} | 3 |
dead | 6632582 | 2022-01-21 17:37:29 | 2022-01-21 17:41:41 | 2022-01-22 00:21:38 | 6:39:57 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6632583 | 2022-01-21 17:37:30 | 2022-01-21 17:41:41 | 2022-01-21 19:39:06 | 1:57:25 | 1:49:11 | 0:08:14 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-radosbench} | 2 |
pass | 6632584 | 2022-01-21 17:37:31 | 2022-01-21 17:43:42 | 2022-01-21 18:21:53 | 0:38:11 | 0:27:45 | 0:10:26 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
dead | 6632585 | 2022-01-21 17:37:32 | 2022-01-21 17:44:52 | 2022-01-22 00:25:48 | 6:40:56 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6632586 | 2022-01-21 17:37:33 | 2022-01-21 17:45:02 | 2022-01-21 18:25:41 | 0:40:39 | 0:30:04 | 0:10:35 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/cache-snaps} | 3 |
pass | 6632587 | 2022-01-21 17:37:34 | 2022-01-21 17:46:13 | 2022-01-21 19:34:24 | 1:48:11 | 1:38:57 | 0:09:14 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} | 3 |
dead | 6632588 | 2022-01-21 17:37:35 | 2022-01-21 17:46:53 | 2022-01-22 00:28:53 | 6:42:00 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6632589 | 2022-01-21 17:37:36 | 2022-01-21 17:49:14 | 2022-01-21 18:22:45 | 0:33:31 | 0:24:54 | 0:08:37 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 |
fail | 6632590 | 2022-01-21 17:37:37 | 2022-01-21 17:49:44 | 2022-01-21 18:13:53 | 0:24:09 | 0:13:04 | 0:11:05 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 |
Failure Reason: Command failed on smithi083 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:b2828da6c708bfada733c819d2073ac8aac99291 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4d3eedc0-7ae4-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
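The failure reason above embeds a one-line bash script that looks up the device backing osd.1, removes the OSD, zaps the device, and re-adds it. A minimal sketch of just the device-lookup portion of that pipeline, run against hypothetical sample listings (the host, device name, device id, and column layout below are assumptions for illustration, not real `ceph` output):

```shell
# Hypothetical captures of `ceph device ls` and `ceph orch device ls` output;
# real output has more columns, but these exercise the same grep/awk pipeline
# used in the failing job's embedded script.
device_ls='DEVICE                           HOST:DEV           DAEMONS
INTEL_SSDPE2KX010T8_PHLJ914004   smithi083:nvme0n1  osd.1'
orch_device_ls='smithi083  /dev/nvme0n1  ssd  INTEL_SSDPE2KX010T8_PHLJ914004'

# Same extraction logic as the embedded script: resolve the device id for
# osd.1, then its host and device path from the orchestrator listing.
DEVID=$(echo "$device_ls" | grep osd.1 | awk '{print $1}')
HOST=$(echo "$orch_device_ls" | grep "$DEVID" | awk '{print $1}')
DEV=$(echo "$orch_device_ls" | grep "$DEVID" | awk '{print $2}')
echo "host $HOST, dev $DEV, devid $DEVID"
```

The full script then runs `ceph orch osd rm 1`, polls `ceph orch osd rm status` until removal completes, zaps the device with `ceph orch device zap $HOST $DEV --force`, re-adds the OSD, and waits for it to come up; the exit status 22 means one step of that sequence failed inside the cephadm shell.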
dead | 6632591 | 2022-01-21 17:37:38 | 2022-01-21 17:49:55 | 2022-01-21 18:08:09 | 0:18:14 | | | smithi | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_18.04} | 4 |
Failure Reason: SSH connection to smithi190 was lost: 'uname -r'
dead | 6632592 | 2022-01-21 17:37:39 | 2022-01-21 17:50:25 | 2022-01-22 00:30:45 | 6:40:20 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
fail | 6632593 | 2022-01-21 17:37:40 | 2022-01-21 17:50:36 | 2022-01-21 18:06:57 | 0:16:21 | 0:06:11 | 0:10:10 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi016.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
dead | 6632594 | 2022-01-21 17:37:41 | 2022-01-21 17:51:16 | 2022-01-21 18:09:26 | 0:18:10 | | | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/test_rbd_api} | 3 |
Failure Reason: SSH connection to smithi087 was lost: 'uname -r'
dead | 6632595 | 2022-01-21 17:37:42 | 2022-01-21 17:51:46 | 2022-01-21 18:10:19 | 0:18:33 | | | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/radosbench} | 3 |
Failure Reason: SSH connection to smithi110 was lost: 'uname -r'
dead | 6632596 | 2022-01-21 17:37:43 | 2022-01-21 17:52:37 | 2022-01-22 00:33:09 | 6:40:32 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
dead | 6632597 | 2022-01-21 17:37:45 | 2022-01-21 17:52:47 | 2022-01-22 00:32:55 | 6:40:08 | | | smithi | master | centos | 8.2 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 |
Failure Reason: hit max job timeout
pass | 6632598 | 2022-01-21 17:37:46 | 2022-01-21 17:53:58 | 2022-01-21 18:25:28 | 0:31:30 | 0:21:35 | 0:09:55 | smithi | master | centos | 8.2 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/small-objects-localized} | 2 |
dead | 6632599 | 2022-01-21 17:37:47 | 2022-01-21 17:54:18 | 2022-01-22 00:37:28 | 6:43:10 | | | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |
Failure Reason: hit max job timeout
pass | 6632600 | 2022-01-21 17:37:48 | 2022-01-21 17:55:29 | 2022-01-21 18:33:36 | 0:38:07 | 0:28:15 | 0:09:52 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
dead | 6632601 | 2022-01-21 17:37:49 | 2022-01-21 17:55:29 | 2022-01-22 00:36:18 | 6:40:49 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6632602 | 2022-01-21 17:37:50 | 2022-01-21 17:56:09 | 2022-01-21 18:28:04 | 0:31:55 | 0:24:27 | 0:07:28 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 |
pass | 6632603 | 2022-01-21 17:37:51 | 2022-01-21 17:57:00 | 2022-01-21 18:33:42 | 0:36:42 | 0:26:32 | 0:10:10 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/test_rbd_api} | 3 |
pass | 6632604 | 2022-01-21 17:37:52 | 2022-01-21 17:58:20 | 2022-01-21 18:41:32 | 0:43:12 | 0:30:23 | 0:12:49 | smithi | master | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} | 3 |
fail | 6632605 | 2022-01-21 17:37:53 | 2022-01-21 17:59:51 | 2022-01-21 18:23:11 | 0:23:20 | 0:13:32 | 0:09:48 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 |
Failure Reason: Command failed on smithi001 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:b2828da6c708bfada733c819d2073ac8aac99291 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8c9dc2b0-7ae5-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
dead | 6632606 | 2022-01-21 17:37:54 | 2022-01-21 17:59:51 | 2022-01-22 00:40:24 | 6:40:33 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout