User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---
yuriw | 2022-02-24 22:04:22 | 2022-02-24 22:19:49 | 2022-02-25 05:16:00 | 6:56:11 | rados | wip-yuri7-testing-2022-02-17-0852-pacific | smithi | 9f91d3c | 15 | 5 | 5 |

Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
---|---|---|---|---|---|---|---|---|---|---|---|---|---
pass | 6704752 | 2022-02-24 22:06:10 | 2022-02-24 22:19:49 | 2022-02-24 22:58:59 | 0:39:10 | 0:28:57 | 0:10:13 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
fail | 6704753 | 2022-02-24 22:06:11 | 2022-02-24 22:19:49 | 2022-02-24 22:35:27 | 0:15:38 | 0:05:30 | 0:10:08 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi140.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
pass | 6704754 | 2022-02-24 22:06:12 | 2022-02-24 22:19:49 | 2022-02-24 23:00:59 | 0:41:10 | 0:30:42 | 0:10:28 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
dead | 6704755 | 2022-02-24 22:06:13 | 2022-02-24 22:19:50 | 2022-02-25 05:00:12 | 6:40:22 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6704756 | 2022-02-24 22:06:14 | 2022-02-24 22:21:50 | 2022-02-24 22:48:18 | 0:26:28 | 0:14:40 | 0:11:48 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 |
pass | 6704757 | 2022-02-24 22:06:15 | 2022-02-24 22:21:51 | 2022-02-24 22:59:20 | 0:37:29 | 0:27:06 | 0:10:23 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
pass | 6704758 | 2022-02-24 22:06:16 | 2022-02-24 22:22:11 | 2022-02-24 22:51:25 | 0:29:14 | 0:18:33 | 0:10:41 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 |
fail | 6704759 | 2022-02-24 22:06:17 | 2022-02-24 22:23:32 | 2022-02-24 22:48:30 | 0:24:58 | 0:11:13 | 0:13:45 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: Command failed on smithi173 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1bf57e9c-95c3-11ec-8c35-001a4aab830c -- ceph orch daemon add osd smithi173:vg_nvme/lv_4'
fail | 6704760 | 2022-02-24 22:06:19 | 2022-02-24 22:27:12 | 2022-02-24 22:51:19 | 0:24:07 | 0:13:09 | 0:10:58 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 |
Failure Reason: Command failed on smithi149 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:9f91d3caa3f16637a5668f2b678fb3a44b6977ba shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b762afb8-95c2-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
pass | 6704761 | 2022-02-24 22:06:20 | 2022-02-24 22:27:43 | 2022-02-24 22:50:27 | 0:22:44 | 0:14:37 | 0:08:07 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 |
dead | 6704762 | 2022-02-24 22:06:21 | 2022-02-24 22:27:43 | 2022-02-25 05:07:15 | 6:39:32 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6704763 | 2022-02-24 22:06:22 | 2022-02-24 22:28:33 | 2022-02-24 23:08:05 | 0:39:32 | 0:29:08 | 0:10:24 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
pass | 6704764 | 2022-02-24 22:06:23 | | 2022-02-24 23:09:36 | 1749 | | | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
dead | 6704765 | 2022-02-24 22:06:24 | 2022-02-24 22:30:34 | 2022-02-25 05:09:37 | 6:39:03 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6704766 | 2022-02-24 22:06:25 | 2022-02-24 22:31:15 | 2022-02-24 22:53:53 | 0:22:38 | 0:14:17 | 0:08:21 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 |
fail | 6704767 | 2022-02-24 22:06:26 | 2022-02-24 22:31:15 | 2022-02-24 22:46:47 | 0:15:32 | 0:05:36 | 0:09:56 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi083.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
pass | 6704768 | 2022-02-24 22:06:27 | 2022-02-24 22:31:25 | 2022-02-24 22:54:34 | 0:23:09 | 0:14:00 | 0:09:09 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 |
pass | 6704769 | 2022-02-24 22:06:28 | 2022-02-24 22:31:26 | 2022-02-24 23:10:20 | 0:38:54 | 0:28:02 | 0:10:52 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
pass | 6704770 | 2022-02-24 22:06:29 | 2022-02-24 22:33:26 | 2022-02-24 23:01:56 | 0:28:30 | 0:17:13 | 0:11:17 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 |
pass | 6704771 | 2022-02-24 22:06:30 | 2022-02-24 22:34:27 | 2022-02-24 23:44:14 | 1:09:47 | 0:59:05 | 0:10:42 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/radosbench} | 3 |
fail | 6704772 | 2022-02-24 22:06:31 | 2022-02-24 22:34:37 | 2022-02-24 23:38:34 | 1:03:57 | 0:53:07 | 0:10:50 | smithi | master | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi085 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9f91d3caa3f16637a5668f2b678fb3a44b6977ba TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
pass | 6704773 | 2022-02-24 22:06:32 | 2022-02-24 22:35:37 | 2022-02-24 23:11:27 | 0:35:50 | 0:26:31 | 0:09:19 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
dead | 6704774 | 2022-02-24 22:06:33 | 2022-02-24 22:36:18 | 2022-02-25 05:15:40 | 6:39:22 | | | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 |
Failure Reason: hit max job timeout
pass | 6704775 | 2022-02-24 22:06:34 | 2022-02-24 22:36:58 | 2022-02-24 22:59:27 | 0:22:29 | 0:14:44 | 0:07:45 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 |
dead | 6704776 | 2022-02-24 22:06:35 | 2022-02-24 22:36:58 | 2022-02-25 05:16:00 | 6:39:02 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout