User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-04-26 02:18:32 | 2022-04-26 02:22:52 | 2022-04-26 09:17:53 | 6:55:01 | rados | wip-55324-pacific-backport | smithi | aa0c708 | 15 | 12 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6805595 | 2022-04-26 02:20:18 | 2022-04-26 02:22:52 | 2022-04-26 02:33:44 | 0:10:52 | 0:05:18 | 0:05:34 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos} | 1 | |
pass | 6805596 | 2022-04-26 02:20:19 | 2022-04-26 02:22:52 | 2022-04-26 02:55:51 | 0:32:59 | 0:27:17 | 0:05:42 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6805597 | 2022-04-26 02:20:20 | 2022-04-26 02:23:13 | 2022-04-26 02:34:53 | 0:11:40 | 0:04:51 | 0:06:49 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi109.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
pass | 6805598 | 2022-04-26 02:20:21 | 2022-04-26 02:23:43 | 2022-04-26 02:42:52 | 0:19:09 | 0:11:39 | 0:07:30 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} | 1 | |
fail | 6805599 | 2022-04-26 02:20:22 | 2022-04-26 02:23:43 | 2022-04-26 02:48:43 | 0:25:00 | 0:15:17 | 0:09:43 | smithi | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/classic thrashosds-health ubuntu_18.04} | 4 | |
Failure Reason: Command failed (workunit test cls/test_cls_rbd.sh) on smithi157 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=nautilus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'
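Jobs 6805599 (above) and 6805611 (below) fail the same cls/test_cls_rbd.sh workunit with an identical command. Reflowed from the one-line log entry for readability (content unchanged, comments added):

```bash
# Work in a per-client scratch dir on the test mount.
mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp &&
cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp &&
# Environment set up by the teuthology workunit task; CEPH_REF pins the
# workunit checkout to the nautilus branch being upgraded from.
CEPH_CLI_TEST_DUP_COMMAND=1 \
CEPH_REF=nautilus \
TESTDIR="/home/ubuntu/cephtest" \
CEPH_ARGS="--cluster ceph" \
CEPH_ID="0" \
PATH=$PATH:/usr/sbin \
CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 \
CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 \
CEPH_MNT=/home/ubuntu/cephtest/mnt.0 \
adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage \
  timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh
```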
fail | 6805600 | 2022-04-26 02:20:23 | 2022-04-26 02:24:54 | 2022-04-26 02:50:01 | 0:25:07 | 0:18:32 | 0:06:35 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi093 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6b9bfb4a-c509-11ec-8c39-001a4aab830c -e sha1=aa0c7084d7c33fa13e629854baf24f102c2ea55d -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
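This failure reason recurs verbatim (modulo host, image, and fsid) in jobs 6805606, 6805612, 6805615, and 6805622 below. Stripping the nested shell quoting (each '"'"' run is standard single-quote escaping), the check being run is a single pipeline:

```bash
# `ceph versions` prints JSON with per-daemon version counts plus an
# "overall" object; after a finished upgrade, "overall" should hold
# exactly one version key. `jq -e` exits non-zero when the expression
# is false, which is the "status 1" these jobs report while daemons
# are still on mixed versions.
ceph versions | jq -e '.overall | length == 1'
```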
pass | 6805601 | 2022-04-26 02:20:24 | 2022-04-26 02:25:14 | 2022-04-26 03:01:15 | 0:36:01 | 0:28:37 | 0:07:24 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6805602 | 2022-04-26 02:20:25 | 2022-04-26 02:26:05 | 2022-04-26 02:45:05 | 0:19:00 | 0:12:04 | 0:06:56 | smithi | master | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-hybrid supported-random-distro$/{centos_8} tasks/failover} | 2 | |
pass | 6805603 | 2022-04-26 02:20:26 | 2022-04-26 02:26:05 | 2022-04-26 03:05:22 | 0:39:17 | 0:31:37 | 0:07:40 | smithi | master | centos | 8.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6805604 | 2022-04-26 02:20:27 | 2022-04-26 02:26:35 | 2022-04-26 02:43:29 | 0:16:54 | 0:10:51 | 0:06:03 | smithi | master | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/mgr} | 1 | |
fail | 6805605 | 2022-04-26 02:20:28 | 2022-04-26 02:26:36 | 2022-04-26 02:47:21 | 0:20:45 | 0:12:55 | 0:07:50 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: Command failed on smithi129 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:aa0c7084d7c33fa13e629854baf24f102c2ea55d shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf779a4-c509-11ec-8c39-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
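The failing command in this job packs a whole script into one escaped line. With the \n and quote escapes expanded, it reads as below (a direct decoding of the log line, comments added); the script removes osd.1, zaps its device, re-adds it, and waits for it to come back up:

```bash
set -e
set -x
ceph orch ps
ceph orch device ls
# Find the device that currently backs osd.1, and the host it lives on.
DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
echo "host $HOST, dev $DEV, devid $DEVID"
# Remove the OSD and wait for the removal to finish.
ceph orch osd rm 1
while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
# Wipe the device and recreate the OSD on it.
ceph orch device zap $HOST $DEV --force
ceph orch daemon add osd $HOST:$DEV
# Wait until the recreated osd.1 reports up.
while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done
```

With set -e the script exits with the status of the first failing step, so the job's status 22 is whichever command failed; the per-job log would show which.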
fail | 6805606 | 2022-04-26 02:20:29 | 2022-04-26 02:27:56 | 2022-04-26 02:56:01 | 0:28:05 | 0:20:36 | 0:07:29 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi018 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fc7a1ee4-c509-11ec-8c39-001a4aab830c -e sha1=aa0c7084d7c33fa13e629854baf24f102c2ea55d -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 6805607 | 2022-04-26 02:20:30 | 2022-04-26 02:28:17 | 2022-04-26 02:42:20 | 0:14:03 | 0:05:29 | 0:08:34 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos} | 1 | |
pass | 6805608 | 2022-04-26 02:20:31 | 2022-04-26 02:29:27 | 2022-04-26 03:06:21 | 0:36:54 | 0:28:51 | 0:08:03 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6805609 | 2022-04-26 02:20:32 | 2022-04-26 02:31:18 | 2022-04-26 02:50:27 | 0:19:09 | 0:11:53 | 0:07:16 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} | 1 | |
fail | 6805610 | 2022-04-26 02:20:33 | 2022-04-26 02:31:18 | 2022-04-26 02:40:51 | 0:09:33 | 0:02:59 | 0:06:34 | smithi | master | ubuntu | 20.04 | rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 | |
Failure Reason: Command failed on smithi080 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
fail | 6805611 | 2022-04-26 02:20:34 | 2022-04-26 02:31:19 | 2022-04-26 02:53:30 | 0:22:11 | 0:14:42 | 0:07:29 | smithi | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_18.04} | 4 | |
Failure Reason: Command failed (workunit test cls/test_cls_rbd.sh) on smithi189 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=nautilus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'
fail | 6805612 | 2022-04-26 02:20:35 | 2022-04-26 02:32:09 | 2022-04-26 02:58:07 | 0:25:58 | 0:18:45 | 0:07:13 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi039 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8c93159e-c50a-11ec-8c39-001a4aab830c -e sha1=aa0c7084d7c33fa13e629854baf24f102c2ea55d -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 6805613 | 2022-04-26 02:20:36 | 2022-04-26 02:32:29 | 2022-04-26 02:45:44 | 0:13:15 | 0:05:22 | 0:07:53 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi089.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
pass | 6805614 | 2022-04-26 02:20:37 | 2022-04-26 02:34:00 | 2022-04-26 02:53:45 | 0:19:45 | 0:11:46 | 0:07:59 | smithi | master | centos | 8.stream | rados/singleton/{all/max-pg-per-osd.from-primary mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6805615 | 2022-04-26 02:20:38 | 2022-04-26 02:35:01 | 2022-04-26 03:04:18 | 0:29:17 | 0:23:09 | 0:06:08 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: Command failed on smithi117 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 31ef1d9e-c50b-11ec-8c39-001a4aab830c -e sha1=aa0c7084d7c33fa13e629854baf24f102c2ea55d -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 6805616 | 2022-04-26 02:20:39 | 2022-04-26 02:35:31 | 2022-04-26 03:42:22 | 1:06:51 | 0:59:26 | 0:07:25 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/radosbench} | 3 | |
pass | 6805617 | 2022-04-26 02:20:40 | 2022-04-26 02:37:12 | 2022-04-26 03:13:54 | 0:36:42 | 0:31:06 | 0:05:36 | smithi | master | centos | 8.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6805618 | 2022-04-26 02:20:41 | 2022-04-26 02:37:12 | 2022-04-26 09:13:05 | 6:35:53 | 6:27:56 | 0:07:57 | smithi | master | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi006 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=aa0c7084d7c33fa13e629854baf24f102c2ea55d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
pass | 6805619 | 2022-04-26 02:20:42 | 2022-04-26 02:37:52 | 2022-04-26 03:18:04 | 0:40:12 | 0:32:42 | 0:07:30 | smithi | master | centos | 8.stream | rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
dead | 6805620 | 2022-04-26 02:20:43 | 2022-04-26 02:38:33 | 2022-04-26 09:17:53 | 6:39:20 | | | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: hit max job timeout
pass | 6805621 | 2022-04-26 02:20:44 | 2022-04-26 02:39:33 | 2022-04-26 03:33:16 | 0:53:43 | 0:47:41 | 0:06:02 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} | 3 | |
fail | 6805622 | 2022-04-26 02:20:45 | 2022-04-26 02:40:04 | 2022-04-26 03:08:03 | 0:27:59 | 0:19:41 | 0:08:18 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi080 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a3819a68-c50b-11ec-8c39-001a4aab830c -e sha1=aa0c7084d7c33fa13e629854baf24f102c2ea55d -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''