User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-01-13 20:42:41 | 2023-01-13 20:44:58 | 2023-01-14 00:05:32 | 3:20:34 | rados | pacific_16.2.11_RC6.6 | smithi | bcbf88b | 19 | 12 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7142042 | 2023-01-13 20:44:38 | 2023-01-13 20:44:58 | 2023-01-13 22:32:29 | 1:47:31 | 1:37:07 | 0:10:24 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/radosbench} | 3 | |
pass | 7142043 | 2023-01-13 20:44:39 | 2023-01-13 20:44:59 | 2023-01-13 21:03:12 | 0:18:13 | 0:08:36 | 0:09:37 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/peer mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7142044 | 2023-01-13 20:44:40 | 2023-01-13 20:44:59 | 2023-01-13 21:14:55 | 0:29:56 | 0:18:50 | 0:11:06 | smithi | main | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
pass | 7142045 | 2023-01-13 20:44:41 | 2023-01-13 20:44:59 | 2023-01-13 21:33:04 | 0:48:05 | 0:37:21 | 0:10:44 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/rbd_cls} | 3 | |
pass | 7142046 | 2023-01-13 20:44:42 | 2023-01-13 20:45:19 | 2023-01-13 21:16:50 | 0:31:31 | 0:17:41 | 0:13:50 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/redirect} | 2 | |
pass | 7142047 | 2023-01-13 20:44:44 | 2023-01-13 20:48:40 | 2023-01-13 21:20:33 | 0:31:53 | 0:22:47 | 0:09:06 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 | |
fail | 7142048 | 2023-01-13 20:44:45 | 2023-01-13 20:49:01 | 2023-01-13 21:09:20 | 0:20:19 | 0:08:35 | 0:11:44 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason:
Command failed on smithi135 with status 1: 'sudo kubeadm init --node-name smithi135 --token abcdef.tspkwygyc4uijunh --pod-network-cidr 10.252.48.0/21' |
pass | 7142049 | 2023-01-13 20:44:46 | 2023-01-13 20:49:01 | 2023-01-13 21:08:57 | 0:19:56 | 0:10:00 | 0:09:56 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} | 1 | |
pass | 7142050 | 2023-01-13 20:44:47 | 2023-01-13 20:49:01 | 2023-01-13 21:56:03 | 1:07:02 | 0:55:23 | 0:11:39 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 3 | |
pass | 7142051 | 2023-01-13 20:44:48 | 2023-01-13 20:50:22 | 2023-01-13 21:24:29 | 0:34:07 | 0:23:15 | 0:10:52 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
pass | 7142052 | 2023-01-13 20:44:50 | 2023-01-13 20:50:22 | 2023-01-13 21:38:59 | 0:48:37 | 0:35:03 | 0:13:34 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3 | |
fail | 7142053 | 2023-01-13 20:44:51 | 2023-01-13 20:55:33 | 2023-01-13 21:13:58 | 0:18:25 | 0:12:19 | 0:06:06 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bcbf88bee4969f40f7fc319ee08e4d88e17faf44 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
fail | 7142054 | 2023-01-13 20:44:52 | 2023-01-13 20:55:33 | 2023-01-13 21:15:15 | 0:19:42 | 0:10:01 | 0:09:41 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
Command failed (workunit test post-file.sh) on smithi158 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bcbf88bee4969f40f7fc319ee08e4d88e17faf44 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh' |
pass | 7142055 | 2023-01-13 20:44:53 | 2023-01-13 20:57:24 | 2023-01-13 21:59:45 | 1:02:21 | 0:51:41 | 0:10:40 | smithi | main | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |
pass | 7142056 | 2023-01-13 20:44:55 | 2023-01-13 20:57:54 | 2023-01-13 21:30:17 | 0:32:23 | 0:22:42 | 0:09:41 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 | |
pass | 7142057 | 2023-01-13 20:44:56 | 2023-01-13 20:58:24 | 2023-01-13 21:47:30 | 0:49:06 | 0:38:04 | 0:11:02 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls} | 3 | |
fail | 7142058 | 2023-01-13 20:44:57 | 2023-01-13 21:00:05 | 2023-01-13 21:17:54 | 0:17:49 | 0:08:17 | 0:09:32 | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} | 1 | |
Failure Reason:
Command failed on smithi097 with status 1: 'sudo kubeadm init --node-name smithi097 --token abcdef.n88eg0xsnsbbhu6p --pod-network-cidr 10.251.0.0/21' |
fail | 7142059 | 2023-01-13 20:44:58 | 2023-01-13 21:00:05 | 2023-01-13 21:20:08 | 0:20:03 | 0:06:49 | 0:13:14 | smithi | main | | | rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} | 2 | |
Failure Reason:
Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=bcbf88bee4969f40f7fc319ee08e4d88e17faf44 |
pass | 7142060 | 2023-01-13 20:44:59 | 2023-01-13 21:02:06 | 2023-01-13 21:33:06 | 0:31:00 | 0:19:56 | 0:11:04 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 7142061 | 2023-01-13 20:45:00 | 2023-01-13 21:02:56 | 2023-01-13 21:28:54 | 0:25:58 | 0:16:39 | 0:09:19 | smithi | main | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
Command failed on smithi136 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:bcbf88bee4969f40f7fc319ee08e4d88e17faf44 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8890a500-9387-11ed-821d-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\'' |
fail | 7142062 | 2023-01-13 20:45:02 | 2023-01-13 21:03:17 | 2023-01-13 21:25:24 | 0:22:07 | 0:11:44 | 0:10:23 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi146 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bcbf88bee4969f40f7fc319ee08e4d88e17faf44 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
pass | 7142063 | 2023-01-13 20:45:03 | 2023-01-13 21:05:57 | 2023-01-14 00:05:32 | 2:59:35 | 2:46:45 | 0:12:50 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} | 3 | |
pass | 7142064 | 2023-01-13 20:45:04 | 2023-01-13 21:06:38 | 2023-01-13 21:39:45 | 0:33:07 | 0:22:39 | 0:10:28 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 | |
pass | 7142065 | 2023-01-13 20:45:05 | 2023-01-13 21:07:48 | 2023-01-13 21:41:55 | 0:34:07 | 0:23:37 | 0:10:30 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
fail | 7142066 | 2023-01-13 20:45:07 | 2023-01-13 21:07:48 | 2023-01-13 21:32:12 | 0:24:24 | 0:08:47 | 0:15:37 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason:
Command failed on smithi023 with status 1: 'sudo kubeadm init --node-name smithi023 --token abcdef.5wj4f5yxqgp93imx --pod-network-cidr 10.248.176.0/21' |
fail | 7142067 | 2023-01-13 20:45:08 | 2023-01-13 21:11:19 | 2023-01-13 21:30:08 | 0:18:49 | 0:12:10 | 0:06:39 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi050 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bcbf88bee4969f40f7fc319ee08e4d88e17faf44 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
pass | 7142068 | 2023-01-13 20:45:09 | 2023-01-13 21:11:19 | 2023-01-13 21:42:50 | 0:31:31 | 0:22:55 | 0:08:36 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 | |
pass | 7142069 | 2023-01-13 20:45:10 | 2023-01-13 21:11:20 | 2023-01-13 22:31:20 | 1:20:00 | 1:07:22 | 0:12:38 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/radosbench} | 3 | |
fail | 7142070 | 2023-01-13 20:45:11 | 2023-01-13 21:13:52 | 2023-01-13 21:32:49 | 0:18:57 | 0:06:59 | 0:11:58 | smithi | main | | | rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} | 2 | |
Failure Reason:
Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=bcbf88bee4969f40f7fc319ee08e4d88e17faf44 |
fail | 7142071 | 2023-01-13 20:45:12 | 2023-01-13 21:14:51 | 2023-01-13 21:34:55 | 0:20:04 | 0:11:49 | 0:08:15 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi164 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bcbf88bee4969f40f7fc319ee08e4d88e17faf44 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
fail | 7142072 | 2023-01-13 20:45:14 | 2023-01-13 21:15:20 | 2023-01-13 21:33:41 | 0:18:21 | | | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} | 3 | |
Failure Reason:
Cannot connect to remote host smithi086 |
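Both dashboard jobs (7142059 and 7142070) failed the same way: teuthology could not fetch a package version from shaman for the run's sha1, so the jobs aborted before install. The query can be reconstructed from the URL in the failure reason; this is a minimal sketch for checking the same search by hand, with the parameter names taken verbatim from that URL (the actual fetch needs network access, so it is left commented out).

```python
from urllib.parse import urlencode

# Query parameters copied from the failure reason of jobs 7142059/7142070.
params = {
    "status": "ready",
    "project": "ceph",
    "flavor": "default",
    "distros": "ubuntu/22.04/x86_64",
    "sha1": "bcbf88bee4969f40f7fc319ee08e4d88e17faf44",
}
url = "https://shaman.ceph.com/api/search/?" + urlencode(params)
print(url)

# Running the query (requires network, sketched only):
# import json, urllib.request
# builds = json.load(urllib.request.urlopen(url))
# An empty result list means no build is marked "ready" for that
# distro/sha1 combination, which is what aborts the job before install.
```

Note that the failing jobs asked for ubuntu/22.04 packages even though the rest of the run used 18.04/20.04 workers, which is consistent with the OS columns being empty for those two rows.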