User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-01-31 16:18:27 | 2022-01-31 16:21:12 | 2022-01-31 23:22:22 | 7:01:10 | rados | wip-yuri-testing-2022-01-26-1810-pacific | smithi | ded593e | 11 | 10 | 6 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 6651076 | | 2022-01-31 16:20:17 | 2022-01-31 16:21:12 | 2022-01-31 23:00:55 | 6:39:43 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
fail | 6651077 | | 2022-01-31 16:20:19 | 2022-01-31 16:21:12 | 2022-01-31 16:35:46 | 0:14:34 | 0:03:38 | 0:10:56 | smithi | master | ubuntu | 20.04 | rados/dashboard/{centos_8.2_container_tools_3.0 clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 |
Failure Reason: Command failed on smithi077 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
fail | 6651078 | | 2022-01-31 16:20:20 | 2022-01-31 16:21:12 | 2022-01-31 16:28:48 | 0:07:36 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: Command failed on smithi139 with status 1: 'sudo yum install -y kernel'
dead | 6651079 | | 2022-01-31 16:20:21 | 2022-01-31 16:22:13 | 2022-01-31 23:09:30 | 6:47:17 | | | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |
Failure Reason: hit max job timeout
pass | 6651080 | | 2022-01-31 16:20:22 | 2022-01-31 16:26:03 | 2022-01-31 16:51:04 | 0:25:01 | 0:19:26 | 0:05:35 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 |
pass | 6651081 | | 2022-01-31 16:20:23 | 2022-01-31 16:26:04 | 2022-01-31 17:03:31 | 0:37:27 | 0:27:59 | 0:09:28 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls} | 3 |
fail | 6651082 | | 2022-01-31 16:20:24 | 2022-01-31 16:26:14 | 2022-01-31 16:33:43 | 0:07:29 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: Command failed on smithi188 with status 1: 'sudo yum install -y kernel'
pass | 6651083 | | 2022-01-31 16:20:25 | 2022-01-31 16:26:25 | 2022-01-31 16:55:12 | 0:28:47 | 0:17:21 | 0:11:26 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 |
pass | 6651084 | | 2022-01-31 16:20:26 | 2022-01-31 16:27:45 | 2022-01-31 16:53:51 | 0:26:06 | 0:15:26 | 0:10:40 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 |
dead | 6651085 | | 2022-01-31 16:20:27 | 2022-01-31 16:27:45 | 2022-01-31 23:08:16 | 6:40:31 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6651086 | | 2022-01-31 16:20:28 | 2022-01-31 16:28:56 | 2022-01-31 17:06:34 | 0:37:38 | 0:29:00 | 0:08:38 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/cache-snaps} | 3 |
dead | 6651087 | | 2022-01-31 16:20:29 | 2022-01-31 16:29:16 | 2022-01-31 23:11:10 | 6:41:54 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
fail | 6651088 | | 2022-01-31 16:20:30 | 2022-01-31 16:30:37 | 2022-01-31 16:54:34 | 0:23:57 | 0:13:25 | 0:10:32 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 |
Failure Reason: Command failed on smithi078 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:ded593e77a6b998a752825f38a3fe8ef9e1547fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid dce63780-82b4-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
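For readability, the shell payload quoted inside the failing `cephadm shell ... bash -c` command for job 6651088 decodes to roughly the following script (the `\'"\'"\'` runs are shell-escaped single quotes); it locates the device backing osd.1, removes the OSD, zaps the device, re-adds it, and waits for osd.1 to come back up:

```sh
set -e
set -x
ceph orch ps
ceph orch device ls
# find the device id, then the host and device path that back osd.1
DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
echo "host $HOST, dev $DEV, devid $DEVID"
# remove osd.1 and poll until it no longer shows in removal status
ceph orch osd rm 1
while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
# zap the freed device, re-add it as an OSD, and wait for osd.1 to report up
ceph orch device zap $HOST $DEV --force
ceph orch daemon add osd $HOST:$DEV
while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done
```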
fail | 6651089 | | 2022-01-31 16:20:31 | 2022-01-31 16:30:57 | 2022-01-31 16:44:53 | 0:13:56 | 0:03:35 | 0:10:21 | smithi | master | ubuntu | 20.04 | rados/dashboard/{centos_8.2_container_tools_3.0 clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 |
Failure Reason: Command failed on smithi159 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
fail | 6651090 | | 2022-01-31 16:20:32 | 2022-01-31 16:30:57 | 2022-01-31 16:39:05 | 0:08:08 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: Command failed on smithi070 with status 1: 'sudo yum install -y kernel'
pass | 6651091 | | 2022-01-31 16:20:34 | 2022-01-31 16:31:58 | 2022-01-31 17:11:21 | 0:39:23 | 0:27:37 | 0:11:46 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |
fail | 6651092 | | 2022-01-31 16:20:35 | 2022-01-31 16:33:48 | 2022-01-31 16:53:15 | 0:19:27 | 0:06:07 | 0:13:20 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi018.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
pass | 6651093 | | 2022-01-31 16:20:36 | 2022-01-31 16:34:39 | 2022-01-31 17:00:44 | 0:26:05 | 0:19:44 | 0:06:21 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 |
dead | 6651094 | | 2022-01-31 16:20:37 | 2022-01-31 16:35:19 | 2022-01-31 16:55:10 | 0:19:51 | | | smithi | master | rhel | 8.4 | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/mon_clock_no_skews} | 3 |
Failure Reason: Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
pass | 6651095 | | 2022-01-31 16:20:38 | 2022-01-31 16:35:50 | 2022-01-31 17:48:22 | 1:12:32 | 0:59:33 | 0:12:59 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/radosbench} | 3 |
fail | 6651096 | | 2022-01-31 16:20:39 | 2022-01-31 16:38:10 | 2022-01-31 16:45:47 | 0:07:37 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: Command failed on smithi107 with status 1: 'sudo yum install -y kernel'
pass | 6651097 | | 2022-01-31 16:20:40 | 2022-01-31 16:38:31 | 2022-01-31 17:06:25 | 0:27:54 | 0:17:01 | 0:10:53 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 |
dead | 6651098 | | 2022-01-31 16:20:41 | 2022-01-31 16:39:11 | 2022-01-31 23:22:22 | 6:43:11 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
fail | 6651099 | | 2022-01-31 16:20:42 | 2022-01-31 16:41:02 | 2022-01-31 16:48:02 | 0:07:00 | | | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 |
Failure Reason: Command failed on smithi119 with status 1: 'sudo yum install -y kernel'
pass | 6651100 | | 2022-01-31 16:20:44 | 2022-01-31 16:41:02 | 2022-01-31 17:39:21 | 0:58:19 | 0:47:26 | 0:10:53 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} | 3 |
pass | 6651101 | | 2022-01-31 16:20:45 | 2022-01-31 16:41:53 | 2022-01-31 17:48:44 | 1:06:51 | 1:00:43 | 0:06:08 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 |
fail | 6651102 | | 2022-01-31 16:20:46 | 2022-01-31 16:41:53 | 2022-01-31 17:05:03 | 0:23:10 | 0:13:19 | 0:09:51 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 |
Failure Reason: Command failed on smithi042 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:ded593e77a6b998a752825f38a3fe8ef9e1547fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 52c098dc-82b6-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
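The payload for job 6651102 is the same script as in job 6651088 above, except that it omits the explicit `ceph orch daemon add osd $HOST:$DEV` step: the `rm-zap-wait` variant zaps the device and then only waits for osd.1 to be redeployed and report up.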