User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-01-26 15:53:14 | 2022-01-26 15:56:28 | 2022-01-26 22:50:56 | 6:54:28 | rados | wip-yuri5-testing-2022-01-25-1419-pacific | smithi | 23fb62b | 15 | 5 | 10 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 6641487 | | 2022-01-26 15:55:04 | 2022-01-26 15:56:27 | 2022-01-26 22:36:52 | 6:40:25 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6641488 | | 2022-01-26 15:55:05 | 2022-01-26 15:56:28 | 2022-01-26 16:23:49 | 0:27:21 | 0:17:04 | 0:10:17 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 |
pass | 6641489 | | 2022-01-26 15:55:06 | 2022-01-26 15:56:28 | 2022-01-26 16:23:29 | 0:27:01 | 0:18:06 | 0:08:55 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 |
fail | 6641490 | | 2022-01-26 15:55:07 | 2022-01-26 15:56:28 | 2022-01-26 16:11:01 | 0:14:33 | 0:03:39 | 0:10:54 | smithi | master | ubuntu | 20.04 | rados/dashboard/{centos_8.2_container_tools_3.0 clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{filestore-xfs} supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 |
Failure Reason: Command failed on smithi027 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
dead | 6641491 | | 2022-01-26 15:55:09 | 2022-01-26 15:56:28 | 2022-01-26 22:36:39 | 6:40:11 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6641492 | | 2022-01-26 15:55:10 | 2022-01-26 15:56:39 | 2022-01-26 17:04:19 | 1:07:40 | 0:58:42 | 0:08:58 | smithi | master | centos | 8.2 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 |
dead | 6641493 | | 2022-01-26 15:55:11 | 2022-01-26 15:57:09 | 2022-01-26 22:38:48 | 6:41:39 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6641494 | | 2022-01-26 15:55:12 | 2022-01-26 15:58:50 | 2022-01-26 16:57:19 | 0:58:29 | 0:46:17 | 0:12:12 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 3 |
pass | 6641495 | | 2022-01-26 15:55:13 | 2022-01-26 15:59:40 | 2022-01-26 16:37:05 | 0:37:25 | 0:28:18 | 0:09:07 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
dead | 6641496 | | 2022-01-26 15:55:14 | 2022-01-26 15:59:51 | 2022-01-26 22:40:30 | 6:40:39 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6641497 | | 2022-01-26 15:55:15 | 2022-01-26 16:00:11 | 2022-01-26 16:36:58 | 0:36:47 | 0:26:29 | 0:10:18 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3 |
pass | 6641498 | | 2022-01-26 15:55:16 | 2022-01-26 16:01:02 | 2022-01-26 16:43:06 | 0:42:04 | 0:30:13 | 0:11:51 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/cache-snaps} | 3 |
pass | 6641499 | | 2022-01-26 15:55:17 | 2022-01-26 16:03:32 | 2022-01-26 16:38:51 | 0:35:19 | 0:25:41 | 0:09:38 | smithi | master | centos | 8.2 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} | 2 |
pass | 6641500 | | 2022-01-26 15:55:18 | 2022-01-26 16:03:33 | 2022-01-26 17:58:47 | 1:55:14 | 1:44:57 | 0:10:17 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} | 3 |
dead | 6641501 | | 2022-01-26 15:55:20 | 2022-01-26 16:04:43 | 2022-01-26 22:46:31 | 6:41:48 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6641502 | | 2022-01-26 15:55:21 | 2022-01-26 16:06:24 | 2022-01-26 16:33:55 | 0:27:31 | 0:17:23 | 0:10:08 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 |
pass | 6641503 | | 2022-01-26 15:55:22 | 2022-01-26 16:06:24 | 2022-01-26 16:44:02 | 0:37:38 | 0:27:33 | 0:10:05 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/rbd_cls} | 3 |
fail | 6641504 | | 2022-01-26 15:55:23 | 2022-01-26 16:06:45 | 2022-01-26 16:30:34 | 0:23:49 | 0:13:27 | 0:10:22 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 |
Failure Reason: Command failed on smithi110 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:23fb62befde8bb16248ea6842bde546ffd81c3f1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a885fb50-7ec3-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
fail | 6641505 | | 2022-01-26 15:55:24 | 2022-01-26 16:07:15 | 2022-01-26 16:21:23 | 0:14:08 | 0:03:43 | 0:10:25 | smithi | master | ubuntu | 20.04 | rados/dashboard/{centos_8.2_container_tools_3.0 clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 |
Failure Reason: Command failed on smithi105 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
dead | 6641506 | | 2022-01-26 15:55:25 | 2022-01-26 16:07:55 | 2022-01-26 22:48:20 | 6:40:25 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6641507 | | 2022-01-26 15:55:26 | 2022-01-26 16:07:56 | 2022-01-26 16:32:17 | 0:24:21 | 0:12:47 | 0:11:34 | smithi | master | ubuntu | 18.04 | rados/cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_18.04} 2-node-mgr orchestrator_cli} | 2 |
fail | 6641508 | | 2022-01-26 15:55:27 | 2022-01-26 16:08:06 | 2022-01-26 16:26:33 | 0:18:27 | 0:06:39 | 0:11:48 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi016.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
dead | 6641509 | | 2022-01-26 15:55:28 | 2022-01-26 16:08:57 | 2022-01-26 22:49:29 | 6:40:32 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
dead | 6641510 | | 2022-01-26 15:55:30 | 2022-01-26 16:09:07 | 2022-01-26 22:50:31 | 6:41:24 | | | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |
Failure Reason: hit max job timeout
dead | 6641511 | | 2022-01-26 15:55:31 | 2022-01-26 16:09:57 | 2022-01-26 22:50:55 | 6:40:58 | | | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
dead | 6641512 | | 2022-01-26 15:55:32 | 2022-01-26 16:10:38 | 2022-01-26 22:50:56 | 6:40:18 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6641513 | | 2022-01-26 15:55:33 | 2022-01-26 16:10:38 | 2022-01-26 17:08:26 | 0:57:48 | 0:47:31 | 0:10:17 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} | 1 |
pass | 6641514 | | 2022-01-26 15:55:34 | 2022-01-26 16:10:48 | 2022-01-26 16:56:10 | 0:45:22 | 0:36:45 | 0:08:37 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_api_tests} | 2 |
fail | 6641515 | | 2022-01-26 15:55:35 | 2022-01-26 16:10:49 | 2022-01-26 16:35:31 | 0:24:42 | 0:13:18 | 0:11:24 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 |
Failure Reason: Command failed on smithi027 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:23fb62befde8bb16248ea6842bde546ffd81c3f1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 841a8e74-7ec4-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
pass | 6641516 | | 2022-01-26 15:55:36 | 2022-01-26 16:11:09 | 2022-01-26 16:52:10 | 0:41:01 | 0:29:05 | 0:11:56 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |