User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2021-11-18 15:26:33 | 2021-11-18 15:33:53 | 2021-11-19 04:08:12 | 12:34:19 | rados | wip-yuri7-testing-2021-11-11-1339-pacific | smithi | 6d7d283 | 19 | 12 | 8 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6512076 | 2021-11-18 15:28:47 | 2021-11-18 15:33:53 | 2021-11-18 16:00:29 | 0:26:36 | 0:15:28 | 0:11:08 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/mirror 3-final} | 2 | |
fail | 6512077 | 2021-11-18 15:28:48 | 2021-11-18 15:35:23 | 2021-11-18 16:11:36 | 0:36:13 | 0:24:38 | 0:11:35 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} supported-random-distro$/{centos_8} tasks/dashboard} | 2 | |
Failure Reason: Test failure: test_ganesha (unittest.loader._FailedTest)
fail | 6512078 | 2021-11-18 15:28:49 | 2021-11-18 15:35:24 | 2021-11-18 16:07:26 | 0:32:02 | 0:20:57 | 0:11:05 | smithi | master | centos | 8.2 | rados/cephadm/dashboard/{0-distro/centos_8.2_container_tools_3.0 task/test_e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi142 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6d7d283b6a57970d3fb3807555e123dbaed5f01a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
dead | 6512079 | 2021-11-18 15:28:50 | 2021-11-18 15:35:44 | 2021-11-19 03:44:47 | 12:09:03 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
fail | 6512080 | 2021-11-18 15:28:51 | 2021-11-18 15:36:26 | 2021-11-18 15:38:26 | 0:02:00 | 0 | | smithi | master | | | rados/cephadm/osds/2-ops/rm-zap-flag | — |
Failure Reason: list index out of range
fail | 6512081 | 2021-11-18 15:28:52 | 2021-11-18 15:36:24 | 2021-11-18 15:52:15 | 0:15:51 | 0:06:02 | 0:09:49 | smithi | master | centos | 8.3 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi107 with status 5: 'sudo systemctl stop ceph-59ad08c4-4887-11ec-8c2c-001a4aab830c@mon.a'
pass | 6512082 | 2021-11-18 15:28:53 | 2021-11-18 15:37:05 | 2021-11-18 16:01:02 | 0:23:57 | 0:14:09 | 0:09:48 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/rebuild-mondb mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 6512083 | 2021-11-18 15:28:54 | 2021-11-18 15:37:05 | 2021-11-19 03:46:03 | 12:08:58 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
fail | 6512084 | 2021-11-18 15:28:54 | 2021-11-18 15:37:26 | 2021-11-18 15:53:18 | 0:15:52 | 0:06:19 | 0:09:33 | smithi | master | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} | 1 | |
Failure Reason: Command failed on smithi097 with status 1: 'sudo kubeadm init --node-name smithi097 --token abcdef.3lsfdaundf4s15xa --pod-network-cidr 10.251.0.0/21'
dead | 6512085 | 2021-11-18 15:28:56 | 2021-11-18 15:37:46 | 2021-11-19 03:46:34 | 12:08:48 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
pass | 6512086 | 2021-11-18 15:28:57 | 2021-11-18 15:37:46 | 2021-11-18 16:03:17 | 0:25:31 | 0:14:50 | 0:10:41 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
pass | 6512087 | 2021-11-18 15:28:58 | 2021-11-18 15:38:07 | 2021-11-18 16:15:34 | 0:37:27 | 0:28:19 | 0:09:08 | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.3_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
dead | 6512088 | 2021-11-18 15:28:59 | 2021-11-18 15:38:17 | 2021-11-19 03:47:27 | 12:09:10 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
fail | 6512089 | 2021-11-18 15:29:00 | 2021-11-18 15:39:17 | 2021-11-18 15:56:24 | 0:17:07 | 0:06:12 | 0:10:55 | smithi | master | centos | 8.3 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |
Failure Reason: Command failed on smithi043 with status 5: 'sudo systemctl stop ceph-b6614c6a-4887-11ec-8c2c-001a4aab830c@mon.a'
fail | 6512090 | 2021-11-18 15:29:01 | 2021-11-18 15:39:28 | 2021-11-18 16:14:45 | 0:35:17 | 0:23:54 | 0:11:23 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{centos_8} tasks/dashboard} | 2 | |
Failure Reason: Test failure: test_ganesha (unittest.loader._FailedTest)
fail | 6512091 | 2021-11-18 15:29:02 | 2021-11-18 15:40:59 | 2021-11-18 16:10:56 | 0:29:57 | 0:19:12 | 0:10:45 | smithi | master | centos | 8.2 | rados/cephadm/dashboard/{0-distro/centos_8.2_container_tools_3.0 task/test_e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6d7d283b6a57970d3fb3807555e123dbaed5f01a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
dead | 6512092 | 2021-11-18 15:29:03 | 2021-11-18 15:41:29 | 2021-11-19 03:50:24 | 12:08:55 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
fail | 6512093 | 2021-11-18 15:29:04 | 2021-11-18 15:42:12 | 2021-11-18 15:44:11 | 0:01:59 | 0 | | smithi | master | | | rados/cephadm/osds/2-ops/rm-zap-flag | — |
Failure Reason: list index out of range
fail | 6512094 | 2021-11-18 15:29:04 | 2021-11-18 15:42:10 | 2021-11-18 16:01:24 | 0:19:14 | 0:06:29 | 0:12:45 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason: Command failed on smithi125 with status 1: 'sudo kubeadm init --node-name smithi125 --token abcdef.uc4cqaljs2b0p6tp --pod-network-cidr 10.251.224.0/21'
dead | 6512095 | 2021-11-18 15:29:06 | 2021-11-18 15:43:10 | 2021-11-19 03:51:44 | 12:08:34 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
pass | 6512096 | 2021-11-18 15:29:07 | 2021-11-18 15:43:10 | 2021-11-18 16:51:07 | 1:07:57 | 0:58:18 | 0:09:39 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/radosbench} | 3 | |
pass | 6512097 | 2021-11-18 15:29:07 | 2021-11-18 15:43:21 | 2021-11-18 16:14:02 | 0:30:41 | 0:23:14 | 0:07:27 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6512098 | 2021-11-18 15:29:08 | 2021-11-18 15:44:31 | 2021-11-18 16:56:42 | 1:12:11 | 0:58:56 | 0:13:15 | smithi | master | centos | 8.2 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 6512099 | 2021-11-18 15:29:09 | 2021-11-18 15:47:22 | 2021-11-18 16:29:06 | 0:41:44 | 0:32:13 | 0:09:31 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
pass | 6512100 | 2021-11-18 15:29:10 | 2021-11-18 15:47:23 | 2021-11-18 16:26:20 | 0:38:57 | 0:27:04 | 0:11:53 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/rbd_cls} | 3 | |
pass | 6512101 | 2021-11-18 15:29:11 | 2021-11-18 15:48:33 | 2021-11-18 16:32:11 | 0:43:38 | 0:33:51 | 0:09:47 | smithi | master | centos | 8.2 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
dead | 6512102 | 2021-11-18 15:29:12 | 2021-11-18 15:48:44 | 2021-11-19 03:58:00 | 12:09:16 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
pass | 6512103 | 2021-11-18 15:29:13 | 2021-11-18 15:49:54 | 2021-11-18 16:15:06 | 0:25:12 | 0:15:03 | 0:10:09 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
pass | 6512104 | 2021-11-18 15:29:14 | 2021-11-18 15:49:55 | 2021-11-18 16:10:07 | 0:20:12 | 0:09:00 | 0:11:12 | smithi | master | centos | 8.2 | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 3 | |
pass | 6512105 | 2021-11-18 15:29:15 | 2021-11-18 15:50:25 | 2021-11-18 16:23:34 | 0:33:09 | 0:24:33 | 0:08:36 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6512106 | 2021-11-18 15:29:16 | 2021-11-18 15:52:16 | 2021-11-18 16:07:39 | 0:15:23 | 0:06:00 | 0:09:23 | smithi | master | centos | 8.3 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi098 with status 5: 'sudo systemctl stop ceph-89c901f0-4889-11ec-8c2c-001a4aab830c@mon.a'
pass | 6512107 | 2021-11-18 15:29:17 | 2021-11-18 15:52:16 | 2021-11-18 16:31:46 | 0:39:30 | 0:30:18 | 0:09:12 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_18.04 workloads/radosbench_omap_write} | 1 | |
pass | 6512108 | 2021-11-18 15:29:19 | 2021-11-18 15:52:16 | 2021-11-18 16:32:42 | 0:40:26 | 0:29:10 | 0:11:16 | smithi | master | centos | 8.2 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/sync-many workloads/snaps-few-objects} | 2 | |
pass | 6512109 | 2021-11-18 15:29:19 | 2021-11-18 15:53:27 | 2021-11-18 16:49:09 | 0:55:42 | 0:45:39 | 0:10:03 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} | 3 | |
pass | 6512110 | 2021-11-18 15:29:20 | 2021-11-18 15:53:27 | 2021-11-18 16:21:13 | 0:27:46 | 0:16:24 | 0:11:22 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
pass | 6512111 | 2021-11-18 15:29:22 | 2021-11-18 15:56:28 | 2021-11-18 16:35:53 | 0:39:25 | 0:29:19 | 0:10:06 | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.3_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
fail | 6512112 | 2021-11-18 15:29:23 | 2021-11-18 15:56:48 | 2021-11-18 16:15:34 | 0:18:46 | 0:06:27 | 0:12:19 | smithi | master | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} | 3 | |
Failure Reason: Command failed on smithi112 with status 1: 'sudo kubeadm init --node-name smithi112 --token abcdef.vti4ccwo4yd4rkf9 --pod-network-cidr 10.251.120.0/21'
dead | 6512113 | 2021-11-18 15:29:24 | 2021-11-18 15:57:49 | 2021-11-19 04:08:12 | 12:10:23 | | | smithi | master | centos | 8.3 | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason: hit max job timeout
pass | 6512114 | 2021-11-18 15:29:24 | 2021-11-18 16:00:30 | 2021-11-18 16:29:17 | 0:28:47 | 0:23:16 | 0:05:31 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 1-start 2-services/nfs-ingress2 3-final} | 2 |