User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
sage | 2021-11-29 14:23:14 | 2021-11-29 14:24:20 | 2021-11-30 02:35:57 | 12:11:37 | rados | wip-sage4-testing-2021-11-28-1626 | smithi | 76db814 | 29 | 12 | 2 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6533558 | 2021-11-29 14:23:32 | 2021-11-29 14:24:20 | 2021-11-29 14:45:47 | 0:21:27 | 0:12:25 | 0:09:02 | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{centos_8} tasks/prometheus} | 2 | |
Failure Reason: Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus)
pass | 6533559 | 2021-11-29 14:23:33 | 2021-11-29 14:24:20 | 2021-11-29 15:03:49 | 0:39:29 | 0:30:08 | 0:09:21 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
dead | 6533560 | 2021-11-29 14:23:34 | 2021-11-29 14:24:20 | 2021-11-30 02:35:57 | 12:11:37 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: hit max job timeout
fail | 6533561 | 2021-11-29 14:23:35 | 2021-11-29 14:24:21 | 2021-11-29 16:36:15 | 2:11:54 | 2:05:29 | 0:06:25 | smithi | master | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/scrub} | 1 | |
Failure Reason: Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi092 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=76db81421d171ab44a5bc7e9572f870733e5c8e3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'
pass | 6533562 | 2021-11-29 14:23:36 | 2021-11-29 14:24:21 | 2021-11-29 17:04:44 | 2:40:23 | 2:28:49 | 0:11:34 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-radosbench} | 2 | |
fail | 6533563 | 2021-11-29 14:23:37 | 2021-11-29 14:24:21 | 2021-11-29 14:44:04 | 0:19:43 | 0:09:53 | 0:09:50 | smithi | master | centos | 8.stream | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_8.stream_container_tools} mon_election/classic msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: Command failed on smithi019 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:76db81421d171ab44a5bc7e9572f870733e5c8e3 pull'
pass | 6533564 | 2021-11-29 14:23:38 | 2021-11-29 14:24:21 | 2021-11-29 14:55:51 | 0:31:30 | 0:25:20 | 0:06:10 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/redirect} | 2 | |
pass | 6533565 | 2021-11-29 14:23:39 | 2021-11-29 14:24:22 | 2021-11-29 14:52:23 | 0:28:01 | 0:16:51 | 0:11:10 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/redirect_set_object} | 2 | |
pass | 6533566 | 2021-11-29 14:23:40 | 2021-11-29 14:24:52 | 2021-11-29 14:50:40 | 0:25:48 | 0:14:53 | 0:10:55 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
pass | 6533567 | 2021-11-29 14:23:40 | 2021-11-29 14:24:52 | 2021-11-29 15:03:02 | 0:38:10 | 0:27:05 | 0:11:05 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} | 2 | |
pass | 6533568 | 2021-11-29 14:23:41 | 2021-11-29 14:25:03 | 2021-11-29 14:56:17 | 0:31:14 | 0:25:46 | 0:05:28 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6533569 | 2021-11-29 14:23:42 | 2021-11-29 14:25:03 | 2021-11-29 14:50:41 | 0:25:38 | 0:13:39 | 0:11:59 | smithi | master | | | rados/cephadm/workunits/{agent/on mon_election/classic task/test_orch_cli} | 1 | |
pass | 6533570 | 2021-11-29 14:23:43 | 2021-11-29 14:25:44 | 2021-11-29 14:51:07 | 0:25:23 | 0:16:26 | 0:08:57 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/cache} | 2 | |
fail | 6533571 | 2021-11-29 14:23:44 | 2021-11-29 14:25:54 | 2021-11-29 14:45:21 | 0:19:27 | 0:09:35 | 0:09:52 | smithi | master | centos | 8.stream | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_8.stream_container_tools} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: Command failed on smithi042 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:76db81421d171ab44a5bc7e9572f870733e5c8e3 pull'
pass | 6533572 | 2021-11-29 14:23:45 | 2021-11-29 14:26:14 | 2021-11-29 15:13:30 | 0:47:16 | 0:40:48 | 0:06:28 | smithi | master | rhel | 8.4 | rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/sync workloads/rados_mon_workunits} | 2 | |
pass | 6533573 | 2021-11-29 14:23:46 | 2021-11-29 14:26:15 | 2021-11-29 14:50:05 | 0:23:50 | 0:14:08 | 0:09:42 | smithi | master | centos | 8.2 | rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
pass | 6533574 | 2021-11-29 14:23:47 | 2021-11-29 14:26:25 | 2021-11-29 15:11:35 | 0:45:10 | 0:33:53 | 0:11:17 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/host rook/master} | 3 | |
pass | 6533575 | 2021-11-29 14:23:48 | 2021-11-29 14:27:35 | 2021-11-29 14:51:17 | 0:23:42 | 0:14:29 | 0:09:13 | smithi | master | centos | 8.3 | rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 6533576 | 2021-11-29 14:23:49 | 2021-11-29 14:27:46 | 2021-11-29 15:09:14 | 0:41:28 | 0:31:10 | 0:10:18 | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{centos_8} tasks/module_selftest} | 2 | |
fail | 6533577 | 2021-11-29 14:23:50 | 2021-11-29 14:27:46 | 2021-11-29 14:47:39 | 0:19:53 | 0:09:44 | 0:10:09 | smithi | master | centos | 8.stream | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_8.stream_container_tools} mon_election/classic msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason: Command failed on smithi036 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:76db81421d171ab44a5bc7e9572f870733e5c8e3 pull'
fail | 6533578 | 2021-11-29 14:23:51 | 2021-11-29 14:28:26 | 2021-11-29 14:58:05 | 0:29:39 | 0:19:08 | 0:10:31 | smithi | master | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=76db81421d171ab44a5bc7e9572f870733e5c8e3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
pass | 6533579 | 2021-11-29 14:23:52 | 2021-11-29 14:28:37 | 2021-11-29 14:50:40 | 0:22:03 | 0:12:24 | 0:09:39 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 6533580 | 2021-11-29 14:23:53 | 2021-11-29 14:28:57 | 2021-11-29 14:53:37 | 0:24:40 | 0:13:09 | 0:11:31 | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mgr} | 1 | |
pass | 6533581 | 2021-11-29 14:23:54 | 2021-11-29 14:29:07 | 2021-11-29 15:08:46 | 0:39:39 | 0:29:14 | 0:10:25 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6533582 | 2021-11-29 14:23:55 | 2021-11-29 14:29:18 | 2021-11-29 14:50:10 | 0:20:52 | 0:09:46 | 0:11:06 | smithi | master | centos | 8.stream | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_8.stream_container_tools} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason: Command failed on smithi162 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:76db81421d171ab44a5bc7e9572f870733e5c8e3 pull'
dead | 6533583 | 2021-11-29 14:23:56 | 2021-11-29 14:30:58 | 2021-11-29 14:46:02 | 0:15:04 | | | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{centos_8} tasks/prometheus} | 2 | |
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
pass | 6533584 | 2021-11-29 14:23:57 | 2021-11-29 14:30:59 | 2021-11-29 15:14:37 | 0:43:38 | 0:33:01 | 0:10:37 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
pass | 6533585 | 2021-11-29 14:23:58 | 2021-11-29 14:31:09 | 2021-11-29 15:13:28 | 0:42:19 | 0:31:33 | 0:10:46 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6533586 | 2021-11-29 14:23:58 | 2021-11-29 14:31:29 | 2021-11-29 16:06:17 | 1:34:48 | 1:24:14 | 0:10:34 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/radosbench} | 2 | |
pass | 6533587 | 2021-11-29 14:23:59 | 2021-11-29 14:33:00 | 2021-11-29 14:58:57 | 0:25:57 | 0:19:43 | 0:06:14 | smithi | master | rhel | 8.4 | rados/objectstore/{backends/fusestore supported-random-distro$/{rhel_8}} | 1 | |
fail | 6533588 | 2021-11-29 14:24:00 | 2021-11-29 14:33:40 | 2021-11-29 14:53:14 | 0:19:34 | 0:09:41 | 0:09:53 | smithi | master | centos | 8.stream | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_8.stream_container_tools} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason: Command failed on smithi143 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:76db81421d171ab44a5bc7e9572f870733e5c8e3 pull'
pass | 6533589 | 2021-11-29 14:24:01 | 2021-11-29 14:34:11 | 2021-11-29 15:13:37 | 0:39:26 | 0:28:58 | 0:10:28 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/snaps-few-objects} | 2 | |
pass | 6533590 | 2021-11-29 14:24:02 | 2021-11-29 14:34:21 | 2021-11-29 14:51:54 | 0:17:33 | 0:07:25 | 0:10:08 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6533591 | 2021-11-29 14:24:03 | 2021-11-29 14:34:21 | 2021-11-29 18:14:03 | 3:39:42 | 3:33:18 | 0:06:24 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi179 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=76db81421d171ab44a5bc7e9572f870733e5c8e3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 6533592 | 2021-11-29 14:24:04 | 2021-11-29 14:34:32 | 2021-11-29 14:55:54 | 0:21:22 | 0:09:36 | 0:11:46 | smithi | master | centos | 8.stream | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_8.stream_container_tools} mon_election/connectivity msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: Command failed on smithi090 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:76db81421d171ab44a5bc7e9572f870733e5c8e3 pull'
pass | 6533593 | 2021-11-29 14:24:05 | 2021-11-29 14:36:12 | 2021-11-29 15:03:53 | 0:27:41 | 0:20:17 | 0:07:24 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
pass | 6533594 | 2021-11-29 14:24:06 | 2021-11-29 14:36:43 | 2021-11-29 15:03:09 | 0:26:26 | 0:14:28 | 0:11:58 | smithi | master | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
pass | 6533595 | 2021-11-29 14:24:07 | 2021-11-29 14:39:54 | 2021-11-29 15:09:01 | 0:29:07 | 0:19:04 | 0:10:03 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/dedup-io-snaps} | 2 | |
fail | 6533596 | 2021-11-29 14:24:08 | 2021-11-29 14:40:04 | 2021-11-29 15:10:19 | 0:30:15 | 0:18:43 | 0:11:32 | smithi | master | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi045 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=76db81421d171ab44a5bc7e9572f870733e5c8e3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 6533597 | 2021-11-29 14:24:09 | 2021-11-29 14:41:14 | 2021-11-29 15:00:26 | 0:19:12 | 0:09:34 | 0:09:38 | smithi | master | centos | 8.stream | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_8.stream_container_tools} mon_election/classic msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: Command failed on smithi089 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:76db81421d171ab44a5bc7e9572f870733e5c8e3 pull'
pass | 6533598 | 2021-11-29 14:24:10 | 2021-11-29 14:41:15 | 2021-11-29 15:03:42 | 0:22:27 | 0:10:56 | 0:11:31 | smithi | master | centos | 8.stream | rados/singleton/{all/peer mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6533599 | 2021-11-29 14:24:11 | 2021-11-29 14:42:15 | 2021-11-29 15:06:56 | 0:24:41 | 0:13:39 | 0:11:02 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6533600 | 2021-11-29 14:24:12 | 2021-11-29 14:43:16 | 2021-11-29 15:25:29 | 0:42:13 | 0:30:29 | 0:11:44 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |