User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
lflores | 2021-12-19 18:26:29 | 2021-12-19 18:30:00 | 2021-12-20 06:45:30 | 12:15:30 | rados | wip-bluestore-zero-detection | smithi | 2f8289c | 12 | 3 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6572634 | 2021-12-19 18:28:43 | 2021-12-19 18:30:00 | 2021-12-19 19:07:19 | 0:37:19 | 0:26:50 | 0:10:29 | smithi | master | centos | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 | |
pass | 6572635 | 2021-12-19 18:28:44 | 2021-12-19 18:30:00 | 2021-12-19 19:18:49 | 0:48:49 | 0:43:47 | 0:05:02 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
pass | 6572636 | 2021-12-19 18:28:45 | 2021-12-19 18:30:01 | 2021-12-19 19:05:03 | 0:35:02 | 0:28:20 | 0:06:42 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6572637 | 2021-12-19 18:28:46 | 2021-12-19 18:30:01 | 2021-12-19 18:55:21 | 0:25:20 | 0:15:03 | 0:10:17 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
fail | 6572638 | 2021-12-19 18:28:47 | 2021-12-19 18:30:01 | 2021-12-19 18:57:45 | 0:27:44 | 0:17:01 | 0:10:43 | smithi | master | | | rados/cephadm/workunits/{agent/on mon_election/classic task/test_orch_cli} | 1 |
Failure Reason: timeout expired in wait_until_healthy
pass | 6572639 | 2021-12-19 18:28:48 | 2021-12-19 18:30:02 | 2021-12-19 19:25:50 | 0:55:48 | 0:45:48 | 0:10:00 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6572640 | 2021-12-19 18:28:49 | 2021-12-19 18:30:02 | 2021-12-19 19:10:02 | 0:40:00 | 0:28:55 | 0:11:05 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
pass | 6572641 | 2021-12-19 18:28:50 | 2021-12-19 18:30:32 | 2021-12-19 19:07:16 | 0:36:44 | 0:26:17 | 0:10:27 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} | 2 | |
pass | 6572642 | 2021-12-19 18:28:51 | 2021-12-19 18:30:43 | 2021-12-19 19:15:24 | 0:44:41 | 0:31:20 | 0:13:21 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/host rook/master} | 3 | |
fail | 6572643 | 2021-12-19 18:28:52 | 2021-12-19 18:33:23 | 2021-12-19 18:56:37 | 0:23:14 | 0:17:15 | 0:05:59 | smithi | master | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi055 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2f8289cc42d98c773f754113826bbb51244a032f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
dead | 6572644 | 2021-12-19 18:28:53 | 2021-12-19 18:33:24 | 2021-12-20 06:43:35 | 12:10:11 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 6572645 | 2021-12-19 18:28:54 | 2021-12-19 18:34:44 | 2021-12-19 19:28:58 | 0:54:14 | 0:43:43 | 0:10:31 | smithi | master | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/rados_mon_osdmap_prune} | 2 | |
pass | 6572646 | 2021-12-19 18:28:55 | 2021-12-19 18:34:55 | 2021-12-19 19:10:11 | 0:35:16 | 0:28:43 | 0:06:33 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6572647 | 2021-12-19 18:28:56 | 2021-12-19 18:35:05 | 2021-12-19 19:05:01 | 0:29:56 | 0:21:52 | 0:08:04 | smithi | master | rhel | 8.4 | rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} | 2 | |
fail | 6572648 | 2021-12-19 18:28:57 | 2021-12-19 18:35:25 | 2021-12-19 18:58:47 | 0:23:22 | 0:16:01 | 0:07:21 | smithi | master | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi084 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2f8289cc42d98c773f754113826bbb51244a032f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
pass | 6572649 | 2021-12-19 18:28:58 | 2021-12-19 18:35:26 | 2021-12-19 19:15:24 | 0:39:58 | 0:28:23 | 0:11:35 | smithi | master | | | rados/cephadm/workunits/{agent/on mon_election/connectivity task/test_nfs} | 1 |
dead | 6572650 | 2021-12-19 18:28:59 | 2021-12-19 18:35:36 | 2021-12-20 06:45:30 | 12:09:54 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout