User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-06-27 15:45:02 | 2023-06-27 15:47:18 | 2023-06-28 04:06:56 | 12:19:38 | rados | wip-yuri-testing-2023-06-23-0831 | smithi | f424ac4 | 46 | 9 | 10 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 7317909 | 2023-06-27 15:46:11 | 2023-06-27 15:47:12 | 2023-06-27 15:54:25 | 0:07:13 | | | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 |
Failure Reason: Error reimaging machines: [Errno 104] Connection reset by peer
pass | 7317910 | 2023-06-27 15:46:12 | 2023-06-27 15:47:12 | 2023-06-27 16:18:07 | 0:30:55 | 0:19:03 | 0:11:52 | smithi | main | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
pass | 7317911 | 2023-06-27 15:46:13 | 2023-06-27 15:47:13 | 2023-06-27 16:34:09 | 0:46:56 | 0:26:57 | 0:19:59 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_20.04}} | 1 | |
fail | 7317912 | 2023-06-27 15:46:13 | 2023-06-27 15:47:13 | 2023-06-27 19:14:36 | 3:27:23 | 3:16:03 | 0:11:20 | smithi | main | centos | 8.stream | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi156 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f424ac457fa02811c2a787afcb876dab43bd0065 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
pass | 7317913 | 2023-06-27 15:46:14 | 2023-06-27 15:47:13 | 2023-06-27 17:15:58 | 1:28:45 | 1:11:26 | 0:17:19 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/dashboard} | 2 | |
dead | 7317914 | 2023-06-27 15:46:15 | 2023-06-27 15:47:14 | 2023-06-28 04:03:42 | 12:16:28 | | | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_20.04}} | 1 |
Failure Reason: hit max job timeout
pass | 7317915 | 2023-06-27 15:46:16 | 2023-06-27 15:47:14 | 2023-06-27 16:20:08 | 0:32:54 | 0:16:21 | 0:16:33 | smithi | main | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/c2c} | 1 | |
pass | 7317916 | 2023-06-27 15:46:16 | 2023-06-27 15:47:15 | 2023-06-27 16:14:42 | 0:27:27 | 0:16:28 | 0:10:59 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} | 2 | |
dead | 7317917 | 2023-06-27 15:46:17 | 2023-06-27 15:47:15 | 2023-06-27 15:52:10 | 0:04:55 | | | smithi | main | ubuntu | 22.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} | 1 |
Failure Reason: Error reimaging machines: [Errno 104] Connection reset by peer
pass | 7317918 | 2023-06-27 15:46:18 | 2023-06-27 15:47:15 | 2023-06-27 16:23:06 | 0:35:51 | 0:27:14 | 0:08:37 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/redirect} | 2 | |
pass | 7317919 | 2023-06-27 15:46:19 | 2023-06-27 15:47:16 | 2023-06-27 16:34:45 | 0:47:29 | 0:29:56 | 0:17:33 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7317920 | 2023-06-27 15:46:20 | 2023-06-27 15:47:16 | 2023-06-27 16:08:39 | 0:21:23 | 0:10:49 | 0:10:34 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 7317921 | 2023-06-27 15:46:21 | 2023-06-27 15:47:16 | 2023-06-27 15:51:51 | 0:04:35 | | | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_set_mon_crush_locations} | 3 |
Failure Reason: Error reimaging machines: [Errno 104] Connection reset by peer
pass | 7317922 | 2023-06-27 15:46:21 | 2023-06-27 15:47:17 | 2023-06-27 16:22:16 | 0:34:59 | 0:19:47 | 0:15:12 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
pass | 7317923 | 2023-06-27 15:46:22 | 2023-06-27 15:47:17 | 2023-06-27 16:23:21 | 0:36:04 | 0:19:51 | 0:16:13 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
fail | 7317924 | 2023-06-27 15:46:23 | 2023-06-27 15:47:18 | 2023-06-27 19:13:00 | 3:25:42 | 3:11:46 | 0:13:56 | smithi | main | ubuntu | 22.04 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi144 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f424ac457fa02811c2a787afcb876dab43bd0065 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 7317925 | 2023-06-27 15:46:24 | 2023-06-27 15:47:18 | 2023-06-27 16:40:53 | 0:53:35 | 0:33:28 | 0:20:07 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi082 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f424ac457fa02811c2a787afcb876dab43bd0065 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
pass | 7317926 | 2023-06-27 15:46:24 | 2023-06-27 15:47:18 | 2023-06-27 16:09:16 | 0:21:58 | 0:16:55 | 0:05:03 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7317927 | 2023-06-27 15:46:25 | 2023-06-27 15:47:19 | 2023-06-27 17:29:08 | 1:41:49 | 1:24:48 | 0:17:01 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} | 3 | |
pass | 7317928 | 2023-06-27 15:46:26 | 2023-06-27 15:47:19 | 2023-06-27 16:05:41 | 0:18:22 | 0:09:06 | 0:09:16 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
pass | 7317929 | 2023-06-27 15:46:27 | 2023-06-27 15:47:19 | 2023-06-27 16:16:44 | 0:29:25 | 0:13:08 | 0:16:17 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_striper} | 2 | |
pass | 7317930 | 2023-06-27 15:46:27 | 2023-06-27 15:47:20 | 2023-06-27 16:22:32 | 0:35:12 | 0:20:30 | 0:14:42 | smithi | main | centos | 8.stream | rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} | 2 | |
pass | 7317931 | 2023-06-27 15:46:28 | 2023-06-27 15:47:20 | 2023-06-27 16:12:53 | 0:25:33 | 0:10:49 | 0:14:44 | smithi | main | ubuntu | 22.04 | rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} | 2 | |
pass | 7317932 | 2023-06-27 15:46:29 | 2023-06-27 15:47:20 | 2023-06-27 16:09:58 | 0:22:38 | 0:17:30 | 0:05:08 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_3.0} 1-start 2-services/rgw 3-final} | 1 | |
pass | 7317933 | 2023-06-27 15:46:30 | 2023-06-27 15:47:21 | 2023-06-27 16:37:17 | 0:49:56 | 0:32:40 | 0:17:16 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 2 | |
dead | 7317934 | 2023-06-27 15:46:31 | 2023-06-27 15:47:21 | 2023-06-27 16:01:51 | 0:14:30 | | | smithi | main | ubuntu | 22.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_big} | 2 |
Failure Reason: Error reimaging machines: [Errno 104] Connection reset by peer
pass | 7317935 | 2023-06-27 15:46:31 | 2023-06-27 15:47:22 | 2023-06-27 16:23:00 | 0:35:38 | 0:21:23 | 0:14:15 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 7317936 | 2023-06-27 15:46:32 | 2023-06-27 15:47:22 | 2023-06-27 16:29:23 | 0:42:01 | 0:24:07 | 0:17:54 | smithi | main | ubuntu | 22.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} | 1 | |
pass | 7317937 | 2023-06-27 15:46:33 | 2023-06-27 15:47:22 | 2023-06-27 16:12:35 | 0:25:13 | 0:09:18 | 0:15:55 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_20.04} tasks/mon_clock_with_skews} | 2 | |
pass | 7317938 | 2023-06-27 15:46:34 | 2023-06-27 15:47:23 | 2023-06-27 16:21:16 | 0:33:53 | 0:16:23 | 0:17:30 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7317939 | 2023-06-27 15:46:35 | 2023-06-27 15:47:23 | 2023-06-27 16:25:35 | 0:38:12 | 0:21:58 | 0:16:14 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/rbd_cls} | 3 | |
pass | 7317940 | 2023-06-27 15:46:36 | 2023-06-27 15:47:23 | 2023-06-27 16:22:18 | 0:34:55 | 0:17:12 | 0:17:43 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
pass | 7317941 | 2023-06-27 15:46:36 | 2023-06-27 15:47:24 | 2023-06-27 16:17:34 | 0:30:10 | 0:16:11 | 0:13:59 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
pass | 7317942 | 2023-06-27 15:46:37 | 2023-06-27 15:47:24 | 2023-06-27 16:26:57 | 0:39:33 | 0:23:05 | 0:16:28 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-small-objects-balanced} | 2 | |
pass | 7317943 | 2023-06-27 15:46:38 | 2023-06-27 15:47:25 | 2023-06-27 16:31:28 | 0:44:03 | 0:26:12 | 0:17:51 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{ubuntu_20.04}} | 1 | |
dead | 7317944 | 2023-06-27 15:46:39 | 2023-06-27 15:47:25 | 2023-06-27 16:01:00 | 0:13:35 | | | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} | 2 |
Failure Reason: Error reimaging machines: [Errno 104] Connection reset by peer
pass | 7317945 | 2023-06-27 15:46:39 | 2023-06-27 15:47:25 | 2023-06-27 16:23:15 | 0:35:50 | 0:19:44 | 0:16:06 | smithi | main | rhel | 8.6 | rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} | 2 | |
pass | 7317946 | 2023-06-27 15:46:40 | 2023-06-27 15:47:26 | 2023-06-27 16:28:07 | 0:40:41 | 0:22:33 | 0:18:08 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7317947 | 2023-06-27 15:46:41 | 2023-06-27 15:47:26 | 2023-06-27 16:16:03 | 0:28:37 | 0:14:59 | 0:13:38 | smithi | main | ubuntu | 22.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} | 1 | |
pass | 7317948 | 2023-06-27 15:46:42 | 2023-06-27 15:47:37 | 2023-06-27 16:21:38 | 0:34:01 | 0:15:19 | 0:18:42 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache-agent-small} | 2 | |
fail | 7317949 | 2023-06-27 15:46:43 | 2023-06-27 15:47:37 | 2023-06-27 19:11:39 | 3:24:02 | 3:17:19 | 0:06:43 | smithi | main | rhel | 8.6 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi019 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f424ac457fa02811c2a787afcb876dab43bd0065 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
pass | 7317950 | 2023-06-27 15:46:44 | 2023-06-27 15:47:37 | 2023-06-27 16:44:56 | 0:57:19 | 0:39:09 | 0:18:10 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 | |
pass | 7317951 | 2023-06-27 15:46:44 | 2023-06-27 15:47:38 | 2023-06-27 16:20:21 | 0:32:43 | 0:12:55 | 0:19:48 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7317952 | 2023-06-27 15:46:45 | 2023-06-27 15:47:38 | 2023-06-27 16:17:54 | 0:30:16 | 0:18:02 | 0:12:14 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7317953 | 2023-06-27 15:46:46 | 2023-06-27 15:47:38 | 2023-06-27 16:20:59 | 0:33:21 | 0:20:59 | 0:12:22 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
dead | 7317954 | 2023-06-27 15:46:47 | 2023-06-27 15:47:39 | 2023-06-28 04:00:49 | 12:13:10 | | | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 |
Failure Reason: hit max job timeout
fail | 7317955 | 2023-06-27 15:46:47 | 2023-06-27 15:47:39 | 2023-06-27 19:41:07 | 3:53:28 | 3:26:02 | 0:27:26 | smithi | main | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mon-stretch} | 1 | |
Failure Reason: Command failed (workunit test mon-stretch/mon-stretch-fail-recovery.sh) on smithi067 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f424ac457fa02811c2a787afcb876dab43bd0065 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh'
fail | 7317956 | 2023-06-27 15:46:48 | 2023-06-27 15:52:00 | 2023-06-27 16:17:12 | 0:25:12 | 0:10:30 | 0:14:42 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_20.04}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi141 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f424ac457fa02811c2a787afcb876dab43bd0065 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
pass | 7317957 | 2023-06-27 15:46:49 | 2023-06-27 15:52:01 | 2023-06-27 16:23:05 | 0:31:04 | 0:15:52 | 0:15:12 | smithi | main | ubuntu | 22.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/readwrite} | 2 | |
dead | 7317958 | 2023-06-27 15:46:49 | 2023-06-27 15:52:21 | 2023-06-28 04:06:56 | 12:14:35 | | | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 |
Failure Reason: hit max job timeout
pass | 7317959 | 2023-06-27 15:46:50 | 2023-06-27 15:54:42 | 2023-06-27 16:41:15 | 0:46:33 | 0:29:59 | 0:16:34 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_20.04} thrashers/one workloads/rados_mon_workunits} | 2 | |
pass | 7317960 | 2023-06-27 15:46:51 | 2023-06-27 16:01:13 | 2023-06-27 16:33:53 | 0:32:40 | 0:21:54 | 0:10:46 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 2 | |
dead | 7317961 | 2023-06-27 15:46:52 | 2023-06-27 16:02:04 | 2023-06-27 16:06:46 | 0:04:42 | | | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/mon} | 1 |
Failure Reason: Error reimaging machines: [Errno 104] Connection reset by peer
pass | 7317962 | 2023-06-27 15:46:53 | 2023-06-27 16:02:14 | 2023-06-27 16:30:37 | 0:28:23 | 0:14:19 | 0:14:04 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache} | 2 | |
pass | 7317963 | 2023-06-27 15:46:53 | 2023-06-27 16:07:05 | 2023-06-27 16:26:58 | 0:19:53 | 0:08:24 | 0:11:29 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{ubuntu_20.04}} | 1 | |
dead | 7317964 | 2023-06-27 15:46:54 | 2023-06-27 16:08:46 | 2023-06-27 16:14:47 | 0:06:01 | | | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 |
Failure Reason: Error reimaging machines: [Errno 104] Connection reset by peer
pass | 7317965 | 2023-06-27 15:46:55 | 2023-06-27 16:10:07 | 2023-06-27 16:42:31 | 0:32:24 | 0:23:09 | 0:09:15 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7317966 | 2023-06-27 15:46:56 | 2023-06-27 16:12:58 | 2023-06-27 16:42:14 | 0:29:16 | 0:18:09 | 0:11:07 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep} | 2 | |
fail | 7317967 | 2023-06-27 15:46:56 | 2023-06-27 16:14:48 | 2023-06-27 16:34:55 | 0:20:07 | 0:14:55 | 0:05:12 | smithi | main | rhel | 8.6 | rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f424ac457fa02811c2a787afcb876dab43bd0065 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
fail | 7317968 | 2023-06-27 15:46:57 | 2023-06-27 16:14:59 | 2023-06-27 19:37:57 | 3:22:58 | 3:13:15 | 0:09:43 | smithi | main | centos | 8.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi040 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f424ac457fa02811c2a787afcb876dab43bd0065 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 7317969 | 2023-06-27 15:46:58 | 2023-06-27 16:14:59 | 2023-06-27 16:54:02 | 0:39:03 | 0:26:35 | 0:12:28 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi121 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f424ac457fa02811c2a787afcb876dab43bd0065 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
pass | 7317970 | 2023-06-27 15:46:59 | 2023-06-27 16:16:50 | 2023-06-27 16:42:28 | 0:25:38 | 0:14:48 | 0:10:50 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
pass | 7317971 | 2023-06-27 15:46:59 | 2023-06-27 16:17:20 | 2023-06-27 17:26:52 | 1:09:32 | 0:52:38 | 0:16:54 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 7317972 | 2023-06-27 15:47:00 | 2023-06-27 16:17:41 | 2023-06-27 16:42:54 | 0:25:13 | 0:17:59 | 0:07:14 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_3.0} 1-start 2-services/rgw 3-final} | 1 | |
pass | 7317973 | 2023-06-27 15:47:01 | 2023-06-27 16:18:01 | 2023-06-27 16:41:49 | 0:23:48 | 0:14:34 | 0:09:14 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 |
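Two failure signatures recur in the table above: dead jobs reimaged with `[Errno 104] Connection reset by peer`, and workunits that exited with status 124 after running under `timeout 3h`. As a minimal illustration (not part of the run itself; assumes a Linux host with GNU coreutils), errno 104 is `ECONNRESET` and 124 is the exit status `timeout` returns when it kills an overrunning command:

```python
import errno
import subprocess

# [Errno 104] on Linux is ECONNRESET: the peer dropped the TCP
# connection, here during machine reimaging.
print(errno.errorcode[104])   # ECONNRESET

# GNU coreutils `timeout` exits with 124 when the wrapped command is
# killed for exceeding its limit -- the same status as the cephtool
# workunit failures, which ran under `timeout 3h`.
proc = subprocess.run(["timeout", "1", "sleep", "5"])
print(proc.returncode)        # 124
```

So the status-124 cephtool failures are 3-hour timeouts rather than test assertions, and the reimaging errors indicate the target machines closed the connection before provisioning completed.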