User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2024-02-16 15:30:50 | 2024-02-16 16:10:46 | 2024-02-17 17:26:34 | 1 day, 1:15:48 | rados | wip-yuri-testing-2024-02-13-0903 | smithi | 2fad629 | 25 | 27 | 87 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 7562464 | 2024-02-16 15:32:13 | 2024-02-16 16:10:46 | 2024-02-17 04:24:24 | 12:13:38 | | | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 |
Failure Reason: hit max job timeout
pass | 7562465 | 2024-02-16 15:32:14 | 2024-02-16 16:15:47 | 2024-02-16 16:54:17 | 0:38:30 | 0:28:34 | 0:09:56 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 3 | |
dead | 7562466 | 2024-02-16 15:32:14 | 2024-02-16 16:16:28 | 2024-02-17 04:43:48 | 12:27:20 | | | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 |
Failure Reason: hit max job timeout
dead | 7562467 | 2024-02-16 15:32:15 | 2024-02-16 16:29:30 | 2024-02-16 21:54:35 | 5:25:05 | | | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects} | 2 |
dead | 7562468 | 2024-02-16 15:32:16 | 2024-02-16 16:31:51 | 2024-02-17 04:41:00 | 12:09:09 | | | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} | 3 |
Failure Reason: hit max job timeout
dead | 7562469 | 2024-02-16 15:32:17 | 2024-02-16 16:31:51 | 2024-02-17 04:40:34 | 12:08:43 | | | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/quincy backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/test_rbd_api} | 3 |
Failure Reason: hit max job timeout
dead | 7562470 | 2024-02-16 15:32:18 | 2024-02-16 16:31:52 | 2024-02-16 21:54:04 | 5:22:12 | | | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 |
dead | 7562471 | 2024-02-16 15:32:18 | 2024-02-16 16:31:52 | 2024-02-17 04:41:42 | 12:09:50 | | | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 |
Failure Reason: hit max job timeout
fail | 7562472 | 2024-02-16 15:32:19 | 2024-02-16 16:32:42 | 2024-02-16 18:35:10 | 2:02:28 | 1:51:07 | 0:11:21 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
pass | 7562473 | 2024-02-16 15:32:20 | 2024-02-16 16:34:03 | 2024-02-16 17:06:02 | 0:31:59 | 0:18:22 | 0:13:37 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 | |
dead | 7562474 | 2024-02-16 15:32:21 | 2024-02-16 16:36:54 | 2024-02-17 04:46:00 | 12:09:06 | | | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 |
Failure Reason: hit max job timeout
fail | 7562475 | 2024-02-16 15:32:22 | 2024-02-16 16:37:24 | 2024-02-16 19:12:38 | 2:35:14 | 2:22:40 | 0:12:34 | smithi | main | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/scrub} | 1 | |
Failure Reason: Command failed (workunit test scrub/osd-scrub-test.sh) on smithi067 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2fad629fd3f128a0e41fe61243031e0d2b287b9d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-test.sh'
fail | 7562476 | 2024-02-16 15:32:22 | 2024-02-16 16:38:15 | 2024-02-16 20:08:23 | 3:30:08 | 3:17:02 | 0:13:06 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi134 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2fad629fd3f128a0e41fe61243031e0d2b287b9d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
dead | 7562477 | 2024-02-16 15:32:23 | 2024-02-16 16:40:25 | 2024-02-17 04:49:17 | 12:08:52 | | | smithi | main | ubuntu | 22.04 | rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 |
Failure Reason: hit max job timeout
fail | 7562478 | 2024-02-16 15:32:24 | 2024-02-16 16:40:26 | 2024-02-16 17:58:10 | 1:17:44 | 1:08:11 | 0:09:33 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
Failure Reason: reached maximum tries (501) after waiting for 3000 seconds
dead | 7562479 | 2024-02-16 15:32:25 | 2024-02-16 16:40:26 | 2024-02-16 21:55:10 | 5:14:44 | | | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |
dead | 7562480 | 2024-02-16 15:32:26 | 2024-02-16 16:41:07 | 2024-02-17 04:51:00 | 12:09:53 | | | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 |
Failure Reason: hit max job timeout
dead | 7562481 | 2024-02-16 15:32:26 | 2024-02-16 16:41:47 | 2024-02-17 04:54:31 | 12:12:44 | | | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_ca_signed_key} | 2 |
Failure Reason: hit max job timeout
fail | 7562482 | 2024-02-16 15:32:27 | 2024-02-16 16:45:38 | 2024-02-16 20:05:22 | 3:19:44 | 3:10:47 | 0:08:57 | smithi | main | ubuntu | 22.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi195 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2fad629fd3f128a0e41fe61243031e0d2b287b9d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
dead | 7562483 | 2024-02-16 15:32:28 | 2024-02-16 16:45:39 | 2024-02-16 21:55:02 | 5:09:23 | | | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 |
dead | 7562484 | 2024-02-16 15:32:29 | 2024-02-16 16:47:19 | 2024-02-17 04:56:03 | 12:08:44 | | | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} | 3 |
Failure Reason: hit max job timeout
dead | 7562485 | 2024-02-16 15:32:29 | 2024-02-16 16:47:20 | 2024-02-17 04:57:36 | 12:10:16 | | | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 |
Failure Reason: hit max job timeout
fail | 7562486 | 2024-02-16 15:32:30 | 2024-02-16 16:47:20 | 2024-02-16 18:45:43 | 1:58:23 | 1:47:22 | 0:11:01 | smithi | main | centos | 9.stream | rados/upgrade/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: "2024-02-16T17:16:48.689588+0000 mon.a (mon.0) 69 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,b" in cluster log
pass | 7562487 | 2024-02-16 15:32:31 | 2024-02-16 16:47:21 | 2024-02-16 17:12:31 | 0:25:10 | 0:14:48 | 0:10:22 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/redirect_promote_tests} | 2 | |
dead | 7562488 | 2024-02-16 15:32:32 | 2024-02-16 16:47:21 | 2024-02-16 16:48:45 | 0:01:24 | | | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/repave-all} | 2 |
Failure Reason: Error reimaging machines: Failed to power on smithi188
dead | 7562489 | 2024-02-16 15:32:33 | 2024-02-16 16:47:42 | 2024-02-16 16:48:46 | 0:01:04 | | | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 |
Failure Reason: Error reimaging machines: Failed to power on smithi161
dead | 7562490 | 2024-02-16 15:32:33 | 2024-02-16 16:47:42 | 2024-02-16 16:48:46 | 0:01:04 | | | smithi | main | centos | 9.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} | 1 |
Failure Reason: Error reimaging machines: Failed to power on smithi129
fail | 7562491 | 2024-02-16 15:32:34 | 2024-02-16 16:47:42 | 2024-02-16 17:06:14 | 0:18:32 | 0:08:09 | 0:10:23 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi176 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2fad629fd3f128a0e41fe61243031e0d2b287b9d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
dead | 7562492 | 2024-02-16 15:32:35 | 2024-02-16 16:49:41 | 2024-02-16 21:54:50 | 5:05:09 | | | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/small-objects-balanced} | 2 |
dead | 7562493 | 2024-02-16 15:32:36 | 2024-02-16 16:52:12 | 2024-02-17 05:01:23 | 12:09:11 | | | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm_timeout} | 1 |
Failure Reason: hit max job timeout
pass | 7562494 | 2024-02-16 15:32:36 | 2024-02-16 16:52:53 | 2024-02-16 17:31:03 | 0:38:10 | 0:27:08 | 0:11:02 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} | 2 | |
pass | 7562495 | 2024-02-16 15:32:37 | 2024-02-16 16:53:23 | 2024-02-16 17:23:18 | 0:29:55 | 0:20:04 | 0:09:51 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/small-objects-localized} | 2 | |
pass | 7562496 | 2024-02-16 15:32:38 | 2024-02-16 16:54:24 | 2024-02-16 17:31:33 | 0:37:09 | 0:27:13 | 0:09:56 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
dead | 7562497 | 2024-02-16 15:32:39 | 2024-02-16 16:54:24 | 2024-02-17 05:04:01 | 12:09:37 | | | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 |
Failure Reason: hit max job timeout
fail | 7562498 | 2024-02-16 15:32:40 | 2024-02-16 16:55:04 | 2024-02-16 20:15:12 | 3:20:08 | 3:07:20 | 0:12:48 | smithi | main | centos | 9.stream | rados/standalone/{supported-random-distro$/{centos_latest} workloads/crush} | 1 | |
Failure Reason: Command failed (workunit test crush/crush-choose-args.sh) on smithi094 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2fad629fd3f128a0e41fe61243031e0d2b287b9d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/crush/crush-choose-args.sh'
pass | 7562499 | 2024-02-16 15:32:40 | 2024-02-16 16:57:55 | 2024-02-16 17:31:59 | 0:34:04 | 0:23:14 | 0:10:50 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/small-objects} | 2 | |
dead | 7562500 | 2024-02-16 15:32:41 | 2024-02-16 16:57:56 | 2024-02-17 05:07:23 | 12:09:27 | | | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} | 3 |
Failure Reason: hit max job timeout
fail | 7562501 | 2024-02-16 15:32:42 | 2024-02-16 16:58:46 | 2024-02-16 18:04:24 | 1:05:38 | 0:54:43 | 0:10:55 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
dead | 7562502 | 2024-02-16 15:32:43 | 2024-02-16 16:58:47 | 2024-02-17 05:07:05 | 12:08:18 | | | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} | 2 |
Failure Reason: hit max job timeout
dead | 7562503 | 2024-02-16 15:32:44 | 2024-02-16 16:58:47 | 2024-02-16 21:54:57 | 4:56:10 | | | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-balanced} | 2 |
dead | 7562504 | 2024-02-16 15:32:44 | 2024-02-16 17:00:38 | 2024-02-17 05:09:46 | 12:09:08 | | | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_host_drain} | 3 |
Failure Reason: hit max job timeout
dead | 7562505 | 2024-02-16 15:32:45 | 2024-02-16 17:00:38 | 2024-02-17 05:11:31 | 12:10:53 | | | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 |
Failure Reason: hit max job timeout
dead | 7562506 | 2024-02-16 15:32:46 | 2024-02-16 17:03:09 | 2024-02-16 21:54:56 | 4:51:47 | | | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 3 |
dead | 7562507 | 2024-02-16 15:32:47 | 2024-02-16 17:03:19 | 2024-02-17 05:12:32 | 12:09:13 | | | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 |
Failure Reason: hit max job timeout
dead | 7562508 | 2024-02-16 15:32:47 | 2024-02-16 17:03:20 | 2024-02-16 21:56:30 | 4:53:10 | | | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/snaps-few-objects} | 2 |
dead | 7562509 | 2024-02-16 15:32:48 | 2024-02-16 17:03:21 | 2024-02-17 05:10:56 | 12:07:35 | | | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 |
Failure Reason: hit max job timeout
dead | 7562510 | 2024-02-16 15:32:49 | 2024-02-16 17:03:21 | 2024-02-16 21:55:50 | 4:52:29 | | | smithi | main | centos | 9.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 |
pass | 7562511 | 2024-02-16 15:32:50 | 2024-02-16 17:06:02 | 2024-02-16 17:33:31 | 0:27:29 | 0:15:57 | 0:11:32 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
dead | 7562512 | 2024-02-16 15:32:51 | 2024-02-16 17:07:53 | 2024-02-16 21:55:52 | 4:47:59 | | | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 |
dead | 7562513 | 2024-02-16 15:32:51 | 2024-02-16 17:11:13 | 2024-02-17 05:20:39 | 12:09:26 | | | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} | 3 |
Failure Reason: hit max job timeout
fail | 7562514 | 2024-02-16 15:32:52 | 2024-02-16 17:12:34 | 2024-02-16 20:30:41 | 3:18:07 | 3:08:22 | 0:09:45 | smithi | main | centos | 9.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi129 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2fad629fd3f128a0e41fe61243031e0d2b287b9d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
pass | 7562515 | 2024-02-16 15:32:53 | 2024-02-16 17:13:15 | 2024-02-16 17:46:15 | 0:33:00 | 0:22:41 | 0:10:19 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
dead | 7562516 | 2024-02-16 15:32:54 | 2024-02-16 17:13:35 | 2024-02-17 05:27:02 | 12:13:27 | | | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 |
Failure Reason: hit max job timeout
dead | 7562517 | 2024-02-16 15:32:55 | 2024-02-16 17:19:37 | 2024-02-17 05:28:04 | 12:08:27 | | | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} | 3 |
Failure Reason: hit max job timeout
dead | 7562518 | 2024-02-16 15:32:55 | 2024-02-16 17:19:38 | 2024-02-17 05:27:59 | 12:08:21 | | | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_orch_cli} | 1 |
Failure Reason: hit max job timeout
dead | 7562519 | 2024-02-16 15:32:56 | 2024-02-16 17:19:38 | 2024-02-17 05:28:40 | 12:09:02 | | | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} | 2 |
Failure Reason: hit max job timeout
dead | 7562520 | 2024-02-16 15:32:57 | 2024-02-16 17:19:39 | 2024-02-17 05:32:00 | 12:12:21 | | | smithi | main | ubuntu | 22.04 | rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 |
Failure Reason: hit max job timeout
dead | 7562521 | 2024-02-16 15:32:58 | 2024-02-16 17:23:20 | 2024-02-16 21:55:39 | 4:32:19 | | | smithi | main | centos | 9.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |
dead | 7562522 | 2024-02-16 15:32:58 | 2024-02-16 22:33:14 | 2024-02-17 10:40:59 | 12:07:45 | | | smithi | main | centos | 9.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream_runc} 1-start 2-services/rgw 3-final} | 1 |
Failure Reason: hit max job timeout
fail | 7562523 | 2024-02-16 15:32:59 | 2024-02-16 22:33:14 | 2024-02-16 22:56:43 | 0:23:29 | 0:14:27 | 0:09:02 | smithi | main | ubuntu | 22.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-hybrid} supported-random-distro$/{ubuntu_latest} tasks/module_selftest} | 2 | |
Failure Reason: Test failure: test_diskprediction_local (tasks.mgr.test_module_selftest.TestModuleSelftest)
fail | 7562524 | 2024-02-16 15:33:00 | 2024-02-16 22:33:14 | 2024-02-16 22:52:47 | 0:19:33 | 0:09:47 | 0:09:46 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
dead | 7562525 | 2024-02-16 15:33:01 | 2024-02-16 22:33:15 | 2024-02-17 10:56:01 | 12:22:46 | | | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_orch_cli_mon} | 5 |
Failure Reason: hit max job timeout
pass | 7562526 | 2024-02-16 15:33:02 | 2024-02-16 22:46:47 | 2024-02-16 23:14:00 | 0:27:13 | 0:17:34 | 0:09:39 | smithi | main | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mgr} | 1 | |
fail | 7562527 | 2024-02-16 15:33:02 | 2024-02-16 22:46:47 | 2024-02-17 01:08:40 | 2:21:53 | 2:12:44 | 0:09:09 | smithi | main | centos | 9.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-radosbench} | 2 | |
Failure Reason: reached maximum tries (801) after waiting for 4800 seconds
pass | 7562528 | 2024-02-16 15:33:03 | 2024-02-16 22:46:48 | 2024-02-16 23:18:22 | 0:31:34 | 0:21:57 | 0:09:37 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-pool-snaps} | 2 | |
dead | 7562529 | 2024-02-16 15:33:04 | 2024-02-16 22:46:48 | 2024-02-17 10:54:57 | 12:08:09 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |||
Failure Reason:
hit max job timeout |
||||||||||||||
dead | 7562530 | 2024-02-16 15:33:05 | 2024-02-16 22:46:48 | 2024-02-17 10:56:28 | 12:09:40 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache-snaps-balanced} | 2 | |||
Failure Reason: hit max job timeout
dead | 7562531 | 2024-02-16 15:33:06 | 2024-02-16 22:46:49 | 2024-02-17 10:55:07 | 12:08:18 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_rgw_multisite} | 3 | |||
Failure Reason: hit max job timeout
fail | 7562532 | 2024-02-16 15:33:06 | 2024-02-16 22:46:49 | 2024-02-16 23:17:23 | 0:30:34 | 0:21:37 | 0:08:57 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7562533 | 2024-02-16 15:33:07 | 2024-02-16 22:46:49 | 2024-02-16 23:36:35 | 0:49:46 | 0:39:38 | 0:10:08 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
dead | 7562534 | 2024-02-16 15:33:08 | 2024-02-16 22:46:50 | 2024-02-17 10:55:41 | 12:08:51 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |||
Failure Reason: hit max job timeout
pass | 7562535 | 2024-02-16 15:33:09 | 2024-02-16 22:46:50 | 2024-02-16 23:19:40 | 0:32:50 | 0:23:35 | 0:09:15 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/cache-snaps} | 2 | |
dead | 7562536 | 2024-02-16 15:33:09 | 2024-02-16 22:46:50 | 2024-02-17 10:56:25 | 12:09:35 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_set_mon_crush_locations} | 3 | |||
Failure Reason: hit max job timeout
dead | 7562537 | 2024-02-16 15:33:10 | 2024-02-16 22:46:51 | 2024-02-17 11:10:34 | 12:23:43 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/snaps-few-objects} | 3 | |||
Failure Reason: hit max job timeout
dead | 7562538 | 2024-02-16 15:33:11 | 2024-02-16 23:02:13 | 2024-02-17 11:11:56 | 12:09:43 | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |||
Failure Reason: hit max job timeout
fail | 7562539 | 2024-02-16 15:33:12 | 2024-02-16 23:02:14 | 2024-02-17 02:41:42 | 3:39:28 | 3:29:06 | 0:10:22 | smithi | main | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/misc} | 1 | |
Failure Reason: Command failed (workunit test misc/test-ceph-helpers.sh) on smithi002 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2fad629fd3f128a0e41fe61243031e0d2b287b9d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/misc/test-ceph-helpers.sh'
dead | 7562540 | 2024-02-16 15:33:13 | 2024-02-16 23:02:14 | 2024-02-17 11:10:52 | 12:08:38 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |||
Failure Reason: hit max job timeout
dead | 7562541 | 2024-02-16 15:33:13 | 2024-02-16 23:02:15 | 2024-02-17 11:11:35 | 12:09:20 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-small-objects-balanced} | 2 | |||
Failure Reason: hit max job timeout
dead | 7562542 | 2024-02-16 15:33:14 | 2024-02-16 23:02:25 | 2024-02-17 11:11:07 | 12:08:42 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason: hit max job timeout
pass | 7562543 | 2024-02-16 15:33:15 | 2024-02-16 23:02:25 | 2024-02-16 23:34:07 | 0:31:42 | 0:22:44 | 0:08:58 | smithi | main | ubuntu | 22.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_mix} | 2 | |
dead | 7562544 | 2024-02-16 15:33:16 | 2024-02-16 23:02:36 | 2024-02-17 11:10:45 | 12:08:09 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |||
Failure Reason: hit max job timeout
dead | 7562545 | 2024-02-16 15:33:16 | 2024-02-16 23:02:36 | 2024-02-17 11:18:25 | 12:15:49 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/pool-snaps-few-objects} | 2 | |||
Failure Reason: hit max job timeout
dead | 7562546 | 2024-02-16 15:33:17 | 2024-02-16 23:08:57 | 2024-02-17 11:25:47 | 12:16:50 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} | 2 | |||
Failure Reason: hit max job timeout
pass | 7562547 | 2024-02-16 15:33:18 | 2024-02-16 23:17:39 | 2024-02-16 23:43:09 | 0:25:30 | 0:15:33 | 0:09:57 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7562548 | 2024-02-16 15:33:19 | 2024-02-16 23:17:39 | 2024-02-17 02:40:53 | 3:23:14 | 3:14:11 | 0:09:03 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi120 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2fad629fd3f128a0e41fe61243031e0d2b287b9d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 7562549 | 2024-02-16 15:33:20 | 2024-02-16 23:17:39 | 2024-02-17 02:36:38 | 3:18:59 | 3:08:36 | 0:10:23 | smithi | main | centos | 9.stream | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi050 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2fad629fd3f128a0e41fe61243031e0d2b287b9d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
dead | 7562550 | 2024-02-16 15:33:20 | 2024-02-16 23:17:40 | 2024-02-17 11:27:57 | 12:10:17 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |||
Failure Reason: hit max job timeout
dead | 7562551 | 2024-02-16 15:33:21 | 2024-02-16 23:17:40 | 2024-02-17 11:26:14 | 12:08:34 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |||
Failure Reason: hit max job timeout
fail | 7562552 | 2024-02-16 15:33:22 | 2024-02-16 23:17:40 | 2024-02-17 00:36:55 | 1:19:15 | 1:10:09 | 0:09:06 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
Failure Reason: reached maximum tries (501) after waiting for 3000 seconds
dead | 7562553 | 2024-02-16 15:33:23 | 2024-02-16 23:17:41 | 2024-02-17 11:25:58 | 12:08:17 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} | 3 | |||
Failure Reason: hit max job timeout
fail | 7562554 | 2024-02-16 15:33:24 | 2024-02-16 23:17:41 | 2024-02-16 23:56:39 | 0:38:58 | 0:28:10 | 0:10:48 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 3 | |
Failure Reason: Command failed on smithi018 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
dead | 7562555 | 2024-02-16 15:33:24 | 2024-02-16 23:17:51 | 2024-02-17 11:29:50 | 12:11:59 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} | 1 | |||
Failure Reason: hit max job timeout
fail | 7562556 | 2024-02-16 15:33:25 | 2024-02-16 23:19:42 | 2024-02-17 01:41:10 | 2:21:28 | 2:02:55 | 0:18:33 | smithi | main | ubuntu | 22.04 | rados/upgrade/parallel/{0-random-distro$/{ubuntu_22.04} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: "2024-02-17T00:04:46.757736+0000 mon.a (mon.0) 113 : cluster 4 [ERR] OSD_UNREACHABLE: 8 osds(s) are not reachable" in cluster log
pass | 7562557 | 2024-02-16 15:33:26 | 2024-02-17 01:53:14 | 2024-02-17 03:01:49 | 1:08:35 | 0:57:03 | 0:11:32 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/radosbench} | 2 | |
dead | 7562558 | 2024-02-16 15:33:27 | 2024-02-17 01:54:24 | 2024-02-17 14:12:55 | 12:18:31 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |||
Failure Reason: hit max job timeout
pass | 7562559 | 2024-02-16 15:33:28 | 2024-02-17 02:03:26 | 2024-02-17 02:35:10 | 0:31:44 | 0:19:34 | 0:12:10 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/redirect} | 2 | |
dead | 7562560 | 2024-02-16 15:33:28 | 2024-02-17 02:05:16 | 2024-02-17 14:16:24 | 12:11:08 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read} | 2 | |||
Failure Reason: hit max job timeout
pass | 7562561 | 2024-02-16 15:33:29 | 2024-02-17 02:07:07 | 2024-02-17 02:42:35 | 0:35:28 | 0:25:58 | 0:09:30 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
dead | 7562562 | 2024-02-16 15:33:30 | 2024-02-17 02:07:37 | 2024-02-17 14:15:14 | 12:07:37 | smithi | main | centos | 9.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} | 1 | |||
Failure Reason: hit max job timeout
dead | 7562563 | 2024-02-16 15:33:31 | 2024-02-17 02:07:38 | 2024-02-17 14:15:26 | 12:07:48 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_cephadm_timeout} | 1 | |||
Failure Reason: hit max job timeout
fail | 7562564 | 2024-02-16 15:33:32 | 2024-02-17 02:07:38 | 2024-02-17 04:06:35 | 1:58:57 | 1:47:04 | 0:11:53 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
pass | 7562565 | 2024-02-16 15:33:32 | 2024-02-17 02:08:49 | 2024-02-17 04:10:11 | 2:01:22 | 1:50:41 | 0:10:41 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/thrash-backfill-full mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 2 | |
dead | 7562566 | 2024-02-16 15:33:33 | 2024-02-17 02:10:19 | 2024-02-17 14:23:28 | 12:13:09 | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |||
Failure Reason: hit max job timeout
pass | 7562567 | 2024-02-16 15:33:34 | 2024-02-17 02:13:50 | 2024-02-17 02:36:16 | 0:22:26 | 0:10:48 | 0:11:38 | smithi | main | centos | 9.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} tasks/readwrite} | 2 | |
dead | 7562568 | 2024-02-16 15:33:35 | 2024-02-17 02:15:21 | 2024-02-17 14:24:31 | 12:09:10 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_extra_daemon_features} | 2 | |||
Failure Reason: hit max job timeout
dead | 7562569 | 2024-02-16 15:33:36 | 2024-02-17 02:16:21 | 2024-02-17 14:25:57 | 12:09:36 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/set-chunks-read} | 2 | |||
Failure Reason: hit max job timeout
fail | 7562570 | 2024-02-16 15:33:36 | 2024-02-17 02:17:22 | 2024-02-17 09:23:10 | 7:05:48 | 6:50:43 | 0:15:05 | smithi | main | centos | 9.stream | rados/singleton/{all/thrash-eio mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest}} | 2 | |
Failure Reason: reached maximum tries (3651) after waiting for 21900 seconds
dead | 7562571 | 2024-02-16 15:33:37 | 2024-02-17 02:21:42 | 2024-02-17 14:43:10 | 12:21:28 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} | 3 | |||
Failure Reason: hit max job timeout
dead | 7562572 | 2024-02-16 15:33:38 | 2024-02-17 04:22:29 | 2024-02-17 16:48:40 | 12:26:11 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |||
Failure Reason: hit max job timeout
dead | 7562573 | 2024-02-16 15:33:39 | 2024-02-17 04:38:41 | 2024-02-17 16:48:09 | 12:09:28 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3 | |||
Failure Reason: hit max job timeout
pass | 7562574 | 2024-02-16 15:33:39 | 2024-02-17 04:39:01 | 2024-02-17 05:08:41 | 0:29:40 | 0:19:23 | 0:10:17 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/small-objects-balanced} | 2 | |
dead | 7562575 | 2024-02-16 15:33:40 | 2024-02-17 04:39:42 | 2024-02-17 05:01:17 | 0:21:35 | smithi | main | centos | 9.stream | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest}} | 2 | |||
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds
dead | 7562576 | 2024-02-16 15:33:41 | 2024-02-17 04:41:53 | 2024-02-17 16:50:52 | 12:08:59 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |||
Failure Reason: hit max job timeout
dead | 7562577 | 2024-02-16 15:33:42 | 2024-02-17 04:41:53 | 2024-02-17 16:52:14 | 12:10:21 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |||
Failure Reason: hit max job timeout
pass | 7562578 | 2024-02-16 15:33:43 | 2024-02-17 04:44:03 | 2024-02-17 05:17:40 | 0:33:37 | 0:23:05 | 0:10:32 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/small-objects-localized} | 2 | |
dead | 7562579 | 2024-02-16 15:33:43 | 2024-02-17 04:44:04 | 2024-02-17 16:55:44 | 12:11:40 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason: hit max job timeout
dead | 7562580 | 2024-02-16 15:33:44 | 2024-02-17 04:47:15 | 2024-02-17 16:55:27 | 12:08:12 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |||
Failure Reason: hit max job timeout
dead | 7562581 | 2024-02-16 15:33:45 | 2024-02-17 04:47:15 | 2024-02-17 16:57:46 | 12:10:31 | smithi | main | ubuntu | 22.04 | rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |||
Failure Reason: hit max job timeout
dead | 7562582 | 2024-02-16 15:33:46 | 2024-02-17 04:49:06 | 2024-02-17 16:58:32 | 12:09:26 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} | 3 | |||
Failure Reason: hit max job timeout
dead | 7562583 | 2024-02-16 15:33:46 | 2024-02-17 04:49:26 | 2024-02-17 16:58:07 | 12:08:41 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |||
Failure Reason: hit max job timeout
dead | 7562584 | 2024-02-16 15:33:47 | 2024-02-17 04:49:56 | 2024-02-17 17:00:39 | 12:10:43 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/snaps-few-objects-localized} | 2 | |||
Failure Reason: hit max job timeout
fail | 7562585 | 2024-02-16 15:33:48 | 2024-02-17 04:52:17 | 2024-02-17 08:11:43 | 3:19:26 | 3:09:28 | 0:09:58 | smithi | main | centos | 9.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi102 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2fad629fd3f128a0e41fe61243031e0d2b287b9d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
dead | 7562586 | 2024-02-16 15:33:49 | 2024-02-17 04:52:37 | 2024-02-17 17:00:47 | 12:08:10 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_orch_cli} | 1 | |||
Failure Reason: hit max job timeout
dead | 7562587 | 2024-02-16 15:33:50 | 2024-02-17 04:52:38 | 2024-02-17 17:02:40 | 12:10:02 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/snaps-few-objects} | 2 | |||
Failure Reason: hit max job timeout
pass | 7562588 | 2024-02-16 15:33:50 | 2024-02-17 04:54:08 | 2024-02-17 05:22:54 | 0:28:46 | 0:19:26 | 0:09:20 | smithi | main | centos | 9.stream | rados/singleton/{all/backfill-toofull mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest}} | 1 | |
dead | 7562589 | 2024-02-16 15:33:51 | 2024-02-17 04:54:09 | 2024-02-17 05:20:51 | 0:26:42 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 | |||
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds
dead | 7562590 | 2024-02-16 15:33:52 | 2024-02-17 05:01:30 | 2024-02-17 17:11:01 | 12:09:31 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} | 3 | |||
Failure Reason: hit max job timeout
pass | 7562591 | 2024-02-16 15:33:53 | 2024-02-17 05:02:11 | 2024-02-17 05:30:37 | 0:28:26 | 0:16:08 | 0:12:18 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
pass | 7562592 | 2024-02-16 15:33:54 | 2024-02-17 05:05:11 | 2024-02-17 05:25:35 | 0:20:24 | 0:10:11 | 0:10:13 | smithi | main | centos | 9.stream | rados/singleton/{all/deduptool mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest}} | 1 | |
dead | 7562593 | 2024-02-16 15:33:54 | 2024-02-17 05:05:22 | 2024-02-17 17:14:26 | 12:09:04 | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |||
Failure Reason: hit max job timeout
dead | 7562594 | 2024-02-16 15:33:55 | 2024-02-17 05:05:32 | 2024-02-17 17:16:00 | 12:10:28 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} | 2 | |||
Failure Reason: hit max job timeout
dead | 7562595 | 2024-02-16 15:33:56 | 2024-02-17 05:07:53 | 2024-02-17 17:16:11 | 12:08:18 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |||
Failure Reason: hit max job timeout
fail | 7562596 | 2024-02-16 15:33:57 | 2024-02-17 05:07:53 | 2024-02-17 06:09:09 | 1:01:16 | 0:52:18 | 0:08:58 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
dead | 7562597 | 2024-02-16 15:33:58 | 2024-02-17 05:08:04 | 2024-02-17 17:17:44 | 12:09:40 | smithi | main | ubuntu | 22.04 | rados/cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_22.04} 1-start 2-services/rgw 3-final} | 1 | |||
Failure Reason: hit max job timeout
pass | 7562598 | 2024-02-16 15:33:58 | 2024-02-17 05:08:44 | 2024-02-17 06:05:58 | 0:57:14 | 0:46:56 | 0:10:18 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/cache-agent-big} | 2 | |
fail | 7562599 | 2024-02-16 15:33:59 | 2024-02-17 05:09:04 | 2024-02-17 05:29:14 | 0:20:10 | 0:10:02 | 0:10:08 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
fail | 7562600 | 2024-02-16 15:34:00 | 2024-02-17 05:09:35 | 2024-02-17 10:25:08 | 5:15:33 | 5:05:38 | 0:09:55 | smithi | main | centos | 9.stream | rados/standalone/{supported-random-distro$/{centos_latest} workloads/osd} | 1 | |
Failure Reason: Command failed (workunit test osd/osd-reuse-id.sh) on smithi165 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2fad629fd3f128a0e41fe61243031e0d2b287b9d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-reuse-id.sh'
dead | 7562601 | 2024-02-16 15:34:01 | 2024-02-17 05:09:35 | 2024-02-17 17:25:14 | 12:15:39 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_rgw_multisite} | 3 | |||
Failure Reason: hit max job timeout
dead | 7562602 | 2024-02-16 15:34:02 | 2024-02-17 05:16:06 | 2024-02-17 17:26:34 | 12:10:28 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 | |||
Failure Reason: hit max job timeout