Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7668423 2024-04-22 18:21:18 2024-04-22 18:21:58 2024-04-22 19:10:41 0:48:43 0:40:00 0:08:43 smithi main rhel 8.6 rados/standalone/{supported-random-distro$/{rhel_8} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-dump.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6a7528e4aecd36b18c4b41cee6012e9f92aa7ab0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-dump.sh'

pass 7668424 2024-04-22 18:21:19 2024-04-22 18:21:59 2024-04-22 19:03:26 0:41:27 0:31:59 0:09:28 smithi main centos 9.stream rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest}} 1
fail 7668425 2024-04-22 18:21:20 2024-04-22 18:21:59 2024-04-22 19:02:04 0:40:05 0:30:03 0:10:02 smithi main centos 8.stream rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi090 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6a7528e4aecd36b18c4b41cee6012e9f92aa7ab0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 7668426 2024-04-22 18:21:21 2024-04-22 18:21:59 2024-04-22 18:44:14 0:22:15 0:12:43 0:09:32 smithi main centos 9.stream rados/rest/{mgr-restful supported-random-distro$/{centos_latest}} 1
pass 7668427 2024-04-22 18:21:22 2024-04-22 18:22:00 2024-04-22 18:44:51 0:22:51 0:16:16 0:06:35 smithi main centos 9.stream rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{centos_latest}} 1
pass 7668428 2024-04-22 18:21:24 2024-04-22 18:22:00 2024-04-22 18:43:44 0:21:44 0:12:56 0:08:48 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} 2
fail 7668429 2024-04-22 18:21:25 2024-04-22 18:22:00 2024-04-22 18:45:00 0:23:00 0:11:15 0:11:45 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi178 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6a7528e4aecd36b18c4b41cee6012e9f92aa7ab0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7668430 2024-04-22 18:21:27 2024-04-22 18:22:01 2024-04-22 19:01:03 0:39:02 0:31:46 0:07:16 smithi main centos 9.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} thrashers/one workloads/snaps-few-objects} 2
pass 7668431 2024-04-22 18:21:28 2024-04-22 18:22:01 2024-04-22 18:41:56 0:19:55 0:11:56 0:07:59 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/redirect} 2
pass 7668432 2024-04-22 18:21:29 2024-04-22 18:22:01 2024-04-22 18:48:51 0:26:50 0:19:34 0:07:16 smithi main centos 9.stream rados/standalone/{supported-random-distro$/{centos_latest} workloads/crush} 1
fail 7668433 2024-04-22 18:21:30 2024-04-22 18:22:02 2024-04-22 19:23:42 1:01:40 0:52:34 0:09:06 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

"2024-04-22T18:54:53.261384+0000 mon.a (mon.0) 851 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7668434 2024-04-22 18:21:31 2024-04-22 18:22:02 2024-04-22 18:50:26 0:28:24 0:20:28 0:07:56 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/small-objects-balanced} 2
pass 7668435 2024-04-22 18:21:32 2024-04-22 18:22:02 2024-04-22 19:49:00 1:26:58 1:20:43 0:06:15 smithi main centos 9.stream rados/standalone/{supported-random-distro$/{centos_latest} workloads/erasure-code} 1
pass 7668436 2024-04-22 18:21:33 2024-04-22 18:22:03 2024-04-22 18:56:01 0:33:58 0:25:15 0:08:43 smithi main centos 9.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7668437 2024-04-22 18:21:34 2024-04-22 18:22:03 2024-04-22 19:01:28 0:39:25 0:28:00 0:11:25 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi069 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6a7528e4aecd36b18c4b41cee6012e9f92aa7ab0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 7668438 2024-04-22 18:21:36 2024-04-22 18:22:03 2024-04-22 18:48:48 0:26:45 0:17:39 0:09:06 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

['Data could not be sent to remote host "smithi006.front.sepia.ceph.com". Make sure this host can be reached over ssh: ssh: connect to host smithi006.front.sepia.ceph.com port 22: No route to host']

fail 7668439 2024-04-22 18:21:37 2024-04-22 18:22:04 2024-04-22 18:30:53 0:08:49 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/small-objects} 2
Failure Reason:

Stale jobs detected, aborting.

fail 7668440 2024-04-22 18:21:38 2024-04-22 18:22:04 2024-04-22 18:43:33 0:21:29 0:10:09 0:11:20 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

pass 7668441 2024-04-22 18:21:39 2024-04-22 18:22:05 2024-04-22 18:56:53 0:34:48 0:28:49 0:05:59 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/snaps-few-objects-balanced} 2
pass 7668442 2024-04-22 18:21:40 2024-04-22 18:22:05 2024-04-22 18:38:50 0:16:45 0:11:20 0:05:25 smithi main centos 9.stream rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{centos_latest}} 1
fail 7668443 2024-04-22 18:21:42 2024-04-22 18:22:05 2024-04-22 19:00:54 0:38:49 0:29:21 0:09:28 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

saw valgrind issues

pass 7668444 2024-04-22 18:21:43 2024-04-22 18:22:06 2024-04-22 18:58:55 0:36:49 0:29:25 0:07:24 smithi main centos 9.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} tasks/rados_workunit_loadgen_big} 2
pass 7668445 2024-04-22 18:21:44 2024-04-22 18:22:06 2024-04-22 18:46:48 0:24:42 0:17:35 0:07:07 smithi main centos 9.stream rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{centos_latest}} 1
pass 7668446 2024-04-22 18:21:45 2024-04-22 18:22:06 2024-04-22 18:43:47 0:21:41 0:13:33 0:08:08 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} thrashers/none thrashosds-health workloads/admin_socket_objecter_requests} 2
pass 7668447 2024-04-22 18:21:48 2024-04-22 18:22:07 2024-04-22 18:55:48 0:33:41 0:25:29 0:08:12 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} 2
fail 7668448 2024-04-22 18:21:49 2024-04-22 18:22:07 2024-04-22 19:05:47 0:43:40 0:34:20 0:09:20 smithi main rhel 8.6 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi146 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6a7528e4aecd36b18c4b41cee6012e9f92aa7ab0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

dead 7668449 2024-04-22 18:21:50 2024-04-22 18:22:07 2024-04-23 06:31:52 12:09:45 smithi main centos 9.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-stupid} supported-random-distro$/{centos_latest} tasks/progress} 2
Failure Reason:

hit max job timeout

pass 7668450 2024-04-22 18:21:51 2024-04-22 18:22:08 2024-04-22 18:55:26 0:33:18 0:25:07 0:08:11 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/cache-pool-snaps-readproxy} 2
pass 7668451 2024-04-22 18:21:52 2024-04-22 18:22:08 2024-04-22 18:37:43 0:15:35 0:09:14 0:06:21 smithi main centos 9.stream rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest}} 1
fail 7668452 2024-04-22 18:21:53 2024-04-22 18:22:08 2024-04-22 22:27:57 4:05:49 3:56:57 0:08:52 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Command failed (workunit test cls/test_cls_lock.sh) on smithi064 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6a7528e4aecd36b18c4b41cee6012e9f92aa7ab0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_lock.sh'

fail 7668453 2024-04-22 18:21:54 2024-04-22 18:22:09 2024-04-22 19:21:54 0:59:45 0:48:44 0:11:01 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/mon} 1
Failure Reason:

Command failed (workunit test mon/osd-erasure-code-profile.sh) on smithi165 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6a7528e4aecd36b18c4b41cee6012e9f92aa7ab0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/osd-erasure-code-profile.sh'

pass 7668454 2024-04-22 18:21:55 2024-04-22 18:22:09 2024-04-22 18:43:39 0:21:30 0:13:01 0:08:29 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/dedup-io-mixed} 2
pass 7668455 2024-04-22 18:21:56 2024-04-22 18:22:10 2024-04-22 18:51:49 0:29:39 0:20:13 0:09:26 smithi main centos 9.stream rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest}} 2
pass 7668456 2024-04-22 18:21:57 2024-04-22 18:22:10 2024-04-22 18:48:22 0:26:12 0:17:34 0:08:38 smithi main centos 9.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} tasks/repair_test} 2
pass 7668457 2024-04-22 18:21:57 2024-04-22 18:22:10 2024-04-22 18:51:52 0:29:42 0:19:46 0:09:56 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} 1
pass 7668458 2024-04-22 18:21:58 2024-04-22 18:44:17 0:12:17 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep} 2
fail 7668459 2024-04-22 18:21:59 2024-04-22 18:22:11 2024-04-22 18:59:46 0:37:35 0:28:10 0:09:25 smithi main ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_20.04}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6a7528e4aecd36b18c4b41cee6012e9f92aa7ab0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 7668460 2024-04-22 18:22:00 2024-04-22 18:22:11 2024-04-22 18:39:34 0:17:23 0:07:59 0:09:24 smithi main centos 9.stream rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} tasks/mon_clock_with_skews} 2
pass 7668461 2024-04-22 18:22:01 2024-04-22 18:59:24 0:27:19 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/rados_api_tests} 2
pass 7668462 2024-04-22 18:22:03 2024-04-22 18:22:12 2024-04-22 18:41:48 0:19:36 0:10:46 0:08:50 smithi main centos 9.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
fail 7668463 2024-04-22 18:22:05 2024-04-22 18:22:13 2024-04-22 19:42:53 1:20:40 1:11:55 0:08:45 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

['Data could not be sent to remote host "smithi139.front.sepia.ceph.com". Make sure this host can be reached over ssh: ssh: connect to host smithi139.front.sepia.ceph.com port 22: Connection refused', 'Data could not be sent to remote host "smithi139.front.sepia.ceph.com". Make sure this host can be reached over ssh: ssh: connect to host smithi139.front.sepia.ceph.com port 22: No route to host', 'Failed to lookup user adamlyon: "getpwnam(): name not found: \'adamlyon\'"', 'Failed to lookup user agombar: "getpwnam(): name not found: \'agombar\'"', 'Failed to lookup user alimasa: "getpwnam(): name not found: \'alimasa\'"', 'Failed to lookup user amk: "getpwnam(): name not found: \'amk\'"', 'Failed to lookup user avilez: "getpwnam(): name not found: \'avilez\'"', 'Failed to lookup user avivcaro-ev-vm: "getpwnam(): name not found: \'avivcaro-ev-vm\'"', 'Failed to lookup user baum: "getpwnam(): name not found: \'baum\'"', 'Failed to lookup user bcs-ceph: "getpwnam(): name not found: \'bcs-ceph\'"', 'Failed to lookup user bdavidov: "getpwnam(): name not found: \'bdavidov\'"', 'Failed to lookup user billscales: "getpwnam(): name not found: \'billscales\'"', 'Failed to lookup user chunmei: "getpwnam(): name not found: \'chunmei\'"', 'Failed to lookup user connorfa: "getpwnam(): name not found: \'connorfa\'"', 'Failed to lookup user devansh: "getpwnam(): name not found: \'devansh\'"', 'Failed to lookup user devanshsingh: "getpwnam(): name not found: \'devanshsingh\'"', 'Failed to lookup user gbregman: "getpwnam(): name not found: \'gbregman\'"', 'Failed to lookup user jamiepryde: "getpwnam(): name not found: \'jamiepryde\'"', 'Failed to lookup user jjperez: "getpwnam(): name not found: \'jjperez\'"', 'Failed to lookup user kalpesh-mac: "getpwnam(): name not found: \'kalpesh-mac\'"', 'Failed to lookup user lechernin: "getpwnam(): name not found: \'lechernin\'"', 'Failed to lookup user leonidus: "getpwnam(): name not found: \'leonidus\'"', 'Failed to lookup user levh: "getpwnam(): name not found: \'levh\'"', 'Failed to lookup user liangmingyuan: "getpwnam(): name not found: \'liangmingyuan\'"', 'Failed to lookup user linuxkidd: "getpwnam(): name not found: \'linuxkidd\'"', 'Failed to lookup user ljsanders: "getpwnam(): name not found: \'ljsanders\'"', 'Failed to lookup user luorixin: "getpwnam(): name not found: \'luorixin\'"', 'Failed to lookup user medhavi: "getpwnam(): name not found: \'medhavi\'"', 'Failed to lookup user mer: "getpwnam(): name not found: \'mer\'"', 'Failed to lookup user nicowang: "getpwnam(): name not found: \'nicowang\'"', 'Failed to lookup user pegonzal: "getpwnam(): name not found: \'pegonzal\'"', 'Failed to lookup user pkalever: "getpwnam(): name not found: \'pkalever\'"', 'Failed to lookup user poonc3: "getpwnam(): name not found: \'poonc3\'"', 'Failed to lookup user rakshithakamath: "getpwnam(): name not found: \'rakshithakamath\'"', 'Failed to lookup user roysahar: "getpwnam(): name not found: \'roysahar\'"', 'Failed to lookup user ssancheti: "getpwnam(): name not found: \'ssancheti\'"', 'Failed to lookup user ssharon: "getpwnam(): name not found: \'ssharon\'"', 'Failed to lookup user submishr: "getpwnam(): name not found: \'submishr\'"', 'Failed to lookup user suriarte: "getpwnam(): name not found: \'suriarte\'"', 'Failed to lookup user svelar: "getpwnam(): name not found: \'svelar\'"', 'Failed to lookup user tonay: "getpwnam(): name not found: \'tonay\'"', 'Failed to lookup user travisn: "getpwnam(): name not found: \'travisn\'"', 'Failed to lookup user vedansh: "getpwnam(): name not found: \'vedansh\'"', 'Failed to lookup user wmcroberts: "getpwnam(): name not found: \'wmcroberts\'"', 'Failed to lookup user yajgaonk: "getpwnam(): name not found: \'yajgaonk\'"', 'ubuntu@smithi139.front.sepia.ceph.com: Permission denied (publickey,password,keyboard-interactive).']

dead 7668464 2024-04-22 18:22:06 2024-04-22 18:22:13 2024-04-22 18:32:50 0:10:37 0:01:10 0:09:27 smithi main centos 9.stream rados/singleton/{all/divergent_priors2 mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

["useradd: user 'kalpesh-mac' already exists"]

pass 7668465 2024-04-22 18:22:07 2024-04-22 18:22:13 2024-04-22 20:56:47 2:34:34 2:28:57 0:05:37 smithi main centos 9.stream rados/standalone/{supported-random-distro$/{centos_latest} workloads/osd} 1
pass 7668466 2024-04-22 18:22:08 2024-04-22 18:22:14 2024-04-22 18:43:48 0:21:34 0:13:07 0:08:27 smithi main centos 9.stream rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{centos_latest}} 1
pass 7668467 2024-04-22 18:22:09 2024-04-22 18:22:14 2024-04-22 18:41:45 0:19:31 0:12:03 0:07:28 smithi main centos 9.stream rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} tasks/mon_recovery} 2
pass 7668468 2024-04-22 18:22:10 2024-04-22 18:49:35 0:17:33 smithi main centos 9.stream rados/singleton/{all/ec-inconsistent-hinfo mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest}} 1
pass 7668469 2024-04-22 18:22:11 2024-04-22 18:47:52 0:18:16 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/redirect_set_object} 2
fail 7668470 2024-04-22 18:22:12 2024-04-22 18:44:42 0:12:17 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi008 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6a7528e4aecd36b18c4b41cee6012e9f92aa7ab0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7668471 2024-04-22 18:22:13 2024-04-22 18:57:46 0:26:40 smithi main centos 9.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest} tasks/rados_api_tests} 2
pass 7668472 2024-04-22 18:22:14 2024-04-22 18:22:16 2024-04-22 18:48:15 0:25:59 0:18:05 0:07:54 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} 2